source_id | question | response | metadata
---|---|---|---|
154,919 | The other day I tried installing opencv-git from the AUR with makepkg on Arch Linux. Of course it pulls from the git repository as the name indicates. This pulls 1Gb. I am reading about making a shallow clone with git . When I look at the PKGBUILD file, using grep git PKGBUILD , I see: pkgname="opencv-git"makedepends=('git' 'cmake' 'python2-numpy' 'mesa' 'eigen2')provides=("${pkgname%-git}")conflicts=("${pkgname%-git}")source=("${pkgname%-git}::git+http://github.com/Itseez/opencv.git" cd "${srcdir}/${pkgname%-git}" git describe --long | sed -r 's/([^-]*-g)/r\1/;s/-/./g' cd "${srcdir}/${pkgname%-git}" cd "${srcdir}/${pkgname%-git}" cd "${srcdir}/${pkgname%-git}" install -Dm644 "LICENSE" "${pkgdir}/usr/share/licenses/${pkgname%-git}/LICENSE" Is there a way to modify the recipe or the makepkg command to pull only a shallow clone (the latest version of the source is what I want) and not the full repository to save space and bandwidth? Reading man 5 PKGBUILD doesn't provide the insight I'm looking for. Also looked quickly through the makepkg and pacman manpages - can't seem to find how to do that. | This can be done by using a custom dlagent . I do not really understand Arch packaging or how the dlagents work, so I only have a hack answer, but it gets the job done. The idea is to modify the PKGBUILD to use a custom download agent. I modified the source "${pkgname%-git}::git+http://github.com/Itseez/opencv.git" into "${pkgname%-git}::mygit://opencv.git" and then defined a new dlagent called mygit which does a shallow clone by makepkg DLAGENTS='mygit::/usr/bin/git clone --depth 1 http://github.com/Itseez/opencv.git' Also notice that the repository that is being cloned is hard-coded into the command. Again, this can probably be avoided. Finally, the download location is not what the PKGBUILD expects. To work around this, I simply move the repository after downloading it. I do this by adding mv "${srcdir}/../mygit:/opencv.git" "${srcdir}/../${pkgname%-git}" at the beginning of the pkgver function. I think the cleaner solution would be to figure out what the git+http dlagent is doing and redfine that temporarily. This should avoid all the hack aspects of the solution. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/154919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
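A consolidated sketch of the custom download-agent workaround described in entry 154,919 above. The agent name (`mygit`) and the hard-coded repository URL are taken directly from that answer; both are placeholders to adapt to the PKGBUILD at hand.

```sh
# In the PKGBUILD, point the source entry at the custom agent:
#   source=("${pkgname%-git}::mygit://opencv.git")

# Invoke makepkg with a DLAGENTS override that performs a shallow clone
makepkg DLAGENTS='mygit::/usr/bin/git clone --depth 1 http://github.com/Itseez/opencv.git'

# At the top of pkgver(), move the clone to where the PKGBUILD expects it:
#   mv "${srcdir}/../mygit:/opencv.git" "${srcdir}/../${pkgname%-git}"
```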
154,955 | I'm trying to change the value of $JAVA_HOME & I just can't seem to find in which file is it being set currently. I can't remember where did I set it the last time. Already tried How to determine where an environment variable came from? but I need(ed) a list of files where the variable can be set. | You didn't specify a shell. So, I will assume bash . The next issue is: did you set it for your user only or system-wide? If you set it for your user only, then run: grep JAVA_HOME ~/.bash_profile ~/.bash_login ~/.profile ~/.bashrc If you set it system-wide, then it may vary with distribution but try: grep JAVA_HOME /etc/environment /etc/bash.bashrc /etc/profile.d/* /etc/profile If the above give no answer, you can cast a wider net: grep -r JAVA_HOME /etcgrep -r JAVA_HOME ~/ See also the suggestions in How to determine where an environment variable came from . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/154955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62605/"
]
} |
155,012 | Zombie processes are created in Unix/Linux systems.We can remove them via the kill command. But is there any in-built clean-up mechanism in Linux to handle zombie processes? | Zombie processes are already dead. You cannot kill them. The kill command or system call has no effect on a zombie process. (You can make a zombie go away with kill , but you have to shoot the parent, not the zombie, as we'll see in a minute.) A zombie process is not really a process, it's only an entry in the process table. There are no other resources associated with the zombie process: it doesn't have any memory or any running code, it doesn't hold any files open, etc. When a process dies, the last thing to go, after all other resources are cleaned up, is the entry in the process table. This entry is kept around, forming a zombie, to allow the parent process to track the exit status of the child. The parent reads the exit status by calling one of the wait family of syscalls; at this point, the zombie disappears. Calling wait is said to reap the child, extending the metaphor of a zombie being dead but in some way still not fully processed into the afterlife. The parent can also indicate that it doesn't care (by ignoring the SIGCHLD signal, or by calling sigaction with the SA_NOCLDWAIT flag), in which case the entry in the process table is deleted immediately when the child dies. Thus a zombie only exists when a process has died and its parent hasn't called wait yet. This state can only last as long as the parent is still running. If the parent dies before the child or dies without reading the child's status, the zombie's parent process is set to the process with PID 1, which is init . One of the jobs of init is to call wait in a loop and thus reap any zombie process left behind by its parent. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180404/"
]
} |
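To observe the behaviour described in entry 155,012 in practice, a couple of assumed-safe commands: list any zombie (Z state) entries together with the parent that has not reaped them, and remember that signalling the zombie itself does nothing.

```sh
# List zombie entries and their parent PIDs
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'

# If the parent is misbehaving, terminating it lets init adopt and reap the zombie:
# kill <ppid-from-the-listing-above>
```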
155,017 | A fork() system call clones a child process from the running process. The two processes are identical except for their PID. Naturally, if the processes are just reading from their heaps rather than writing to it, copying the heap would be a huge waste of memory. Is the entire process heap copied? Is it optimized in a way that only writing triggers a heap copy? | The entirety of fork() is implemented using mmap / copy on write. This not only affects the heap, but also shared libraries, stack, BSS areas. Which, incidentally, means that fork is a extremely lightweight operation, until the resulting 2 processes (parent and child) actually start writing to memory ranges. This feature is a major contributor to the lethality of fork-bombs - you end up with way too many processes before kernel gets overloaded with page replication and differentiation. You'll be hard-pressed to find in a modern OS an example of an operation where kernel performs a hard copy (device drivers being the exception) - it's just far, far easier and more efficient to employ VM functionality. Even execve() is essentially "please mmap the binary / ld.so / whatnot, followed by execute" - and the VM handles the actual loading of the process to RAM and execution. Local uninitialized variables end up being mmaped from a 'zero-page' - special read-only copy-on-write page containing zeroes, local initialized variables end up being mmaped (copy-on-write, again) from the binary file itself, etc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1079/"
]
} |
155,023 | As far as I know no update on a linux machine requires a restart. Windows however needs to restart several times for a update to complete which is understandable because the hardware might be in use at the moment and a restart ensures that no software uses the driver. But how can an OS (or linux as an example) handle such a situation where you want to update a driver but it is currently in use? | Updates on Linux require a restart if they affect the kernel. Drivers are part of the kernel. It's sometimes possible to upgrade a driver on Linux without rebooting, but that doesn't happen often: the peripheral controller by the driver can't be in use during the update, and the new driver version has to be compatible with the running kernel. Upgrading a driver to a running system where the peripheral controlled by the driver is in use requires that the old driver leaves the peripheral in a state that the new driver is able to start with. The old and new driver must manage the handover of connections from clients as well. This is doable but difficult; how difficult depends on what the driver is driving. For example, a filesystem update without unmounting the filesystem requires the handover of some very complex data structures but is easy to cope with on the hardware side (just flush the buffers before the update, and start over with an empty cache). Conversely, an input driver only has to transmit a list of open descriptors or the like on the client side, but the hardware side requires that the new driver know what state the peripheral is in and must be managed carefully not to lose events. Updating a driver on a live system is a common practice during development on operating systems where drivers can be dynamically loaded and unloaded, but usually not while the peripheral is in use. Updating a driver in production is not commonly done on OSes like Linux and Windows; I suppose it does get done on high-availability systems that I'm not familiar with. Some drivers are not in the kernel (for example FUSE filesystems). This makes it easy to update them without updating the rest of the system, but it still requires that the driver not be in use (e.g. instances of the FUSE filesystem have to be unmounted and mounted again to make use of the new driver version). Linux does have mechanisms to upgrade the kernel without restarting: Ksplice , Kpatch , KGraft . This is technically difficult as the updated version has to be compatible with the old version to a large extent; in particular, its data structures have to have exactly the same binary layout. A few distributions offer this service for security updates. These features are not (yet?) available in the mainline Linux kernel. On a mainline Linux kernel, a driver can be updated only if it's loaded as a module and if the module can be unloaded and the new module is compatible with the running kernel. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
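As a concrete illustration of the last paragraph of entry 155,023: a driver built as a module, and not currently in use, can be swapped without a reboot. The module name below is purely hypothetical.

```sh
lsmod | grep example_driver      # is it loaded, and is its use count zero?
sudo modprobe -r example_driver  # unload (fails if the device is in use)
sudo modprobe example_driver     # load the updated module
```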
155,026 | I would like to read a bit of the source code to try and understand how it all fits together, but cannot see where to start. Which file in the Linux source code is the main file used to compile the kernel? I was half expecting to find a kernel/main.c , but there are many files under kernel/ and I cannot see which one is the main one? Is it kernel/sys.c ? | The handover from the bootloader to the kernel necessarily involves some architecture-specific considerations such as memory addresses and register use. Consequently, the place to look is in the architecture-specific directories ( arch/* ). Furthermore, handover from the bootloader involves a precise register usage protocol which is likely to be implemented in assembler. The kernel even has different entry points for different bootloaders on some architectures. For example, on x86, the entry point is in arch/x86/boot/header.S (I don't know of other entry points, but I'm not sure that there aren't any). The real entry point is the _start label at offset 512 in the binary . The 512 bytes before that can be used to make a master boot record for an IBM PC-compatible BIOS (in the old days, a kernel could boot that way, but now this part only displays an error message). The _start label starts some fairly long processing, in real mode , first in assembly and then in main.c . At some point the initialization code switches to protected mode . I think this is the point where decompression happens if the kernel is compressed ; then control reaches startup_32 or startup_64 in arch/x86/kernel/head_*.S depending on whether this is a 32-bit or 64-bit kernel. After more assembly, i386_start_kernel in head32.c or x86_64_start_kernel in head64.c is invoked. Finally, the architecture-independent start_kernel function in init/main.c is invoked. start_kernel is where the kernel starts preparing for the real world. When it starts, there is only a single CPU and some memory (with virtual memory, the MMU is already switched on at that point). The code there sets up memory mappins, initializes all the subsystems, sets up interrupt handlers, starts the scheduler so that threads can be created, starts interacting with peripherals, etc. The kernel has other entry points than the bootloader: entry points when enabling a core on a multi-core CPU, interrupt handlers, system call handlers, fault handlers, … | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78410/"
]
} |
155,033 | I have read the other questions about its functionality -- that fork bombs operate both by consuming CPU time in the process of forking, and by saturating the operating system's process table. A basic implementation of a fork bomb is an infinite loop that repeatedly launches the same processes. But I really want to know: what's the story of this command? why this :(){ :|:& };: and not another one? | It is not something new. It dates way back to 1970's when it got introduced. Quoting from here , One of the earliest accounts of a fork bomb was at the University ofWashington on a Burroughs 5500 in 1969. It is described as a "hack"named RABBITS that would make two copies of itself when it was run,and these two would generate two more copies each, and the copieswould continue making more copies until memory was full, causing asystem crash. Q The Misanthrope wrote a Rabbit-like program usingBASIC in 1972 while in grade 7. Jerry Leichter of Yale Universitydescribes hearing of programs similar to rabbits or fork bombs at hisAlma Mater of Princeton and says given his graduation date, they mustbe from 1973 or earlier. An account dating to 1974 describes a programactually named "rabbit" running on an IBM 360 system at a large firmand a young employee who was discharged for running it. So the :(){ :|:& };: is just a way of implementing the fork bomb in shell. If you take some other programming language, you could implement in those languages as well. For instance, in python you could implement the fork bomb as, import os while True: os.fork() More ways of implementing the fork bomb in different languages can be found from the wikipedia link. If you want to understand the syntax, it is pretty simple. A normal function in shell would look like, foo(){ # function code goes here} The fork() bomb is defined as follows: :(){ :|:&};: :|: - Next it will call itself using programming technique called recursion and pipes the output to another call of the function : . The worst part is function get called two times to bomb your system. & - Puts the function call in the background so child cannot die at all and start eating system resources. ; - Terminate the function definition : - Call (run) the function aka set the fork() bomb. Here is more human readable code: bomb() { bomb | bomb &}; bomb References http://www.cyberciti.biz/faq/understanding-bash-fork-bomb/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79615/"
]
} |
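Not part of the original answer, but a commonly used safeguard against the fork bomb discussed in entry 155,033: cap the number of processes a user may create, so the recursion hits its limit instead of exhausting the process table. The numeric limit is only an example.

```sh
ulimit -u 200          # per-session cap on user processes (bash built-in)

# Persistent cap via /etc/security/limits.conf (illustrative line):
# someuser  hard  nproc  200
```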
155,035 | I have a partition scheme like this: //home//home/user/files/ When logged in as user , I cannot save any files at /home/user/files/ . The only solution I can find is to create a new subdirectory as root, e.g. /home/user/files/files/ , then as root, change the ownership to user . That would allow me to put files into /home/user/files/files/ . How can I create partitions within /home/user/ , but which are still usable, as if they were simply folders, by the ordinary user account? Preferably, the user account should not have the ability to accidentally delete this partition. | It is not something new. It dates way back to 1970's when it got introduced. Quoting from here , One of the earliest accounts of a fork bomb was at the University ofWashington on a Burroughs 5500 in 1969. It is described as a "hack"named RABBITS that would make two copies of itself when it was run,and these two would generate two more copies each, and the copieswould continue making more copies until memory was full, causing asystem crash. Q The Misanthrope wrote a Rabbit-like program usingBASIC in 1972 while in grade 7. Jerry Leichter of Yale Universitydescribes hearing of programs similar to rabbits or fork bombs at hisAlma Mater of Princeton and says given his graduation date, they mustbe from 1973 or earlier. An account dating to 1974 describes a programactually named "rabbit" running on an IBM 360 system at a large firmand a young employee who was discharged for running it. So the :(){ :|:& };: is just a way of implementing the fork bomb in shell. If you take some other programming language, you could implement in those languages as well. For instance, in python you could implement the fork bomb as, import os while True: os.fork() More ways of implementing the fork bomb in different languages can be found from the wikipedia link. If you want to understand the syntax, it is pretty simple. A normal function in shell would look like, foo(){ # function code goes here} The fork() bomb is defined as follows: :(){ :|:&};: :|: - Next it will call itself using programming technique called recursion and pipes the output to another call of the function : . The worst part is function get called two times to bomb your system. & - Puts the function call in the background so child cannot die at all and start eating system resources. ; - Terminate the function definition : - Call (run) the function aka set the fork() bomb. Here is more human readable code: bomb() { bomb | bomb &}; bomb References http://www.cyberciti.biz/faq/understanding-bash-fork-bomb/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
155,046 | I have a script which runs rsync with a Git working directory as destination. I want the script to have different behavior depending on if the working directory is clean (no changes to commit), or not. For instance, if the output of git status is as below, I want the script to exit: git statusAlready up-to-date.# On branch masternothing to commit (working directory clean)Everything up-to-date If the directory is not clean then I would like it to execute some more commands. How can I check for output like the above in a shell script? | Parsing the output of git status is a bad idea because the output is intended to be human readable, not machine-readable. There's no guarantee that the output will remain the same in future versions of Git or in differently configured environments. UVVs comment is on the right track, but unfortunately the return code of git status doesn't change when there are uncommitted changes. It does, however, provide the --porcelain option, which causes the output of git status --porcelain to be formatted in an easy-to-parse format for scripts, and will remain stable across Git versions and regardless of user configuration. We can use empty output of git status --porcelain as an indicator that there are no changes to be committed: if [ -z "$(git status --porcelain)" ]; then # Working directory cleanelse # Uncommitted changesfi If we do not care about untracked files in the working directory, we can use the --untracked-files=no option to disregard those: if [ -z "$(git status --untracked-files=no --porcelain)" ]; then # Working directory clean excluding untracked fileselse # Uncommitted changes in tracked filesfi To make this more robust against conditions which actually cause git status to fail without output to stdout , we can refine the check to: if output=$(git status --porcelain) && [ -z "$output" ]; then # Working directory cleanelse # Uncommitted changesfi It's also worth noting that, although git status does not give meaningful exit code when the working directory is unclean, git diff provides the --exit-code option, which makes it behave similar to the diff utility, that is, exiting with status 1 when there were differences and 0 when none were found. Using this, we can check for unstaged changes with: git diff --exit-code and staged, but not committed changes with: git diff --cached --exit-code Although git diff can report on untracked files in submodules via appropriate arguments to --ignore-submodules , unfortunately it seems that there is no way to have it report on untracked files in the actual working directory. If untracked files in the working directory are relevant, git status --porcelain is probably the best bet. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/155046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78937/"
]
} |
155,053 | Is there any way to automate Linux server configuration? I'm working on setting up a couple of new build servers, as well as an FTP server, and would like to automate as much of the process as possible. The reason for this is that the setup and configuration of these servers needs to be done in an easily repeatable way. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. Essentially, all the servers need is to install the OS, as well as a handful of packages. There's nothing overly complicated about the setups. So, is there a way to automate this process (or at least some amount of it)? EDIT: Also, say I use Kickstart, is there a way to remove the default Ubuntu repositories, and just install the packages from a collection of .deb files we have locally (preferably through apt, rather than dpkg)? | Yes! This is a big deal, and incredibly common. And there are two basic approaches. One way is simply with scripted installs, as for example used in Fedora, RHEL, or CentOS's kickstart. Check this out in the Fedora install guide: Kickstart Installations . For your simple case, this may be sufficient. (Take this as an example; there are similar systems for other distros, but since I work on Fedora that's what I'm familiar with.) The other approach is to use configuration management . This is a big topic, but look into Puppet, Chef, Ansible, cfengine, Salt, and others. In this case, you might use a very basic generic kickstart to provision a minimal machine, and the config management tool to bring it into its proper role. As your needs and infrastructure grow, this becomes incredibly important. Using config management for all your changes means that you can recreate not just the initial install, but the evolved state of the system as you introduce the inevitable tweaks and fixes caused by interacting with the real world. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. You are absolutely on the right track — this is the bedrock principle of professional systems administration. We even have a meme image for it: It's often moderately harder to set up initially, and there can be a big learning curve for some of the more advanced systems, but it pays for itself forever. Even if you have only a handful of systems, think about how much you want to work at recreating them in the event of catastrophe in the middle of the night, or when you're on vacation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83835/"
]
} |
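The EDIT in question 155,053 (installing a collection of local .deb files through apt rather than dpkg) is not covered by the answer. One common approach, sketched here under the assumption that dpkg-dev is installed and the .deb files live in /srv/local-debs, is a flat local repository:

```sh
cd /srv/local-debs
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz   # build the package index

# Then point apt at it, e.g. in /etc/apt/sources.list.d/local.list:
#   deb [trusted=yes] file:/srv/local-debs ./
sudo apt-get update
```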
155,064 | I'm using cygwin tail to follow busy java web application logs, on a windows server, generating roughly 16Gb worth of logs a day. I"m constrained to 10MB log sizes, so the files roll very often. The command line I'm using is : /usr/bin/tail -n 1000 -F //applicationserver/logs/logs.log It survives 2-4 rolls of the file, about 4-6 minutes, but eventually, usually reports: "File truncated" and then echos the name of the file every second. the file is busily filling and rotating. Am I exceeding the capability of tail? | Yes! This is a big deal, and incredibly common. And there are two basic approaches. One way is simply with scripted installs, as for example used in Fedora, RHEL, or CentOS's kickstart. Check this out in the Fedora install guide: Kickstart Installations . For your simple case, this may be sufficient. (Take this as an example; there are similar systems for other distros, but since I work on Fedora that's what I'm familiar with.) The other approach is to use configuration management . This is a big topic, but look into Puppet, Chef, Ansible, cfengine, Salt, and others. In this case, you might use a very basic generic kickstart to provision a minimal machine, and the config management tool to bring it into its proper role. As your needs and infrastructure grow, this becomes incredibly important. Using config management for all your changes means that you can recreate not just the initial install, but the evolved state of the system as you introduce the inevitable tweaks and fixes caused by interacting with the real world. We figured that automating as much of this process as possible would make it easiest to repeat as needed in the future. You are absolutely on the right track — this is the bedrock principle of professional systems administration. We even have a meme image for it: It's often moderately harder to set up initially, and there can be a big learning curve for some of the more advanced systems, but it pays for itself forever. Even if you have only a handful of systems, think about how much you want to work at recreating them in the event of catastrophe in the middle of the night, or when you're on vacation. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83834/"
]
} |
155,085 | I need to make a fairly simple bash script to pull bytes one at a time from a binary file, send it out a serial port, and then wait for a byte to come back before I send the next one. This is effectively for an EEPROM programmer, where an off the shelf solution won't work. I'm mostly stuck at pulling the bytes out of the file into a variable, beyond that I know I can echo the values down the serial port and read them back with dd. (Is there a better way to do that also?) Thanks for the help! | Use dd or xxd (part of Vim), for example to read one byte ( -l ) at offset 100 ( -s ) from binary file try: xxd -p -l1 -s 100 file.bin to use hex offsets, in Bash you can use this syntax $((16#64)) , e.g.: echo $((16#$(xxd -p -l1 -s $((16#FC)) file.bin))) which reads byte at offset FC and print it in decimal format. Alternatively use dd , like: dd if=file.bin seek=$((16#FC)) bs=1 count=5 status=none which will dump raw data of 5 bytes at hex offset FC . Then you can assign it into variable, however it won't work when your data has NULL bytes , therefore either you can skip them ( xxd -a ) or as workaround you can store them in plain hexadecimal format. For example: Read 10 bytes at offset 10 into variable which contain bytes in hex format: hex=$(xxd -p -l 10 -s 10 file.bin) Then write them into file or device: xxd -r -p > out.bin <<<$hex Here are few useful functions in Bash: set -e# Read single decimal value at given offset from the file.read_value() { file="$1" offset="$2" [ -n "$file" ] [ -n "$offset" ] echo $((16#$(xxd -p -l1 -s $offset "$file")))}# Read bytes in hex format at given offset from the file.read_data() { file="$1" offset="$2" count="$3:-1" xxd -p -l $count -s $offset "$file"} Sample usage: read_value file.bin $((16#FC)) # Read 0xFC offset from file.bin. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78910/"
]
} |
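A rough sketch of the questioner's send-one-byte-and-wait loop from entry 155,085, assembled from the xxd/dd pieces in the answer. The serial device path, and the assumption that the programmer echoes exactly one byte back per write, are hypothetical; serial-port setup with stty is omitted.

```sh
#!/bin/bash
port=/dev/ttyUSB0              # assumed device
file=file.bin
size=$(stat -c %s "$file")     # GNU stat

for (( i = 0; i < size; i++ )); do
    byte=$(xxd -p -l1 -s "$i" "$file")    # read one byte as hex
    xxd -r -p <<< "$byte" > "$port"       # convert back to raw and send it
    dd if="$port" bs=1 count=1 status=none > /dev/null   # wait for the reply byte
done
```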
155,086 | I have just installed Vimperator 3.8.2 on Firefox 32 (running Fedora 20). The status bar at the bottom of the window does not display any information in Normal mode , even though when I do :set status? I get status=input,location,bookmark,history,tabcount,position. Nor do I get the little highlighted Error indicator, or mode indicators when I enter Insert or Caret. Everything else seems to work just fine. I haven't modified any of the default values. What might be causing this? | You may have forgotten to enable the Liberator Statusline Toolbar. Right-click on places like Menu bar, Address bar, Toolbar, etc. A menu shows up in which you will find a line named "Liberator Statusline Toolbar". Click on it to enable it, and the Vimperator status bar magically appears again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83867/"
]
} |
155,105 | Fedora documentation says: 5.2. Advanced Searches If you do not know the name of the package, use the search or provides options. Alternatively, use wild cards or regular expressions with any yum search option to broaden the search critieria. Well, at first I thought that this is simply wrong or outdated, since no known syntax of regular expressions would work with yum search , but then I found this : yum search [cl-*] for example. But it does something otherworldly. It finds things which have neither "c" nor "l" letters in the name or description. (What I wanted is to find all packages, whose names would be matched by cl-.* regexp. I also found few people suggesting to pipe yum results to grep , which, of course, solves the problem. But, just on principle, I want to find out what did the thing in the square brackets do. What if yum actually can search by regexp? | searching with YUM You generally don't use any regular expressions (globs) when searching with yum search since the command search is already looking for sub-strings within the package names and their summaries. How do I know this? There's a message that tells you this when you use yum search . Name and summary matches only, use "search all" for everything. NOTE: The string [cl-*] is technically a glob in the Bash shell. So you generally look for fragments of strings that you want with search . The regular expressions come into play when you're looking for particular packages. These are the YUM commands like list and install . For example: $ yum list cl-* | expandLoaded plugins: fastestmirror, langpacks, refresh-packagekit, tsflagsLoading mirror speeds from cached hostfile * fedora: mirror.dmacc.net * rpmfusion-free: mirror.nexcess.net * rpmfusion-free-updates: mirror.nexcess.net * rpmfusion-nonfree: mirror.nexcess.net * rpmfusion-nonfree-updates: mirror.nexcess.net * updates: mirror.dmacc.netAvailable Packagescl-asdf.noarch 20101028-5.fc19 fedora cl-clx.noarch 0.7.4-4.3 home_zhonghuarencl-ppcre.noarch 2.0.3-3.3 home_zhonghuaren The only caveat you have to be careful with regexes/globs, is if there are files within your shell that are named such that they too would match cl-* . In those cases your shell will expand the regex/glob prior to it being presented to YUM. So instead of running yum list cl-* you'll be running the command yum list cl-file , if there's a file matching the regex/glob cl-* . 
For example: $ ls cl-filecl-file$ yum list cl-*Loaded plugins: fastestmirror, langpacks, refresh-packagekit, tsflagsLoading mirror speeds from cached hostfile * fedora: mirror.steadfast.net * rpmfusion-free: mirror.nexcess.net * rpmfusion-free-updates: mirror.nexcess.net * rpmfusion-nonfree: mirror.nexcess.net * rpmfusion-nonfree-updates: mirror.nexcess.net * updates: mirror.steadfast.netError: No matching Packages to list You can guard against this happening by escaping the wildcard like so: $ yum list cl-\* | expandLoaded plugins: fastestmirror, langpacks, refresh-packagekit, tsflagsLoading mirror speeds from cached hostfile * fedora: mirror.dmacc.net * rpmfusion-free: mirror.nexcess.net * rpmfusion-free-updates: mirror.nexcess.net * rpmfusion-nonfree: mirror.nexcess.net * rpmfusion-nonfree-updates: mirror.nexcess.net * updates: mirror.dmacc.netAvailable Packagescl-asdf.noarch 20101028-5.fc19 fedora cl-clx.noarch 0.7.4-4.3 home_zhonghuarencl-ppcre.noarch 2.0.3-3.3 home_zhonghuaren So what about the brackets I suspect you have files in your local directory that are getting matched when you used [cl-*] as an argument to yum search . These files after being matched by the shell, were passed to the yum search command where matches where then found. For example: $ ls cl-filecl-file$ yum search cl-*Loaded plugins: fastestmirror, langpacks, refresh-packagekit, tsflagsLoading mirror speeds from cached hostfile * fedora: mirror.dmacc.net * rpmfusion-free: mirror.nexcess.net * rpmfusion-free-updates: mirror.nexcess.net * rpmfusion-nonfree: mirror.nexcess.net * rpmfusion-nonfree-updates: mirror.nexcess.net * updates: mirror.dmacc.net======================================================================= N/S matched: cl-file =======================================================================opencl-filesystem.noarch : OpenCL filesystem layout Name and summary matches only, use "search all" for everything. NOTE: The match above was matched against my file's name, cl-file , and not the cl-* as I had intended. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43851/"
]
} |
155,111 | I want to dynamically assign values to variables using eval . The following dummy example works: var_name="fruit"var_value="orange"eval $(echo $var_name=$var_value)echo $fruitorange However, when the variable value contains spaces, eval returns an error, even if $var_value is put between double quotes: var_name="fruit"var_value="blue orange"eval $(echo $var_name="$var_value")bash: orange : command not found Is there any way to circumvent this? | Don't use eval; use declare : $ declare "$var_name=$var_value"$ echo "fruit: >$fruit<"fruit: >blue orange< | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83889/"
]
} |
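To round out the answer in entry 155,111, two related bash-native idioms: printf -v as an alternative way to assign through a variable name, and ${!name} to read the value back. The variable names are just the example values from the question.

```sh
var_name="fruit"
var_value="blue orange"

declare "$var_name=$var_value"            # as in the answer
printf -v "$var_name" '%s' "$var_value"   # alternative assignment, no eval needed
echo "${!var_name}"                       # indirect read: prints "blue orange"
```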
155,139 | In my /etc/passwd file, I can see that the www-data user used by Apache, as well as all sorts of system users, have either /usr/sbin/nologin or /bin/false as their login shell. For example, here is a selection of lines: daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologinbin:x:2:2:bin:/bin:/usr/sbin/nologinsys:x:3:3:sys:/dev:/usr/sbin/nologingames:x:5:60:games:/usr/games:/usr/sbin/nologinwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologinsyslog:x:101:104::/home/syslog:/bin/falsewhoopsie:x:109:116::/nonexistent:/bin/falsemark:x:1000:1000:mark,,,:/home/mark:/bin/bash Consequently, if I try to swap to any of these users (which I'd sometimes like to do to check my understanding of their permissions, and which there are probably other at least halfway sane reasons for), I fail: mark@lunchbox:~$ sudo su www-data This account is currently not available.mark@lunchbox:~$ sudo su syslog mark@lunchbox:~$ Of course, it's not much of an inconvenience, because I can still launch a shell for them via a method like this: mark@lunchbox:~$ sudo -u www-data /bin/bash www-data@lunchbox:~$ But that just leaves me wondering what purpose is served by denying these users a login shell. Looking around the internet for an explanation, many people claim that this has something to do with security, and everybody seems to agree that it would be in some way a bad idea to change the login shells of these users. Here's a collection of quotes: Setting the Apache user's shell to something non-interactive is generally good security practice (really all service users who don't have to log in interactively should have their shell set to something that's non-interactive). -- https://serverfault.com/a/559315/147556 the shell for the user www-data is set to /usr/sbin/nologin, and it's set for a very good reason. -- https://askubuntu.com/a/486661/119754 [system accounts] can be security holes , especially if they have a shell enabled: Bad bin:x:1:1:bin:/bin:/bin/sh Good bin:x:1:1:bin:/bin:/sbin/nologin -- https://unix.stackexchange.com/a/78996/29001 For security reasons I created a user account with no login shell for running the Tomcat server: # groupadd tomcat# useradd -g tomcat -s /usr/sbin/nologin -m -d /home/tomcat tomcat -- http://www.puschitz.com/InstallingTomcat.html While these posts are in unanimous agreement that not giving system users real login shells is good for security, not one of them justifies this claim, and I can't find an explanation of it anywhere. What attack are we trying to protect ourselves against by not giving these users real login shells? | If you take a look at the nologin man page you'll see the following description. excerpt nologin displays a message that an account is not available and exits non-zero. It is intended as a replacement shell field to deny login access to an account. If the file /etc/nologin.txt exists, nologin displays its contents to the user instead of the default message. The exit code returned by nologin is always 1. So the actual intent of nologin is just so that when a user attempts to login with an account that makes use of it in the /etc/passwd is so that they're presented with a user friendly message, and that any scripts/commands that attempt to make use of this login receive the exit code of 1. Security With respect to security, you'll typically see either /sbin/nologin or sometimes /bin/false , among other things in that field. They both serve the same purpose, but /sbin/nologin is probably the preferred method. 
In any case they're limiting direct access to a shell as this particular user account. Why is this considered valuable with respect to security? The "why" is hard to fully describe, but the value in limiting a user's account in this manner, is that it thwarts direct access via the login application when you attempt to gain access using said user account. Using either nologin or /bin/false accomplishes this. Limiting your system's attack surface is a common technique in the security world, whether disabling services on specific ports, or limiting the nature of the logins on one's systems. Still there are other rationalizations for using nologin . For example, scp will no longer work with a user account that does not designate an actual shell, as described in this ServerFault Q&A titled: What is the difference between /sbin/nologin and /bin/false? . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29001/"
]
} |
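For reference, the commands typically used to inspect or set an account's shell in the way entry 155,139 describes; the account names here are hypothetical.

```sh
getent passwd www-data                                 # show an account's current shell
sudo usermod -s /usr/sbin/nologin someservice          # give an existing account a no-login shell
sudo useradd -r -M -s /usr/sbin/nologin someservice    # create a system account with no login shell
```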
155,142 | I managed to shoot myself where it hurts (really bad) by reformatting a partition that held valuable data. Of course it was not intentional, but it happened. However, I managed to use testdisk and photorec to recover most of the data. So now I have all that data distributed over almost 25,000 directories. Most of the files are .txt files, while the rest are image files. There are more than 300 .txt files in each directory. I can grep or use find to extract certain strings from the .txt files and output them to a file. For example, here's a line that I've used to verify that my data is in the recovered files: find ./recup*/ -name '*.txt' -print | xargs grep -i "searchPattern" I can output "searchPattern" to a file, but that just gives me that pattern. Here's what I really would like to accomplish: Go through all the files and look for a specific string. If that string is found in a file, cat ALL the contents of that file to an output file. If the pattern is found in more than one file, append the contents of subsequent files to that output file. Note that I just don't want to output the pattern I'm searching for, but ALL the contents of the file in which the patterns is found. I think this is doable, but I just don't know how to grab all the contents of a file after grepping a specific pattern from it. | If I understand your goal correctly, the following will do what you want: find ./recup*/ -name '*.txt' -exec grep -qi "searchPattern" {} \; -exec cat {} \; > outputfile.txt This will look for all *.txt files in ./recup*/ , test each one for searchPattern , if it matches it'll cat the file. The output of all cat ed files will be directed into outputfile.txt . Repeat for each pattern and output file. If you have a very large number of directories matching ./recup* , you might end up with a argument list too long error . The simple way around this is to do something like this instead: find ./ -mindepth 2 -path './recup*.txt' -exec grep -qi "searchPattern" {} \; -exec cat {} \; > outputfile.txt This will match the full path. So ./recup01234/foo/bar.txt will be matched. The -mindepth 2 is so that it won't match ./recup.txt , or ./recup0.txt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83902/"
]
} |
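A possible alternative to the two -exec passes in entry 155,142, letting grep itself produce the list of matching files and handling awkward file names with null separators. This assumes GNU grep and xargs.

```sh
grep -rilZ --include='*.txt' "searchPattern" ./recup*/ \
    | xargs -0 cat > outputfile.txt
```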
155,150 | NOTE: This is related to my question: " Apache 2.4 won't reload, any problem with my configuration? ". I'm trying to test a local site, locally. As I understand Apache 2 (and perhaps Apache as well) has something called VirtualHost . My little bit of understanding tells me that virtualhosting is a way where one server/IP address can serve multiple domains. https://en.wikipedia.org/wiki/Virtual_hosting . Anyway, I'm getting the following error when running Apache 2's configtest to see where I'm failing. I'm running Apache 2.4.10-1, and it seems there are a lot of changes which have happened between Apache 2.2 and Apache 2.4 which I'm not aware of. $ sudo apache2ctl configtest[sudo] password for shirish:AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this messageSyntax OK This is the /etc/hosts file: $ cat /etc/hosts127.0.0.1 localhost127.0.1.1 debian mini I also see an empty /etc/hosts.conf file. Perhaps the data in /etc/hosts needs to be copied to /etc/hosts.conf for the server to take cognizance? My hostname: $ hostnamedebian This is the configuration file of the site: $ cat /etc/apache2/sites-available/minidebconfindia.conf<VirtualHost mini:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html/in2014.mini/website <Directory /> Options +FollowSymLinks +Includes Require all granted </Directory> <Directory /var/www/html/in2014.mini/website/> Options +Indexes +FollowSymLinks +MultiViews +Includes Require all granted </Directory></VirtualHost> I also read about binding to addresses and ports , but I haven't understood that well for multiple reasons. It doesn't give/share an example as to in which file those lines need to be put and what will come before and after. An example would have been much better. I did that and restarted the server, but I still get the same error. ~$ sudo apache2ctl configtestAH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this messageSyntax OK It seems there are three configuration files in Debian that I need to know and understand. /etc/apache2$ ls *.confapache2.conf ports.conf and /etc/apache2/conf.d$ ls *.confhttpd.conf Apparently, apache2.conf IS the global configuration file while the httpd.conf is a user-configuration file. There is also ports.conf. Both apache2.conf and ports.conf are at the defaults except I have changed the loglevel of Apache from warn to debug . I tried one another thing: $ sudo apache2ctl -SAH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this messageVirtualHost configuration:127.0.1.1:80 debian (/etc/apache2/sites-enabled/minidebconfindia.conf:1)*:80 127.0.1.1 (/etc/apache2/sites-enabled/000-default.conf:1)ServerRoot: "/etc/apache2"Main DocumentRoot: "/var/www/html"Main ErrorLog: "/var/log/apache2/error.log"Mutex watchdog-callback: using_defaultsMutex default: dir="/var/lock/apache2" mechanism=fcntlMutex mpm-accept: using_defaultsPidFile: "/var/run/apache2/apache2.pid"Define: DUMP_VHOSTSDefine: DUMP_RUN_CFGUser: name="www-data" id=33Group: name="www-data" id=33 Maybe somebody has more insight. | The file to edit: /etc/apache2/apache2.conf Command to edit file: sudo nano /etc/apache2/apache2.conf For a global servername you can put it at the top of the file (outside of virtual host tags). 
The first line looks like: ServerName myserver.mydomain.com Then save and test the configuration with the following command: apachectl configtest You should get: Syntax OK Then you can restart the server and check you don't get the error message: sudo service apache2 restart | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155150",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
155,154 | I have been trying to harden my Debian system by stopping and disabling the 20 or so unnecessary services listening by default. One of them is called "minissdpd". Apparently this provides "discovery" services to plug-and-play devices, whatever that means. Seems kind of crazy to me that something intended to help local peripherals needs to be listening to Chinese hackers on the other side of the world. What does discovery services even mean? I looked in some vulnerability database, and sure enough minissdpd had a whole slew of vulnerabilities listed. How can they have this enabled in the default distribution? Seriously, its like install Debian, get hacked. Anyway, my main question is: now that I have disabled this service, is something bad going to happen (like plug something in and it won't work)? | I would say there's no issue with disabling this service, assuming you have no need for UPnP (Universal Plug and Play) . This is a service which allows for devices to "auto discover" one another on your network and advertise services that they can either provide or are looking for to consume. http://miniupnp.free.fr/minissdpd.html excerpt I first coded MiniSSDPd as a small daemon used by MiniUPnPc (a UPnP control point for IGD devices) to speed up device discoveries. MiniSSDPd keep memory of all UPnP devices that announced themselves on the network through SSDP NOTIFY packets. More recently, some MiniUPnPd (an implementation of a UPnP IDG) users complained about the non-possibility to run MiniUPnPd and MediaTomb (an implementation of a UPnP Media Server) on the same computer because these two piece of software needed to open UDP port 1900. I then added to MiniSSDPd the ability to handle all SSDP traffic recieved on a computer via the multicast group 239.255.255.250:1900. You may be interested in reading this forum thread about all this. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47542/"
]
} |
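On the practical side of question 155,154 ("is something bad going to happen?"): nothing should break unless UPnP discovery is actually needed, and the service can simply be kept from starting and re-enabled later. The commands below assume the sysvinit-style tooling of Debian at that time.

```sh
sudo service minissdpd stop
sudo update-rc.d minissdpd disable     # keep it from starting at boot

# To undo later:
# sudo update-rc.d minissdpd enable && sudo service minissdpd start
```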
155,184 | I have got one folder for log with 7 folders in it. Those seven folders too have subfolders in them and those subfolders have subfolders too. I want to delete all the files older than 15 days in all folders including subfolders without touching folder structrure, that means only files. mahesh@inl00720:/var/dtpdev/tmp/ > lsA1 A2 A3 A4 A5 A6 A7mahesh@inl00720:/var/dtpdev/tmp/A1/ > lsB1 B2 B3 B4 file1.txt file2.csv | You could start by saying find /var/dtpdev/tmp/ -type f -mtime +15 .This will find all files older than 15 days and print their names.Optionally, you can specify -print at the end of the command,but that is the default action.It is advisable to run the above command first,to see what files are selected. After you verify that the find command is listing the filesthat you want to delete (and no others),you can add an "action" to delete the files.The typical actions to do this are: -exec rm -f {} \; (or, equivalently, -exec rm -f {} ';' ) This will run rm -f on each file; e.g., rm -f /var/dtpdev/tmp/A1/B1; rm -f /var/dtpdev/tmp/A1/B2; rm -f /var/dtpdev/tmp/A1/B3; … -exec rm -f {} + This will run rm -f on many files at once; e.g., rm -f /var/dtpdev/tmp/A1/B1 /var/dtpdev/tmp/A1/B2 /var/dtpdev/tmp/A1/B3 … so it may be slightly faster than option 1. (It may need to run rm -f a few times if you have thousands of files.) -delete This tells find itself to delete the files, without running rm .This may be infinitesimally faster than the -exec variants,but it will not work on all systems. So, if you use option 2, the whole command would be: find /var/dtpdev/tmp/ -type f -mtime +15 -exec rm -f {} + | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/155184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83926/"
]
} |
155,189 | I thought about whether this question is suitable for SE or not, I hope you agree it is. Some time ago I asked on SE how to find text in files and leave the file with only the matching lines that contain the text I was searching for. The question is here: How to find text in files and only keep the respective matching lines using the terminal on OS X? While the answer worked perfectly I now wonder, how come sed is so fast? In my use case, I had quite a lot of files which in total were about 30 Gb in size. The sed command ran in about 12 seconds which I never would have believed (working with a normal HDD). Within 12 seconds the command read through 30 Gb of text, truncating each file to only keep the respective lines I was filtering for. How does this work? (or: what is this sorcery?) The actual command was: find . -type f -exec sed -i'' '/\B\/foobar\b/!d' {} \; | You could start by saying find /var/dtpdev/tmp/ -type f -mtime +15 .This will find all files older than 15 days and print their names.Optionally, you can specify -print at the end of the command,but that is the default action.It is advisable to run the above command first,to see what files are selected. After you verify that the find command is listing the filesthat you want to delete (and no others),you can add an "action" to delete the files.The typical actions to do this are: -exec rm -f {} \; (or, equivalently, -exec rm -f {} ';' ) This will run rm -f on each file; e.g., rm -f /var/dtpdev/tmp/A1/B1; rm -f /var/dtpdev/tmp/A1/B2; rm -f /var/dtpdev/tmp/A1/B3; … -exec rm -f {} + This will run rm -f on many files at once; e.g., rm -f /var/dtpdev/tmp/A1/B1 /var/dtpdev/tmp/A1/B2 /var/dtpdev/tmp/A1/B3 … so it may be slightly faster than option 1. (It may need to run rm -f a few times if you have thousands of files.) -delete This tells find itself to delete the files, without running rm .This may be infinitesimally faster than the -exec variants,but it will not work on all systems. So, if you use option 2, the whole command would be: find /var/dtpdev/tmp/ -type f -mtime +15 -exec rm -f {} + | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/155189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62627/"
]
} |
155,274 | Similar to my last question: Open a text file and let it update itself ; is there a way I could do the same but for a folder instead? As I have a log folder, can I use tail -f with a folder? i.e. $ tail -f /tmp/logs/ I know that this won't work, but is there an alternative? I am using RHEL 5.10 | Yes there is an alternative, after a bit of research, I saw that you can use: $ watch "ls -l" You need to be in the folder you want to watch . Also, you can use tail -10 at the end: $ watch "ls -l | tail -10" The command types ls every 2 seconds and filters the output to the last 10 files. If you read the reference link, it has some great tips, also if you can't remember the above command, then you can add the following to your .bashrc file: alias taildir='watch "ls -l | tail -10"' So you can just type taildir instead of writing the full command out again. Reference: How to Tail A Directory . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20752/"
]
} |
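If the two-second polling of watch in entry 155,274 is too coarse, an event-driven alternative is available, assuming the inotify-tools package can be installed:

```sh
# Print a line for every create/modify/delete event in the log directory
inotifywait -m -e create,modify,delete /tmp/logs/
```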
155,331 | In a file containing lines like this one: # lorem ipsum blah variable I would like to remove # (comment) character in the same line that contains a specific string, in place. Is sed good for this? I'm struggling to get this conditional working. I have a "clumsy" way of doing this; I can find the matching line number with awk or sed and then use this number in a separate sed command, but I believe that this can be done in a much better way. | Use the string you are looking for as the selector for the lines to be operated upon: sed '/ipsum/s/#//g' /ipsum/ selects lines containing "ipsum" and only on these lines the command(s) that follow are executed. You can use braces to run more commands /ipsum/{s/#//g;s/@/-at-/g;} | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/155331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211711/"
]
} |
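Since question 155,331 asks for the change to be made in place, the same address/substitution pair can be combined with GNU sed's -i option (the file name is an assumption, and .bak keeps a backup copy):

```sh
sed -i.bak '/ipsum/s/#//g' config.txt
```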
155,347 | Is there a way to tell rsync to sync only source directories that are missing on a destination (whole directories, not just any missing file)? Considering S as the set of source directories, and D the set of destination directories, I want to: copy the entire directory s in S if s not in D . skip entirely the directory s (indep. of the files it contains) if s in D . Of course it's possible to list the directories on both sides, rsync the list from the destination, do some perl and generate a list of all directories that need to be copied, but it would be better if it were possible with just one invocation of rsync . For example, if source and destination are on the same server, one could do: src=/some/wheredst=/else/wherecd /tmp(cd $src; find . -type d) | sort > a(cd $dst; find . -type d) | sort > bcomm -23 a b | perl -e 'L: while(<>) {chomp; $p=$_; while ($p=~s,/[^/]+$,,) { next L if $n{$p}; } $n{$_}++; s,^./,/,; print "$_/***\n"}' > tocopyrsync -vmazn --include-from tocopy --exclude '*' $src/ $dst/ (without the -n if for real). PS: note that the perl "one-liner" above (more or less) strips subdirs of dirs that we decide to copy (as those subdirs are subsumed in the copy of their parent). That recipe ends up with the minimal set of dirs to copy. | Use the string you are looking for as the selector for the lines to be operated upon: sed '/ipsum/s/#//g' /ipsum/ selects lines containing "ipsum" and only on these lines the command(s) that follow are executed. You can use braces to run more commands /ipsum/{s/#//g;s/@/-at-/g;} | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/155347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31177/"
]
} |
155,352 | There's a pretty detailed set of instructions for how to take screenshots under debian online. The first paragraph suggests that debian supports an inbuilt screenshot facility: "Print Screen" key to take a screenshot of the whole screen.Alt+"Print Screen" key to take a screenshot of the current active window. The instructions imply that when I press PrtScn I should see this popup.However, I am running Debian Jessie and when I press PrtScn I just hear a camera shutter sound and don't see a popup. I tried pasting into GIMP ( edit -> paste ) but there was nothing on the clipboard. The fact that I'm hearing the shutter sound suggests that something is happening, but how do I get a copy of the image? | GNOME has an in-built screenshot feature for quite some time. Screenshots are stored in $HOME/Pictures , there is no dialog or any confirmation. You just hear the camera click when pressing the screenshot shortcut. By default, the shortcuts are: PrtScn - capture whole screen Alt + PrtScn - capture the current window Shift + PrtScn - the cursor changes to crosshairs, now you can select the region to be captured. Ctrl + Shift + PrtScn - Same as above but save to clipboard. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65634/"
]
} |
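The same captures from entry 155,352 can be taken from a terminal with the gnome-screenshot utility that typically ships with GNOME, which is handy when a delay or an explicit output path is wanted:

```sh
gnome-screenshot                        # whole screen, saved under ~/Pictures
gnome-screenshot -w -d 3                # current window, after a 3-second delay
gnome-screenshot -a -f /tmp/region.png  # select an area, save to a chosen file
```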
155,360 | I want to generate xorg.conf but for that X needs to not be running. How do I stop X, or start without it? I tried ctrl + alt + F2 but the X server is still running. I'm running Lubuntu 14.10. | I ended up doing the following: sudo service lightdm stop Then I had to press ctrl + alt + F2 and log in on the second virtual terminal, otherwise it would just sit there with a dark screen. To start it back up: sudo service lightdm start | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84023/"
]
} |
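With the display manager stopped as in entry 155,360, the xorg.conf the questioner wanted can usually be generated with the X server's own -configure mode. Run it from a text console as root; the output location is whatever the server reports, commonly /root/xorg.conf.new.

```sh
sudo service lightdm stop
sudo Xorg -configure                            # writes an xorg.conf.new skeleton
sudo cp /root/xorg.conf.new /etc/X11/xorg.conf  # install it (path is the usual default)
sudo service lightdm start
```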
155,384 | I want to split a file into chunks with 2 words each. $cat tmpword1 word2 word3 word4 word5 word6 word7$sed -e 's/word. word. /&\n/g' tmpword1 word2 word3 word4 word5 word6 word7$sed -e 's/word. \{2\}/&\n/g' tmpword1 word2 word3 word4 word5 word6 word7 I expected the last command to give the same result as the one before it. What is wrong? | Sorry, it seems I figured it out just after posting. It needs to be sed -e 's/\(word. \)\{2\}/&\n/g' tmp Apparently the parentheses are needed to make sed apply the \{2\} condition to the entire pattern word. and not just to the preceding space. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23301/"
]
} |
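A side note, not from the thread: with sed's extended-regex mode the backslashes become unnecessary, which some find easier to read (GNU sed accepts -E on recent versions, -r on older ones):
sed -E 's/(word. ){2}/&\n/g' tmp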
155,396 | I have installed Ubuntu 14.04 and Windows 8 as a dual boot system, now I want to install Linux Mint 17 and have a triple boot system. How can I have Mint 17 in my triple booting? How can I partitioning my HDD? I want to install that with USB and not using windows installer. | Just install normally. You can boot from your USB and follow the steps of the installer. You can either create a partition manually before installing or use the tools provided by the Mint installer to partition during the installation process. The only thing you really have to worry about is when the installer asks you whether to install a boot loader (GRUB). Then, you have 2 choices. Since you already have a GRUB installed by Ubuntu, you can either choose to not install a new one from mint and use Ubuntu's or you can install Mint's and overwrite Ubuntu's. Use the existing GRUB. When the installer asks you whether to install a boot loader, say no. Once the installation has finished, reboot and load Ubuntu (Mint will not appear in the list of available OSs). From Ubuntu, refresh GRUB so it will detect your new Mint installation: sudo update-grub Use Mint's GRUB. When the installer asks you whether to install a boot loader, say yes. Make sure you install it in the same location where your Ubuntu's GRUB was installed. This will probably be the master boot record (MBR) of your primary hard drive. Reboot and you should now have Mint's GRUB installed and will be able to choose Mint, Ubuntu or Windows. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155396",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
155,417 | Supose you present the following TUI in the shell: I need a set of libraries that can be used in the shell to do it. Being sure, ncurses has not been used, because it make a dependecy. Question: How to build the widget/window or another TUI in the shell? | Okay, I feel like you might be asking one of two questions, so I will try to answer both. What libraries can one use to create ncurses like interfaces for shell scripts? Actually, I would never have recommended ncurses directly for shell scripts anyway since it's really not meant to be used by shell languages. Instead, I would recommend dialog . Dialog is a shim library which sits between ncurses and the shell making its use much simpler. This would functionally give you two dependencies (one on ncurses and one on dialog ) which you seem to be against for some reason. Given that I don't want any external dependencies, how can I create my own ncurses-like TUI library? This is way outside the scope of *nix.SE. Creating a new TUI library is not going to be trivial (particularly if you're trying to create it in pure shell). There have been plenty of projects to attempt making new libraries to replace some of the use of ncurses (e.g., termbox is one of the more successful ones). If you intend to create your own library, you may want to look at the lower-level projects like ncurses and termbox and higher-level projects like dialog. Looking at their work might give you an idea of how to get started. A final recommendation: Dependencies on external projects, though they require some extra work (for integration and support), are not a bad thing. It means that you can focus only on the tool you want to make and leave the ground-work to those doing the lower infrastructure. Linux, in particular out of the *nix platforms, has a long history of dependency interaction. If your goal is to learn how the lower-level stuff is done, great go for it. If instead you're trying to make a tool that would benefit from such low level work, just depend on an external tool. You'll be happier and so will be everyone that looks at your code. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21911/"
]
} |
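As a quick illustration of the dialog approach recommended above, here is a minimal sketch (it assumes the dialog package is installed; the prompt text is invented):
#!/bin/bash
# ask a yes/no question in a text-mode window: arguments are text, height, width
dialog --title "Example" --yesno "Proceed with the task?" 7 40
answer=$?          # 0 = Yes, 1 = No, 255 = Esc
clear              # clean up the screen dialog leaves behind
if [ "$answer" -eq 0 ]; then
    echo "User chose Yes"
else
    echo "User chose No (or cancelled)"
fi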
155,423 | I would like to download a SQL tutorial here http://www.w3schools.com/sql/default.asp , as a book with all linked SQL related sections. Here is my command: wget -r -np -nH -p -k http://www.w3schools.com/sql/default.asp Under the downloaded sql directory, I get some asp files, which I don't know how to open in Chrome. Did I download the webpages correctly? What should I do? Thanks! | I would use an appropriate tool such as httrack and not waste my time in trying to coax this out of a tool such as wget or curl . Here's how you can download the URL that you're asking about; I did it myself and it even works just fine in Chrome!
$ httrack http://www.w3schools.com/sql/default.asp
Mirror launched on Sat, 13 Sep 2014 22:50:32 by HTTrack Website Copier/3.48-19 [XR&CO'2014]
mirroring http://www.w3schools.com/sql/default.asp with the wizard help..
Done.57: www.w3schools.com/sql/trysql_view.asp?x= (0 bytes) - OK
Thanks for using HTTrack!
After it's complete I'm left with the following directory structure:
$ ls -l
total 36
-rw-r--r--. 1 slm slm 4243 Sep 13 22:50 backblue.gif
-rw-rw-r--. 1 slm slm  181 Sep 13 22:51 cookies.txt
-rw-r--r--. 1 slm slm  828 Sep 13 22:50 fade.gif
drwx------. 2 slm slm 4096 Sep 13 22:51 hts-cache
-rw-rw-r--. 1 slm slm  736 Sep 13 22:51 hts-log.txt
-rw-r--r--. 1 slm slm 5057 Sep 13 22:50 index.html
drwxr-xr-x. 3 slm slm 4096 Sep 13 22:50 www.w3schools.com
To check things out simply navigate to the index.html file at the root level and you'll be greeted with the following page: Clicking the link will take you to your downloaded pages: And just for good measure, here I'm clicking one of the side links to demonstrate that it can navigate just fine. References: httrack website; httrack documentation | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
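For completeness, if one prefers to stay with wget, its --adjust-extension option addresses the specific complaint about .asp files not opening locally by saving such pages with an .html suffix (a sketch, not tested against this particular site):
wget -r -np -p -k -E http://www.w3schools.com/sql/default.asp
# -E / --adjust-extension   save text/html documents with an .html extension
# -k                        rewrite links so the local copy is browsable offline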
155,433 | I am not asking how to specify a keyserver per request; I know how to do that. I want a chosen alternative GPG keyserver to be used as the default from then on, whenever no specific keyserver is given in a request. Any ideas? | On Ubuntu a standard keyserver is set by default. You can override it by adding the entry keyserver NAME_OF_KEYSERVER to the file ~/.gnupg/gpg.conf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
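A minimal example of what that looks like in practice (the server name is only a placeholder; substitute whichever keyserver you prefer):
# ~/.gnupg/gpg.conf
keyserver hkp://keyserver.example.org
# a plain request then uses the default server (key ID is a placeholder too):
gpg --recv-keys 0xDEADBEEF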
155,438 | It's known that you can run x86_32 programs with an x86_64 kernel if it was compiled with support for that. But the dynamic linker doesn't provide any way to define a separate set of preload libraries for 32-bit programs, so every time you run such a program, if you have x86_64 preloads, you face this error message: ERROR: ld.so: object '… … …' from /etc/ld.so.preload cannot be preloaded (wrong ELF class: ELFCLASS64): ignored. If you instead put the same list of x86_32 libraries there to pre-load, you get it working, but then all pure x86_64 runs start complaining as well. The best possible way is to modify the dynamic loader to support pre-loading from separate files, obviously, but that is at least a lengthy process. Can you think of some clean workaround? For now I'm thinking about some multi-class-pre-load.so , which could load the needed files by itself, but, as far as I can see, there's no "multi-class" support in ELF. | In your ld.so.preload, you want to specify "$LIB" in your path rather than an explicit "lib" or "lib64". Thus, on a Redhat-style distro, "/usr/alternates/$LIB/libfoo.so" becomes "/usr/alternates/lib/libfoo.so" for a 32-bit process and "/usr/alternates/lib64/libfoo.so" for a 64-bit process. On a Debian-style distro, "/usr/alternates/$LIB/libfoo.so" becomes "/usr/alternates/lib/i386-linux-gnu/libfoo.so" and "/usr/alternates/x86_64-linux-gnu/libfoo.so" respectively. Your tree then needs to be populated with libraries for both architectures. See "rpath token expansion" in the ld.so(8) man page for more on this. Note however, that rather than preloading a library, if you're compiling the binaries whose loading you are attempting to modify, you may find it better to modify the paths by setting DT_RUNPATH on the link line (using the same "$LIB"-style paths), thus configuring the binary to prefer your library location over the system defaults. Alternately, as others have noted, you may edit an ELF file to set DT_RUNPATH on binaries you're not compiling. The following works for me on an x86_64 Centos 6.5 box:
cd /tmp
mkdir lib lib64
wget http://carrera.databits.net/~ksb/msrc/local/lib/snoopy/snoopy.h
wget http://carrera.databits.net/~ksb/msrc/local/lib/snoopy/snoopy.c
gcc -m64 -shared -fPIC -ldl snoopy.c -o /tmp/lib64/snoopy.so
gcc -m32 -shared -fPIC -ldl snoopy.c -o /tmp/lib/snoopy.so
cat > true.c <<EOF
int main(void){ return 0; }
EOF
gcc -m64 true.c -o true64
gcc -m32 true.c -o true32
sudo bash -c "echo '/tmp/\$LIB/snoopy.so' > /etc/ld.so.preload"
strace -fo /tmp/strace64.out /tmp/true64
strace -fo /tmp/strace32.out /tmp/true32
sudo rm /etc/ld.so.preload
In the strace output, strace64.out has:
open("/tmp/lib64/snoopy.so", O_RDONLY) = 3
while strace32.out has:
open("/tmp/lib/snoopy.so", O_RDONLY) = 3
This is with an ld.so.preload contents of:
/tmp/$LIB/snoopy.so | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6622/"
]
} |
155,548 | I'm trying to find a way to check inside a given directory for duplicate files (even with different names) and replace them with symlinks pointing to the first occurrence. I've tried with fdupes but it just lists those duplicates. That's the context: I'm customizing an icon theme to my liking, and I've found that many icons, even if they have different names and different locations inside their parent folder, and are used for different purposes, basically are just the same picture. Since applying the same modification twenty or thirty times is redundant when just one is really necessary, I want to keep just one image and symlink all the others. As an example, if I run fdupes -r ./ inside the directory testdir , it might return to me the following results: ./file1.png./file2.png./subdir1/anotherfile.png./subdir1/subdir2/yetanotherfile.png Given this output, I'd like to keep just the file file1.png , delete all the others and replace them with symlinks pointing to it, while maintaining all original file names. So file2.png will retain its name, but will become a link to file1.png instead of being a duplicate. Those links should not point to an absolute path, but should be relative to the parent testdir directory; i.e. yetanotherfile.png will be point to ../../file1.png , not to /home/testuser/.icons/testdir/file1.png I'm interested both in solutions that involve a GUI and CLI. It is not mandatory to use fdupes I've cited it because it is a tool that I know, but I'm open to solutions that use other tools as well. I'm pretty sure that a bash script to handle all of this should not be that difficult to create, but I'm not expert enough to find out how to write it myself. | If you don't fancy much scripting then I can recommend rdfind . Which will scan given directories for duplicate files and replace them with either hard or symbolic links. I've used it for deduplicating my Ruby gems directory with great success. It's available in Debian/Ubuntu. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37439/"
]
} |
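A short sketch of how rdfind is typically invoked for this task (the directory name is the one from the question; flags as documented in rdfind's manual):
rdfind -dryrun true -makesymlinks true ~/.icons/testdir   # preview what would happen
rdfind -makesymlinks true ~/.icons/testdir                # replace duplicates with symlinks
Note that the links rdfind creates may be absolute rather than relative; if relative links matter, the separate symlinks utility can convert them afterwards, e.g. symlinks -cr ~/.icons/testdir (again an assumption to verify on your system).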
155,551 | I'm having some problems with some bash scripts: errors and unexpected behaviors. I would like to investigate the causes of the problems so I can apply fixes. Is there a way I can turn on some kind of "debug mode" for bash, to get more information? | Start your bash script with bash -x ./script.sh or add set -x in your script to see debug output. Additionally, with bash 4.1 or later: If you want to write the debug output to a separate file, add this to your script:
exec 5> debug_output.txt
BASH_XTRACEFD="5"
See: https://stackoverflow.com/a/25593226/3776858 If you want to see line numbers, add this: PS4='$LINENO: ' If you have access to the logger command, then you can use it to write debug output via your syslog with timestamp, script name and line number:
#!/bin/bash
exec 5> >(logger -t $0)
BASH_XTRACEFD="5"
PS4='$LINENO: '
set -x
# Place your code here
You can use the -p option of the logger command to set an individual facility and level to write output via the local syslog to its own logfile. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/155551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41104/"
]
} |
155,557 | I have a Intel BayTrial Z3735D tablet which comes with a 32bit UEFI BIOS.After some search I've found that most linux distro don't come with a 32bit efi file. How can I insert one (or build a new ISO) According to https://wiki.archlinux.org/index.php/HCL/Firmwares/UEFI#Intel_Atom_SoC_Bay_Trail , this should be possible. | The Baytrail tablets run a 64b processor and a 32b EFI, for reasons best known to Intel. Grub2 (compiled for 32b EFI) will start a 64b UEFI operating system from a 32b EFI. Just like a 64b or 32b CPU processor calling into a traditional 16b BIOS, a thunk is needed in the operating system to marshal the arguments from 64b to 32b, change the processor mode, call the firmware, and then restore the processor mode and marshal the arguments from 32b to 64b. A x86-64 Linux kernel built with the option CONFIG_EFI_MIXED=y includes such a thunk to allow the x86-64 kernel to call to a i686 EFI. At this point in time there is no thunk for AMD's AtomBIOS, and thus the "radeon" module fails. This isn't an issue for the Baytrail tablets, as they use the Intel GPU. I would look at the Ubuntu operating system when considering Baytrail, as Fedora is yet to build their stock kernels with CONFIG_EFI_MIXED=y . Use a USB stick like Super Grub2 Disk to get to the Grub2 (32b) command line and then load and run the x86-64 installer kernel from the Grub2 command line. Once you have installed Ubuntu go back and install Grub2 32b bootloader to the EFI partition by hand and remove the Grub2 64b bootloader. The lack of advanced video driver is a showstopper for the MacBookPro2,2 as it uses the AMD Radeon X1600. Linux can boot using the EFI "UGA" driver (roughly equivalent to using the VESA option in BIOS-land). But the result is so much overhead that then fans run at full rate continually. Note that the "radeon" module copies the AtomBIOS contents into RAM, and thus a small change to the driver to allow the AtomBIOS to be loaded from disk is a path to solving this issue. Probably the best approach on a early Mac is to run a 32b operating system, although most of the popular distributions do not support EFI in their i686 32b builds. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84147/"
]
} |
155,638 | I was expecting brace expansion to work for any number of arguments. However, for n=1 I get the following: $ find models/nsf-projects-{7}*models/nsf-projects-{7}.rdf For n>1 expansion occurs as expected, e.g.: $ find models/nsf-projects-{6,7}*find: ‘models/nsf-projects-6*’: No such file or directoryfind: ‘models/nsf-projects-7*’: No such file or directory I have browsed the GNU manuals a bit, but have not found the requirement to >1 arguments stated explicitly anywhere. Q: Is n>1 indeed a requirement for brace expansion? If so, why is it useful? | Yes, n > 1 is an explicit requirement : A correctly-formed brace expansion must contain unquoted opening and closing braces, and at least one unquoted comma or a valid sequence expression. Any incorrectly formed brace expansion is left unchanged. As for the why - historical reasons, to some extent (though it was copied from csh originally, which has the other behaviour). There are commands that take {} as a literal argument ( find , parallel , and others with more complex arguments), and also other uses of {} in the shell language. Because brace expansions are only processed when written literally (and not from variables), there's really no motivation to support degenerate expansions, and some reasons not to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81398/"
]
} |
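To make the rule concrete, a few bash one-liners (behaviour as in bash; other shells may differ):
$ echo {7}
{7}
$ echo {6,7}
6 7
$ echo {7,}        # a comma makes it a valid brace expansion even with an empty element
7
$ echo {7..7}      # a sequence expression is also valid with a single value
7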
155,702 | I use the set -o vi setting in bash. The shortcut Alt+. (recalling the last argument of the previous command) doesn't work here as it does in emacs mode, so what is the equivalent for vi? | There are various methods to get the last argument of the last command: 1. inputrc: insert-last-argument & yank-last-arg Copy the following code into your ~/.inputrc file:
set editing-mode vi
# Insert Mode
set keymap vi-insert
"\e.": yank-last-arg
"\e_": yank-last-arg
You can use my inputrc file . And here is the inputrc manual for insert-last-argument and yank-last-arg 2. Word Designators: !!:$ & !$ For example:
┌─ (marslo@MarsloJiao ~) ->
└─ # echo arg1 arg2 arg3 arg4 arg5
arg1 arg2 arg3 arg4 arg5
┌─ (marslo@MarsloJiao ~) ->
└─ # echo !$
echo arg5
arg5
┌─ (marslo@MarsloJiao ~) ->
└─ # echo arg1 arg2 arg3 arg4 arg5
arg1 arg2 arg3 arg4 arg5
┌─ (marslo@MarsloJiao ~) ->
└─ # echo !!:$
echo arg5
arg5
┌─ (marslo@MarsloJiao ~) ->
└─ # echo arg1 arg2 arg3 arg4 arg5
arg1 arg2 arg3 arg4 arg5
┌─ (marslo@MarsloJiao ~) ->
└─ # echo !!:^
echo arg1
arg1
┌─ (marslo@MarsloJiao ~) ->
└─ # echo arg1 arg2 arg3 arg4 arg5
arg1 arg2 arg3 arg4 arg5
┌─ (marslo@MarsloJiao ~) ->
└─ # echo !!:2-4
echo arg2 arg3 arg4
arg2 arg3 arg4
The manual of Shell Word Designators shows: !!:$ designates the last argument of the preceding command. This may be shortened to !$.
0 (zero)  The 0th word. For many applications, this is the command word.
n         The nth word.
^         The first argument; that is, word 1.
$         The last argument.
%         The word matched by the most recent '?string?' search.
x-y       A range of words; '-y' abbreviates '0-y'.
*         All of the words, except the 0th. This is a synonym for '1-$'. It is not an error to use '*' if there is just one word in the event; the empty string is returned in that case.
x*        Abbreviates 'x-$'
x-        Abbreviates 'x-$' like 'x*', but omits the last word.
3. Shell Special Parameters: $_ For example:
┌─ (marslo@MarsloJiao ~) ->
└─ # echo very-very-very-very-very-long-argument
very-very-very-very-very-long-argument
┌─ (marslo@MarsloJiao ~) ->
└─ # echo $_
very-very-very-very-very-long-argument
┌─ (marslo@MarsloJiao ~) ->
└─ # ls /usr/local/etc/
┌─ (marslo@MarsloJiao ~) ->
└─ # cd $_
┌─ (marslo@MarsloJiao /usr/local/etc) ->
└─ #
In the manual of Shell Special Parameters : _ (An underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57885/"
]
} |
155,704 | How does Whonix Linux keep a user secure? I am aware of how it works in terms of the Gateway which uses Tor, but how does the Whonix workstation ensure nothing is logged, or stored, on the OS and no fingerprinting can be done on the user who was using the workstation session? | how does the Whonix workstation ensure nothing is logged, or stored, on the OS It doesn't. Whonix is not amnesic. ; There is no Whonix Live version yet. ; There is no substitute for Whonix's lack of an Amnesic feature and no fingerprinting can be done on the user who was using the workstation session? See also documentation about Fingerprint as well as technical information Whonix's Protocol-Leak-Protection and Fingerprinting-Protection . Full disclosure: I am maintainer of Whonix. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67073/"
]
} |
155,709 | I have a report that is generated in rows and I need to move it to columns. The number of rows per record will always be the same. This will be in a bash script on Linux, so I would like to use standard tools that are available. I am partial to awk and sed . I need to change this:
R1
R2
R3
R4
R5
R6
R7
R1
R2
R3
R4
R5
R6
R7
To this:
R1,R2,R3,R4,R5,R6,R7
R1,R2,R3,R4,R5,R6,R7
Any help would be greatly appreciated. I've had several people suggest excellent examples that do work with the above data. Unfortunately, my data is a little more robust. Here is a semi real-world example of what I am working with. This:
00000
ND00000056888
Doe, Jane J
F
99 Y
09/01/2014 8:01:08 AM
EE
00001
ND00000056889
Doe, John J
M
66 Y
09/02/2014 5:01:08 PM
DD
To:
00000;ND00000056888;Doe, Jane J;F;99 Y;09/01/2014 8:01:08 AM;EE
00001;ND00000056889;Doe, John J;M;66 Y;09/02/2014 5:01:08 PM;DD
Any delimiter other than a comma or single space will do fine. | try
awk '{printf "%s%s",$0,NR%7?",":"\n" ; }'
The big part NR%7?",":"\n" is an if-then-else: if (NR%7), that is NR (Number of Record) modulo 7 is not 0, then printf a ',', else a newline. $0 is the whole line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72055/"
]
} |
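An alternative sketch using paste, which joins a fixed number of input lines per output line (seven dashes for the seven-row records; the delimiter and file name are placeholders):
paste -d';' - - - - - - - < report.txt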
155,718 | Please advise how to do the following (I have a Red Hat Linux machine, release 5.4). I created a script at the following path: /usr/cti/my_scripts/MAGIC.bash I want to run this script from every directory on my Linux system via the alias name M . For example, under /tmp or /usr or /var or any other directory: when I type M , it should run the script /usr/cti/my_scripts/MAGIC.bash Please advise what steps need to be configured on my Linux machine. EXAMPLE: under /usr , when I enter M , it runs the script /usr/cti/my_scripts/MAGIC.bash | Edit your "~/.bashrc" or "~/.bash_profile" to include the alias command. Add this line to your profile: alias M="/usr/cti/my_scripts/MAGIC.bash" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155718",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
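One small follow-up worth knowing: the alias only takes effect in new shells, so either open a new terminal or reload the edited file in the current one:
source ~/.bashrc     # or: . ~/.bash_profile, depending on which file you edited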
155,775 | I have files named as 0-n.jpg for n from 1 to 500, for example. The problem is that some guy using Windows didn't use leading zeros so when I do ls I obtain
0-100.jpg
0-101.jpg
...
0-10.jpg
...
0-199.jpg
0-19.jpg
0-1.jpg
So I'd like to rename them to insert the leading zeros, so that the result of ls could be
0-001.jpg
0-002.jpg
...
0-100.jpg
...
0-499.jpg
0-500.jpg
In other words, I'd like all files with the same file name lengths. I tried this solution but I'm getting a sequence of errors like
bash: printf: 0-99: invalid number
| If your system has the perl-based rename command you could do something like
rename -- 's/(\d+)-(\d+)/sprintf("%d-%03d",$1,$2)/e' *.jpg
Testing it using the -v (verbose) and -n (no-op) options:
$ rename -vn -- 's/(\d+)-(\d+)/sprintf("%d-%03d",$1,$2)/e' *.jpg
0-10.jpg renamed as 0-010.jpg
0-19.jpg renamed as 0-019.jpg
0-1.jpg renamed as 0-001.jpg | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19195/"
]
} |
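If the perl-based rename is not available, a plain-bash loop can do the same. The printf error quoted in the question comes from feeding it the whole name ("0-99") instead of just the number, so strip the prefix and suffix first (a sketch assuming all files really match the 0-N.jpg pattern):
for f in 0-*.jpg; do
    n=${f#0-}                        # drop the leading "0-"
    n=${n%.jpg}                      # drop the trailing ".jpg"
    new=$(printf '0-%03d.jpg' "$n")  # zero-pad the number to three digits
    [ "$f" = "$new" ] || mv -- "$f" "$new"
done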
155,793 | Is it correct to use certain special characters, as + , & , ' , . (dot) and , (comma), basically, in filenames. I understand that you can use - and _ with no problem, but doing some research I have been unable to find something definite about the other symbols; some say that you can, some say that you can't, and some others say that it is "not encouraged" to use them (whatever that means). | Is it correct to use certain special characters, as +, &, ', . (dot) and , (comma), basically, in filenames. Yes. Correct but not necessarily advisable or convenient. You can use any characters except for null and / within a filename in modern Unix and Linux filesystems. You can use ASCII punctuation . Some utilities use stops ( dot ) and commas in the names of files they create. You can use ASCII control characters , however this is inadvisable as they are unlikely to be displayed acceptably and are difficult to use. You can use shell meta-characters such as ASCII ampersand and ASCII apostrophe. However this is inconvenient and requires that when constructing commands you take special care to quote or escape such characters. You can use multi-byte characters using a variety of encodings. It is up to the shell and/or utilities to correctly interpret and display non-ASCII characters. It is advisable to restrict yourself to a popular encoding such as UTF-8 and set locale appropriately. You will have fewest problems using ASCII printable characters, limiting the set of punctuation characters to ones that are not shell meta-characters and not starting a name with a hyphen (or a stop - unless you want to hide the file). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84311/"
]
} |
155,805 | I want to replace only the first k instances of a word. How can I do this? Eg. Say file foo.txt contains 100 instances occurrences of word 'linux' . I need to replace first 50 occurrences only. | The first section belows describes using sed to change the first k-occurrences on a line. The second section extends this approach to change only the first k-occurrences in a file, regardless of what line they appear on. Line-oriented solution With standard sed, there is a command to replace the k-th occurrance of a word on a line. If k is 3, for example: sed 's/old/new/3' Or, one can replace all occurrences with: sed 's/old/new/g' Neither of these is what you want. GNU sed offers an extension that will change the k-th occurrance and all after that. If k is 3, for example: sed 's/old/new/g3' These can be combined to do what you want. To change the first 3 occurrences: $ echo old old old old old | sed -E 's/\<old\>/\n/g4; s/\<old\>/new/g; s/\n/old/g'new new new old old where \n is useful here because we can be sure that it never occurs on a line. Explanation: We use three sed substitution commands: s/\<old\>/\n/g4 This the GNU extension to replace the fourth and all subsequent occurrences of old with \n . The extended regex feature \< is used to match the beginning of a word and \> to match the end of a word. This assures that only complete words are matched. Extended regex requires the -E option to sed . s/\<old\>/new/g Only the first three occurrences of old remain and this replaces them all with new . s/\n/old/g The fourth and all remaining occurrences of old were replaced with \n in the first step. This returns them back to their original state. Non-GNU solution If GNU sed is not available and you want to change the first 3 occurrences of old to new , then use three s commands: $ echo old old old old old | sed -E -e 's/\<old\>/new/' -e 's/\<old\>/new/' -e 's/\<old\>/new/'new new new old old This works well when k is a small number but scales poorly to large k . Since some non-GNU seds do not support combining commands with semicolons, each command here is introduced with its own -e option. It may also be necessary to verify that your sed supports the word boundary symbols, \< and \> . File-oriented solution We can tell sed to read the whole file in and then perform the substitutions. For example, to replace the first three occurrences of old using a BSD-style sed: sed -E -e 'H;1h;$!d;x' -e 's/\<old\>/new/' -e 's/\<old\>/new/' -e 's/\<old\>/new/' The sed commands H;1h;$!d;x read the whole file in. Because the above does not use any GNU extension, it should work on BSD (OSX) sed. Note, thought, that this approach requires a sed that can handle long lines. GNU sed should be fine. Those using a non-GNU version of sed should test its ability to handle long lines. With a GNU sed, we can further use the g trick described above, but with \n replaced with \x00 , to replace the first three occurrences: sed -E -e 'H;1h;$!d;x; s/\<old\>/\x00/g4; s/\<old\>/new/g; s/\x00/old/g' This approach scales well as k becomes large. This assumes, though, that \x00 is not in your original string. Since it is impossible to put the character \x00 in a bash string, this is usually a safe assumption. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/155805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81580/"
]
} |
155,806 | How can I use the diff command to compare 2 commands' outputs? Does something like this exist? diff ($cat /etc/passwd) ($cut -f2/etc/passwd) | Use process substitution : diff <(cat /etc/passwd) <(cut -f2 /etc/passwd) <(...) is called process substitution. It converts the output of a command into a file-like object that diff can read from. While process substitution is not POSIX, it is supported by bash, ksh, and zsh. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/155806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82729/"
]
} |
155,808 | I have a file which contains word 'name' multiple times ( 250 times ). I need to replace every instance of 'name' with its count attached to it as name_1, name_2, name_3....name_250. Can I do this using sed ? What flags I need to use ? | A job for perl : perl -pe 's/\bname\b\K/"_".++$n/ge' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/155808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81580/"
]
} |
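A quick check of how the counter in that one-liner behaves, with made-up sample text (the counter keeps incrementing across lines within a single run):
$ printf 'name foo name\nbar name\n' | perl -pe 's/\bname\b\K/"_".++$n/ge'
name_1 foo name_2
bar name_3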
155,829 | Executing kill -l on linux gives:
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
What happened to 32 and 33 ? Why are they not listed? They could have started at 1 and ended at 62 instead of skipping 2 in the middle. | It is because of NPTL . Since it is part of the GNU C library, nearly every modern Linux distribution no longer uses the first two real-time signals. NPTL is an implementation of the POSIX Threads . NPTL makes internal use of the first two real-time signals. This part of the signal manpage is very interesting: The Linux kernel supports a range of 32 different real-time signals, numbered 33 to 64. However, the glibc POSIX threads implementation internally uses two (for NPTL) or three (for LinuxThreads) real-time signals (see pthreads(7)), and adjusts the value of SIGRTMIN suitably (to 34 or 35). Because the range of available real-time signals varies according to the glibc threading implementation (and this variation can occur at run time according to the available kernel and glibc), and indeed the range of real-time signals varies across UNIX systems, programs should never refer to real-time signals using hard-coded numbers, but instead should always refer to real-time signals using the notation SIGRTMIN+n, and include suitable (run-time) checks that SIGRTMIN+n does not exceed SIGRTMAX. I also checked the source code for glibc; see line 22 . __SIGRTMIN is increased by 2, so the first two real-time signals are excluded from the range of real-time signals. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63606/"
]
} |
155,838 | I'm trying to use the following script to generate a sitemap for my website. When I run it as sh thsitemap.sh I get an error like this and creates an empty sitemap.xml file: thsitemap.sh: 22: thsitemap.sh: [[: not foundthsitemap.sh: 42: thsitemap.sh: [[: not foundthsitemap.sh: 50: thsitemap.sh: Syntax error: "(" unexpected But as the same user root when I manually copy and paste these lines on the terminal, it works without any error and the sitemap.xml file have all the urls. What's the problem? How can I fix this? #!/bin/bash############################################### modified version of original http://media-glass.es/ghost-sitemaps/# for ghost.centminmod.com# http://ghost.centminmod.com/ghost-sitemap-generator/##############################################url="techhamlet.com"webroot='/home/leafh8kfns/techhamlet.com'path="${webroot}/sitemap.xml"user='leafh8kfns' # web server usergroup='leafh8kfns' # web server groupdebug='n' # disable debug mode with debug='n'##############################################date=`date +'%FT%k:%M:%S+00:00'`freq="daily"prio="0.5"reject='.rss, .gif, .png, .jpg, .css, .js, .txt, .ico, .eot, .woff, .ttf, .svg, .txt'############################################### create sitemap.xml file if it doesn't exist and give it same permissions# as nginx server user/groupif [[ ! -f "$path" ]]; then touch $path chown ${user}:${group} $pathfi# check for robots.txt defined Sitemap directive# if doesn't exist add one# https://support.google.com/webmasters/answer/183669if [ -f "${webroot}/robots.txt" ]; thenSITEMAPCHECK=$(grep 'Sitemap:' ${webroot}/robots.txt) if [ -z "$SITEMAPCHECK" ]; then echo "Sitemap: http://${url}/sitemap.xml" >> ${webroot}/robots.txt fifi##############################################echo "" > $path# grab list of site urlslist=`wget -r --delete-after $url --reject=${reject} 2>&1 |grep "\-\-" |grep http | grep -v 'normalize\.css' | awk '{ print $3 }'`if [[ "$debug" = [yY] ]]; then echo "------------------------------------------------------" echo "Following list of urls will be submitted to Google" echo $list echo "------------------------------------------------------"fi# put list into an arrayarray=($list)echo "------------------------------------------------------"echo ${#array[@]} "pages detected for $url" echo "------------------------------------------------------"# formatted properly according to# https://support.google.com/webmasters/answer/35738echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?><urlset xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">" > $pathecho ' ' >> $path; for ((i=0;i<${#array[*]};i++)); doecho "<url> <loc>${array[$i]:0}</loc> <lastmod>$date</lastmod> <changefreq>$freq</changefreq> <priority>$prio</priority></url>" >> $path doneecho "" >> $pathecho "</urlset>" >> $path# notify Google# URL encode urls as per https://support.google.com/webmasters/answer/183669if [[ "$debug" = [nN] ]]; then wget -q --delete-after http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2F${url}%2Fsitemap.xml rm -rf ${url}else echo "wget -q --delete-after http://www.google.com/webmasters/tools/ping?sitemap=http%3A%2F%2F${url}%2Fsitemap.xml" echo "rm -rf ${url}"fiecho "------------------------------------------------------"exit 0 | Run the script either as: bash script.sh or just: ./script.sh When bash is run using the name sh , it disables most of 
its extensions, such as the [[ testing operator. Since you have the #!/bin/bash shebang line, you don't need to specify the shell interpreter explicitly on the command line. Running the script as a command will use that line to find the shell. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/155838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27743/"
]
} |
155,933 | I saw some kind of workflow in vim: Vim had a file open. Then some combination of keys made vim disappear and the user was in the command line working in cli mode, then opened another file and then suddenly returned to the previously opened file exactly at the place/line he was. It reminded me the switch among windows we do in Windows. Does anyone know how this worflow is done in vim? | There are several options to do so: You can use a terminal multiplexer like screen or tmux . In screen, for example, the shortcut Ctrl + a - a , has the same functiononality as Alt + Tab in graphical environments: switch to the last screen. Or you use vim 's internal function. Type :!command in vim 's command mode. For example: :!ls -l . After the command finishes press Enter to switch back to vim . There is one more option: Job conrol . Press Ctrl + z to stop the current process ( vim ). You will find yourself in a terminal. To bring the stopped process back to the foreground type fg . For me, I prefer screen. I have an unwritten rule for myself: "Always open a screen." | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/155933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
155,993 | I am trying to change my current password in Red Hat Enterprise Linux Server release 5.10 (Tikanga) but it says my new password is too similar . Is there any way to force change ? $ passwdChanging password for user XYChanging password for XY(current) UNIX password:New UNIX password:BAD PASSWORD: is too similar to the old oneNew UNIX password: | If you can run the command as root, you can force the change to be accepted. Example: $ sudo passwd myusernameChanging password for user myusername.New password: Retype new password: passwd: all authentication tokens updated successfully. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/155993",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84441/"
]
} |
156,015 | I have several text files. All these text files need to undergo a set of edits which can be done in vim . I want to automate this. vim has several commands for replacement. Suppose the text files need to undergo the following replacements: replace boy by Boy: %s/boy/Boy/g replace girl by Girl: %s/girl/Girl/g delete empty lines: g/^$/d This is just a simple example. Is there a way to write all these rules and then automate this on several files? | Yes, there is a way to automate this. It starts with selecting the right tool for the job. In this case you should be using e.g. sed rather than trying to bend vi , which was designed for interactive use (and not for automation). The replacement syntax for sed is largely the same as the one for vi . sed -i.backup 's/boy/Boy/g' file-name-1 file-name-2 ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17678/"
]
} |
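The three example edits from the question can be combined into a single invocation; -i.backup keeps a copy of each original file (the file names here are only placeholders):
sed -i.backup -e 's/boy/Boy/g' -e 's/girl/Girl/g' -e '/^$/d' file1.txt file2.txt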
156,038 | I was trying to set up a simple backup script to run automatically that would copy a file from a Windows machine to a Linux one through SSH. As a lot of simple online tutorials suggest I used pscp with a private key generated with puttygen and placed the corresponding public key (presented in copy/paste form by putty itself) in the authorized_keys file in Linux. This seems pretty straightforward considering that it worked in 2 other windows machines to a different Linux machine, with the same configuration. There are no connectivity issues AFAICS and the same goes for ssh, considering I'm able to log in as root to the Linux machine. The config file ( sshd_config ) has the AuthorizedKeysFile set to ~/.sshd/authorized_keys . The error "Server refused our key" keeps showing up, no matter what I do... The logs don't show any authentication problems... I'm planning to do more testing and setting the logLevel value to VERBOSE or DEBUG2 or 3 but considering the urgency of the matter and the fact that in order to actually test it on the machine I have to go through a lot of hassle considering the machine is in a place that is quite distant from my actual workplace... Questions Does anyone have any ideas? Has this ever happened to anyone? It seems like this might actually be a problem related to ssh versions or something of the sorts... I was also considering the possibility that I need to have the public key inserted in the authorized_keys file inside the user's .ssh directory ( /user/.ssh/ ) besides having it in root's folder (doesn't make much sense because of the value of AuthorizedKeysFile in sshd_config ). I've done some teting with the ssh server's LogLevel set o VERBOSE but I could'nt retrieve the information (liability issues), so instead here goes an output/debug log from another source which seems to be displaying the same error... 
Connection from 192.168.0.101 port 4288
debug1: Client protocol version 2.0; client software version OpenSSH_4.5
debug1: match: OpenSSH_4.5 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_4.5
debug1: permanently_set_uid: 22/22
debug1: list_hostkey_types: ssh-rsa,ssh-dss
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: KEX done
debug1: userauth-request for user dcowsill service ssh-connection method none
debug1: attempt 0 failures 0
debug1: PAM: initializing for "dcowsill"
debug1: userauth-request for user dcowsill service ssh-connection method publickey
debug1: attempt 1 failures 1
debug1: test whether pkalg/pkblob are acceptable
debug1: PAM: setting PAM_RHOST to "192.168.0.101"
debug1: PAM: setting PAM_TTY to "ssh"
debug1: temporarily_use_uid: 1052/105 (e=0/0)
debug1: trying public key file /testuser/.ssh/authorized_keys
debug1: restore_uid: 0/0
debug1: temporarily_use_uid: 1052/105 (e=0/0)
debug1: trying public key file /testuser/.ssh/authorized_keys
debug1: restore_uid: 0/0
Failed publickey for dcowsill from 192.168.0.101 port 4288 ssh2
debug1: userauth-request for user dcowsill service ssh-connection method publickey
debug1: attempt 2 failures 2
debug1: test whether pkalg/pkblob are acceptable
debug1: temporarily_use_uid: 1052/105 (e=0/0)
debug1: trying public key file /testuser/.ssh/authorized_keys
debug1: restore_uid: 0/0
debug1: temporarily_use_uid: 1052/105 (e=0/0)
debug1: trying public key file /testuser/.ssh/authorized_keys
debug1: restore_uid: 0/0
Failed publickey for dcowsill from 192.168.0.101 port 4288 ssh2
Connection closed by 192.168.0.101
It seems like the program is trying to open the authorized_keys file with permissions from the owner, but then there is no more information on what is generating the problem. One last thing: I've checked and double-checked the file and folder permissions and they're all ok. | Some possible reasons I know of are connected with file permissions; these are mostly permissions that are too wide. In particular I can recall two causes: exposing the /home/user directory to more than the owner, and the .ssh and/or authorized_keys file permissions (set them to 700/600 respectively if they are more than that). You can find the exact reason the key is refused by starting an additional sshd server on another port with the debug and non-daemon options. If you have root access on the server you can run: sudo `which sshd` -p 2020 -Dd on the server. After leaving that running, run ssh to it: ssh -p 2020 -i /path/to/refusedkey The server output will tell you the reason for the refusal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84454/"
]
} |
156,046 | Bash lets you specify a redirected input before a command: $ <lines sed 's/^/line: /g'line: fooline: bar Bash also lets you redirect input to a compound command like a while loop: $ while read line; do echo "line: $line"; done <linesline: fooline: bar However, when I try to specify a redirected input before a while loop, I get a syntax error: $ <lines while read line; do echo "line: $line"; donebash: syntax error near unexpected token `do' What's wrong with this? Is it not possible to specify a redirected input before a compound command in Bash? If so, why not? | man bash says: ... redirection operators may precede or appear anywhere within a simple command or may follow a command. while is not a simple command. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7733/"
]
} |
156,050 | I'm working on a huge complex application. You connect a disk, press a button, and it partitions and formats the disk, mounts it, and copies some files onto it. To test this application, we have a test system which loop-mounts a disk image and runs through the same process. Except we changed the application logic, and now the test system doesn't work. If you give the program a real disk, everything works fine. But if you give it a loop device, it fails. Specifically, the application partitions the disk, formats it, and then whines that it can't mount the partition. The exact command is mount /dev/vda /mnt --rw -o offset=111149056,sizelimit=314572800 (Here /dev/vda is merely a symlink to /dev/loop0 . It makes no difference if I refer to loop0 directly.) If I run the command by hand, I get this: root# mount /dev/vda /mnt --rw -o offset=111149056,sizelimit=314572800FUSE exfat 1.0.1ERROR: exFAT file system is not found.root# echo $?1 I can run this command over and over again, and it just doesn't work. No reason, it just doesn't. Here is the terrifying part: If I run cfdisk /dev/vda and then immediately quit without changing anything, now it mounts!! What the hell does cfdisk do to the disk that makes it suddenly start working? And how can I remove the need to call this program? (I tried in vain to call sfdisk -R /dev/vda ; it just complains about "invalid parameter" or something.) I can kludge the application to call cfdisk -Ps /dev/vda or something, but I would really, really rather not do that. I want to find out why any of this is even necessary. Before we changed the application, everything worked fine... | man bash says: ... redirection operators may precede or appear anywhere within a simple command or may follow a command. while is not a simple command. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26776/"
]
} |
156,075 | Is there anyway to get this through Unix shell scripting?I have a fileA with one column (1000 rows), and fileB with 26 columns(13000 rows). I need to search each value of fileA with fileB and return all the 26 values from FileB if matches. The search value (from FileA) may present in any of the 26 values in FileB. This value is not fixed in any of the columns in B file. FILEA: abcdefghi FILEB: drm|fdm|pln|ess|abc|zeh|....|yer (26 values)fdm|drm|def|ess|yer|zeh|....|pln Here, abc from fileA is 5th col. of FileB—so my result should be all the 26 values from FileB. Similarly, def from fileA is 3rd col. of FileB -so my result should be all the 26 values from FileB. This way, need to do for the entire record set. If unmatched, ignore the record. | You can just use grep : grep -Fwf fileA fileB From man grep : -F, --fixed-strings Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. (-F is specified by POSIX.) -f FILE, --file=FILE Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.) -w, --word-regexp Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84495/"
]
} |
156,084 | I am trying to understand named pipes in the context of this particular example. I type <(ls -l) in my terminal and get the output as, bash: /dev/fd/63: Permission denied . If I type cat <(ls -l) , I could see the directory contents. If I replace the cat with echo , I think I get the terminal name (or is it?). echo <(ls -l) gives the output as /dev/fd/63 . Also, this example output is unclear to me. ls -l <(echo "Whatever")lr-x------ 1 root root 64 Sep 17 13:18 /dev/fd/63 -> pipe:[48078752] However, if I give, ls -l <() it lists me the directory contents. What is happening in case of the named pipe? | When you do <(some_command) , your shell executes the command inside the parentheses and replaces the whole thing with a file descriptor, that is connected to the command's stdout. So /dev/fd/63 is a pipe containing the output of your ls call. When you do <(ls -l) you get a Permission denied error, because the whole line is replaced with the pipe, effectively trying to call /dev/fd/63 as a command, which is not executable. In your second example, cat <(ls -l) becomes cat /dev/fd/63 . As cat reads from the files given as parameters you get the content. echo on the other hand just outputs its parameters "as-is". The last case you have, <() is simply replaced by nothing, as there is no command. But this is not consistent between shells, in zsh you still get a pipe (although empty). Summary : <(command) lets you use the ouput of a command, where you would normally need a file. Edit: as Gilles points out, this is not a named pipe, but an anonymous pipe. The main difference is, that it only exists, as long as the process is running, while a named pipe (created e.g. with mkfifo ) will stay without processes attached to it. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/156084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47538/"
]
} |
156,100 | I read in this answer from @Gilles the following: In zsh, you can load the mv builtin: setopt extended_globzmodload -Fm zsh/files b:zf_\*mv -- ^*.(jpg|png|bmp) targetdir/ as a solution to the "mv: Argument list too long” problem. The answer suggests using zsh's mv (as opposed to GNU's) but what exactly does this line do?: zmodload -Fm zsh/files b:zf_\* | The best way, to look at zsh documentation is using info . If you run info zsh , you can use the index (think of a book 's index) to locate the section that describes the zmodload command. Press i , then you can enter zmo and press Tab . You'll get straight to the zmodload builtin description which will tell you all about it. In short, zmodload -F loads the module (if not loaded) and enables only the specified features from that module. With -m , we enabled the features that m atch a pattern, here b:zf_* . b: is for builtin, so the above command loads the zsh/files module (see info -f zsh -n 'The zsh/files Module,' for details on that) and only enables the builtins whose name starts with zf_ . zmodload -F zsh/files loads the module, but doesn't enable any feature: $ zmodload -FlL zsh/fileszmodload -F zsh/files -b:chgrp -b:chown -b:ln -b:mkdir -b:mv -b:rm -b:rmdir -b:sync -b:zf_chgrp -b:zf_chown -b:zf_ln -b:zf_mkdir -b:zf_mv -b:zf_rm -b:zf_rmdir -b:zf_sync lists the features of that module specifying which are currently enabled (none for now). You'll notice there's both a mv and zf_mv builtin. $ zmodload -mF zsh/files 'b:zf_*'$ zmodload -FlL zsh/fileszmodload -F zsh/files -b:chgrp -b:chown -b:ln -b:mkdir -b:mv -b:rm -b:rmdir -b:sync +b:zf_chgrp +b:zf_chown +b:zf_ln +b:zf_mkdir +b:zf_mv +b:zf_rm +b:zf_rmdir +b:zf_sync You'll notice the zf_mv builtin has been enabled, but not the mv one (same for the other builtins). That means, those builtin versions of the system commands have been enabled, but without overriding the system one: $ type zf_mvzf_mv is a shell builtin$ type mvmv is /bin/mv Now that you have a builtin mv , as zf_mv , not mv , you can do: zf_mv -- ^*.(jpg|png|bmp) targetdir/ Because zf_mv is builtin, there's no execve() system call, so you won't hit the Too many args limit associated with it. Of course, you can also do: zmodload zsh/files # without -F, all the features are enabledmv -- ^*.(jpg|png|bmp) targetdir/ But beware that replaces the system's mv with zsh builtin equivalent. To overcome the E2BIG execve() error (the Too many args upon executing an external command), zsh also provides with a zargs function. You run: autoload zargs # in ~/.zshrc if you use it often To mark it for autoloading. Then you can use: zargs -- ^*.(jpg|png|bmp) -- mv -t targetdir/ (here assuming GNU mv for the -t option). zargs will run as many mv commands as necessary to avoid the E2BIG (as xargs would do). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
156,103 | I have an alias set for my rm command. If I run alias command, this is what I get as output. alias rm='rm -i' Now, when I run the rm command it works fine as expected. rm rameshrm: remove regular empty file `ramesh'? y Now, I was learning on the system calls that are being called when I execute a command. For that I got to know about the strace command from here which lists me the files that are being called when I execute some command. The command is as below. strace -ff -e trace=file rm ramesh 2>&1 The command works perfectly fine except that it ignores my aliases that I have in place for my rm command. It deletes the file without prompting the user. So, does strace ignore aliases like this? If so why is it so? EDIT: Not sure, if this has something to do but type -a rm gives me the output as, rm is aliased to `rm -i'rm is /bin/rm So is it considering /bin/rm in this case which is why the user is not prompted before deletion? | strace doesn't run rm -i for the same reason as: echo rm doesn't output rm -i . Aliases are a feature of a few shells to allow some strings to automatically be replaced by another when found in command position. In: alias foo='whatever'foo xxx The shells expands that to: whatever xxx and that undergoes another round of interpretation, in that case leading to executing the whatever command. aliases are only expanded when found in command position (as the first word of a command line). zsh supports global aliases. You could do: alias -g rm='rm -i' But you wouldn't want to, as that would mean that: echo rm would output rm -i for instance. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47538/"
]
} |
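Since strace only ever sees /bin/rm (alias expansion happens in the shell, and only for the word in command position, not for arguments passed to another command), the interactive behaviour can be reproduced simply by spelling the option out:
strace -ff -e trace=file rm -i ramesh 2>&1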
156,131 | Could I trap a 5-minute signal inside my script? I imagine something like this, function dosomething { echo "It's been 5 minutes."}trap dosomething SIGNAL-EVERY-5-MINUTESwhile truedo sleep 10done Note that this example doesn't have sense. It's just to ask if it's possible to trap something every x time. EDIT: Based on @vinc17 answer, I've noticed that sleep command from the main shell isn't interrupted by USR1 signal, and that's what I want to do. #!/bin/bashtime=10( set -e while true do sleep 30 # Since this is run as a subshell (instead of an external command), # the parent pid is $$, not $PPID. kill -USR1 $$ done) &finish() { kill $! echo "Closing." exit 0}changetime() { echo "Signal USR1 received." time=$(( $RANDOM % 8 + 1))}trap changetime USR1trap finish SIGINTwhile truedo echo "before sleep" sleep $time echo "after sleep"done outputs, before sleepSignal USR1 received.after sleepbefore sleepSignal USR1 received.after sleepbefore sleepSignal USR1 received.Closing. EDIT2: I've edited the above example, resulting in the same output, but adds one more difficulty: The sleep time is changed by the USR1 trap function. So now, every 30 seconds a random time between 1 and 8 is chosen. SO, I need to kill sleep from the main script when signal changes its time. I insist on that it has no sense, but I need to know if it is possible. | You can run a process that will send a signal (e.g. SIGALRM ) to the shell script every x time, and use a trap for this signal. This process could be a script doing something like: set -ewhile truedo sleep 300 kill -ALRM $PPIDdone if started by the main shell script. The main shell script should kill this process when it is no longer needed, and/or the process should terminate when the pid no longer exists (however there's a race condition on that). (EDITED) Note: If your main shell script uses the sleep command, it may badly interact with the ALRM signal if sleep is a builtin and sleep(3) is implemented with alarm(2) . The POSIX description of the sleep shell utility also says in its rationale: " The exit status is allowed to be zero when sleep is interrupted by the SIGALRM signal because most implementations of this utility rely on the arrival of that signal to notify them that the requested finishing time has been successfully attained. Such implementations thus do not distinguish this situation from the successful completion case. Other implementations are allowed to catch the signal and go back to sleep until the requested time expires or to provide the normal signal termination procedures. " To avoid potential issues with some implementations, you can use SIGUSR1 or SIGUSR2 instead of SIGALRM . Here's an example using a subshell. To make the behavior easier to see, I've replaced the 5-minute period ( sleep 300 ) by a 5-second period ( sleep 5 ). #!/bin/sh{ set -e while true do sleep 5 # Since this is run as a subshell (instead of an external command), # the parent pid is $$, not $PPID. kill -USR1 $$ done} &trap 'echo "Signal USR1 received."' USR1while truedo date sleep 1done The script can be interrupted with Ctrl-C. It doesn't kill the subshell when this happens, but if the pid isn't reused, the subshell terminates automatically after no more than the period (here, 5 seconds) because the kill command fails ( kill: No such process ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79537/"
]
} |
156,205 | Shells like Bash and Zsh expand wildcard into arguments, as many arguments as match the pattern: $ echo *.txt1.txt 2.txt 3.txt But what if I only want the first match to be returned, not all the matches? $ echo *.txt1.txt I don't mind shell-specific solutions, but I would like a solution that works with whitespace in filenames. | One robust way in bash is to expand into an array, and output the first element only: pattern="*.txt"files=( $pattern )echo "${files[0]}" # printf is safer! (You can even just echo $files , a missing index is treated as [0].) This safely handles space/tab/newline and other metacharacters when expanding the filenames. Note that locale settings in effect can alter what "first" is. You can also do this interactively with a bash completion function : _echo() { local cur=${COMP_WORDS[COMP_CWORD]} # string to expand if compgen -G "$cur*" > /dev/null; then local files=( ${cur:+$cur*} ) # don't expand empty input as * [ ${#files} -ge 1 ] && COMPREPLY=( "${files[0]}" ) fi}complete -o bashdefault -F _echo echo This binds the _echo function to complete arguments to the echo command (overriding normal completion). An extra "*" is appended in the code above, you can just hit tab on a partial filename and hopefully the right thing will happen. The code is slightly convoluted, rather than set or assume nullglob ( shopt -s nullglob ) we check compgen -G can expand the glob to some matches, then we expand safely into an array, and finally set COMPREPLY so that quoting is robust. You can partly do this (programmatically expand a glob) with bash's compgen -G , but it's not robust as it outputs unquoted to stdout. As usual, completion is rather fraught, this breaks completion of other things, including environment variables (see the _bash_def_completion() function here for the details of emulating the default behaviour). You could also just use compgen outside of a completion function: files=( $(compgen -W "$pattern") ) One point to note is that "~" is not a glob, it's handled by bash in a separate stage of expansion, as are $variables and other expansions. compgen -G just does filename globbing, but compgen -W gives you all of bash's default expansion, though possibly too many expansions (including `` and $() ). Unlike -G , the -W is safely quoted (I can't explain the disparity). Since the purpose of -W is that it expands tokens, this means it will expand "a" to "a" even if no such file exists, so it's perhaps not ideal. This is easier to understand, but may have unwanted side-effects: _echo() { local cur=${COMP_WORDS[COMP_CWORD]} local files=( $(compgen -W "$cur") ) printf -v COMPREPLY %q "${files[0]}" } Then: touch $'curious \n filename' echo curious* tab Note the use of printf %q to safely quote the values. One final option is to use 0-delimited output with GNU utilities (see the bash FAQ ): pattern="*.txt"while IFS= read -r -d $'\0' filename; do printf '%q' "$filename"; break; done < <(find . -maxdepth 1 -name "$pattern" -printf "%f\0" | sort -z ) This option gives you a little more control over the sorting order (the order when expanding a glob will be subject to your locale/ LC_COLLATE and may or may not fold case), but is otherwise a rather large hammer for such a small problem ;-) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/156205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9041/"
]
} |
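For completeness, a shell-agnostic variant of the array trick above — a sketch that should work in any POSIX shell, using the positional parameters as the array (note it overwrites "$@"):

    pattern="*.txt"
    set -- $pattern          # glob results are separate words, so spaces in names are fine
    printf '%s\n' "$1"       # first match; prints the pattern itself if nothing matched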
156,209 | I have a folder with more than 30 sub directories and I want to get the list of the files which was modified after a specified date(say sep 8 which is the real case) and to be copied with the same tree structure with only the modified files in that folder I have say 30 dir from that I have the list of the files I need found using last modified dateFind command output a/a.txta/b/b.txta/www.txtetc.. For example I want the folder "a" created and only the a.txt in it...like wise for the other also "a/b" to be created and b.txt inside it... | Assuming you have your desired files in a text file, you could do something like while IFS= read -r file; do echo mkdir -p ${file%/*}; cp /source/"$file" /target/${file%/*}/${file##*/}; done < files.txt That will read each line of your list, extract the directory and the file name, create the directory and copy the file. You will need to change source and target to the actual parent directories you are using. For example, to copy /foo/a/a.txt to /bar/a/a.txt , change source to foo and target to bar . I can't tell from your question whether you want to copy all directories and then only specific files or if you just want the directories that will contain files. The solution above will only create the necessary directories. If you want to create all of them, use find /source -type d -exec mkdir -p {} /target That will create the directories. Once those are there, just copy the files: while IFS= read -r file; do cp /source/"$file" /target/"$file"done Update This little script will move all the files modified after September 8. It assumes the GNU versions of find and touch . Assuming you're using Linux, that's what you will have. #!/usr/bin/env bash ## Create a file to compare against.tmp=$(mktemp)touch -d "September 8" "$tmp"## Define the source and target parent directoriessource=/path/to/sourcetarget=/path/to/target## move to the source directorycd "$source"## Find the files that were modified more recently than $tmp and copy themfind ./ -type f -newer "$tmp" -printf "%h %p\0" | while IFS= read -rd '' path file; do mkdir -p "$target"/"$path" cp "$file" "$target"/"$path" done Strictly speaking, you don't need the tmp file. However, this way, the same script will work tomorrow. Otherwise, if you use find's -mtime , you would have to calculate the right date every day. Another approach would be to first find the directories, create them in the target and then copy the files: Create all directories find ./ -type d -exec mkdir -p ../bar/{} \; Find and copy the relevant files find ./ -type f -newer "$tmp" -exec cp {} /path/to/target/bar/{} \; Remove any empty directories find ../bar/ -type d -exec rmdir {} \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34348/"
]
} |
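If GNU find and GNU cp are available (the usual case on Linux), the whole copy-with-tree-structure job above can be collapsed into one command. A sketch, with the source/target paths as placeholders and the cut-off date taken from the question:

    cd /path/to/source
    # -newermt is a GNU find extension; --parents recreates each file's directory part under the target
    find . -type f -newermt "2014-09-08" -exec cp --parents -t /path/to/target {} +

The target directory must already exist; --parents creates the intermediate sub-directories as needed.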
156,212 | I just installed Linux Mint 17 (MATE) on an old laptop and everything works amazing, however I can't seem to get it to connect to my WiFi network. All my other computers can get access, plus, before when the laptop has Windows XP, it could also find and connect. Is there a way to check if it's even detecting the correct network? If so, how would I set up a proper connection to the network? There is nothing wrong with my network nor the laptop, so it must be Mint's fault. Edit :Output of iwconfig : lo no wireless extensions.eth0 no wireless extensions. Output of lspci -nn | grep 0280 : 02:04.0 Network controller [0280]: Broadcom Corporation BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller [14e4:4318] (rev 02) | This answer assumes that you can connect your machine to the network using a cable and so get internet access. If that assumption is wrong, let me know and I'll modify this. You need to install the driver for your wireless card. The driver support table of the Linux Wireless page lists it as supported so you should be able to get everything working by simply running: sudo apt-get install firmware-b43-installer If this does not work leave me a comment, you might need to tweak it a bit. Further reading: http://forums.linuxmint.com/viewtopic.php?f=194&t=139947&start=20 https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84577/"
]
} |
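If the package installs but the network list stays empty, it is worth checking that the kernel module actually loaded. A short sketch — b43 is the driver that firmware-b43-installer serves, and the last command assumes NetworkManager (Mint's default) is in use:

    lsmod | grep b43            # is the module loaded?
    dmesg | grep -i b43         # any firmware errors at load time?
    sudo modprobe b43           # load it by hand if it is missing
    nmcli dev wifi list         # does the card now see any networks?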
156,223 | I remember having seen somewhere a bash script using case and shift to walk through the list of positional parameters, parse flags and options with arguments when it encounters them, and removes them after parsing to leave only the bare arguments, which are later processed by the rest of the script. For example, in parsing the command line of cp -R file1 -t /mybackup file2 -f , it would first walk through the parameters, recognize that the user has requested to descend into directories by -R , specified the target by -t /mybackup and to force copying by -f , and remove those from the list of parameters, leaving the program to process file1 file2 as the remaining arguments. But I don't seem to be able to remember/find out whatever script I saw whenever. I'd just like to be able to do that. I have been googling around various sites and append a list of relevant pages I examined. A question on this website specifically asked about "order-independent options" but both the single answer and the answer of the question it was dupped to does not consider cases like the above where the options are mixed with normal arguments, which I presume was the reason for the person to specifically mention order-independent options. Since bash 's built-in getopts seems to stop at the first non-option argument, it does not seem to be sufficient as a solution. This is why the Wooledge BashFAQ's page (see below) explains how to rearrange the arguments. But I'd like to avoid creating multiple arrays in case the argument list is quite long. Since shift does not support popping individual arguments off the middle of the parameter list, I am not sure what is a straightforward way to implement what I am asking. I'd like to hear if anyone has any solutions to removing arguments from the middle of the parameter list without creating a whole new array. Pages that I've already seen: http://mywiki.wooledge.org/ComplexOptionParsing#Rearranging_arguments http://mywiki.wooledge.org/BashFAQ/035 Using getopts in bash shell script to get long and short command line options http://wiki.bash-hackers.org/scripting/posparams http://wiki.bash-hackers.org/howto/getopts_tutorial bash argument case for args in $@ What is the canonical way to implement order independent options in bash scripts? How do I handle switches in a shell script? | POSIXly, the parsing for options should stop at -- or at the first non-option (or non-option-argument) argument whichever comes first. So in cp -R file1 -t /mybackup file2 -f that's at file1 , so cp should recursively copy all of file1 , -t , /mybackup and file2 into the -f directory. GNU getopt(3) however (that GNU cp uses to parse options (and here you're using GNU cp since you're using the GNU-specific -t option)), unless the $POSIXLY_CORRECT environment variable is set, accepts options after arguments. So it is actually equivalent to POSIX option style parsing's: cp -R -t /mybackup -f -- file1 file2 The getopts shell built-in, even in the GNU shell ( bash ) only handles the POSIX style. It also doesn't support long options or options with optional arguments. If you want to parse the options the same way as GNU cp does, you'll need to use the GNU getopt(3) API. For that, if on Linux, you can use the enhanced getopt utility from util-linux ( that enhanced version of the getopt command has also been ported to some other Unices like FreeBSD ). That getopt will rearrange the options in a canonical way which allows you to parse it simply with a while/case loop. 
$ getopt -n "$0" -o t:Rf -- -Rf file1 -t/mybackup file2 -R -f -t '/mybackup' -- 'file1' 'file2' You'd typically use it as: parsed_options=$( getopt -n "$0" -o t:Rf -- "$@") || exiteval "set -- $parsed_options"while [ "$#" -gt 0 ]; do case $1 in (-[Rf]) shift;; (-t) shift 2;; (--) shift; break;; (*) exit 1 # should never be reached. esacdoneecho "Now, the arguments are $*" Also note that that getopt will parse options the same way as GNU cp does. In particular, it supports the long options (and entering them abbreviated) and honours the $POSIXLY_CORRECT environment variables (which when set disables support for options after arguments) the same way GNU cp does. Note that using gdb and printing the arguments that getopt_long() receives can help building the parameters to getopt(1) : (gdb) bt#0 getopt_long (argc=2, argv=0x7fffffffdae8, options=0x4171a6 "abdfHilLnprst:uvxPRS:T", long_options=0x417c20, opt_index=0x0) at getopt1.c:64(gdb) set print pretty on(gdb) p *long_options@40$10 = {{ name = 0x4171fb "archive", has_arg = 0, flag = 0x0, val = 97 }, { name = 0x417203 "attributes-only",[...] Then you can use getopt as: getopt -n cp -o abdfHilLnprst:uvxPRS:T -l archive... -- "$@" Remember that GNU cp 's list of supported options may change from one version to the next and that getopt will not be able to check if you pass a legal value to the --sparse option for instance. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54067/"
]
} |
156,229 | I'm trying to learn the basics and I have run into an issue with my script counting the characters of a user's input. Here is my script, can someone point out where I'm going wrong please? #!/bin/bashecho "Enter a word!" read INPUT_STRING len= echo $INPUT_STRING | wc -c echo "Your character length is " $lenexit | every beginning is hard: #!/bin/bashread INPUTecho $INPUTlen=$(echo -n "$INPUT" | LC_ALL=C.UTF-8 wc -m)echo $len specifically, there must not be a space surrounding = and a separate command needs to be enclosed inside $(...) . Also, you might want to write your variables in quotes " using this syntax "${INPUT}" , this ensures that the variable is not accidentally concatenated with what follows and can contain special chars (e.g. newlines \n ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84592/"
]
} |
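One more simplification of the script above: if the length is all you need, bash can report it without running wc at all. A minimal sketch:

    #!/bin/bash
    read -r INPUT
    len=${#INPUT}       # number of characters in $INPUT, no external command needed
    echo "Your character length is $len"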
156,240 | We are using linux3.12 and its led driver has a bug which got fixed in later version of Linux. We see that driver change in Linux 3.15 see Linux Cross Reference Now my question is how can I find a patch which induced this change ? Another question is how can I get access to development kernel source tree e.g. kernel-3.14.18 tree ? | every beginning is hard: #!/bin/bashread INPUTecho $INPUTlen=$(echo -n "$INPUT" | LC_ALL=C.UTF-8 wc -m)echo $len specifically, there must not be a space surrounding = and a separate command needs to be enclosed inside $(...) . Also, you might want to write your variables in quotes " using this syntax "${INPUT}" , this ensures that the variable is not accidentally concatenated with what follows and can contain special chars (e.g. newlines \n ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60966/"
]
} |
156,261 | I have a file file.gz , when I try to unzip this file by using gunzip file.gz , it unzipped the file but only contains extracted and removes the file.gz file. How can I unzip by keeping both unzipped file and zipped file? | Here are several alternatives: Give gunzip the --keep option (version 1.6 or later) -k --keep Keep (don't delete) input files during compression or decompression. gunzip -k file.gz Pass the file to gunzip as stdin gunzip < file.gz > file Use zcat (or, on older systems, gzcat ) zcat file.gz > file | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/156261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20346/"
]
} |
156,307 | I went through the answers in this helpful thread , but my problem seems to be different enough that I can't think of good answer (at least with sed ). I have a large CSV file (200+ GB) with rows that look like the following: <alphanumerical_identifier>,<number> where <alphanumerical_identifier> is unique across the entire file. I would like to create a separate file that replaces the first column by an index , i.e. <index>,<number> so that we get: 1, <number>2, <number>3, <number> Can awk generate an increasing index without loading the full file in memory? Since the index increases monotonically, it may be even better to just drop the index. Would the solution for that be that different?, i.e.: <number><number><number> | Not near a terminal to test, but how about the oft-overlooked nl command? Something like: cut -f 2 -d , original.csv | nl -w 1 -p -s , > numbered.csv | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
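If nl is not convenient, the same streaming behaviour (one line in memory at a time, so the 200+ GB file is no problem) is available from awk or cut. A sketch, assuming the two-column input from the question:

    awk -F, -v OFS=, '{ print NR, $2 }' original.csv > numbered.csv
    # or drop the index entirely, since it is implicit in the line number:
    cut -d, -f2 original.csv > numbers_only.csv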
156,343 | I have a bash script that is supposed to take some arguments and then run in a different user: test.sh #!/bin/bashsudo su user2 <<'EOF'echo $1EOF However it prints blank: $ ./test.sh haha I understand that it is because environment variable are reset(?). How can I pass this argument? Security wise I've heard I should not disable environment resetting. The only way comes to my mind to solve this is writing $1 to a file and then reading it back again by user2. But I guess there should be a much better way. | If you want the entire script to run as another user, my usual technique for doing this is adding something similar to the following to the very top of the script: target_user="foo"if [ "$(whoami)" != "$target_user" ]; then exec sudo -u "$target_user" -- "$0" "$@"fi Note that I use sudo here and not su . su makes it stupidly difficult to pass arguments properly, while sudo does not have this issue. If you only want to run a small bit of code, you can alternatively do something such as: target_user="foo"sudo -u "$target_user" sh -s "$@" <<'EOF' echo "$@"EOF This will launch sh , pass it the current script's arguments, and then execute the script provided via the heredoc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84668/"
]
} |
156,345 | Is it possible to write a new udev rule that undoes TAG+="uaccess" (removes the uaccess tag) without undoing any of the other tags on a device? If so, how? (I have a device that should be opened by a daemon. Unfortunately, the uaccess tag added in /lib/udev/rules.d/70-uaccess.rules causes the permissions to be mangled whenever someone logs in, breaking access for the daemon.) | From version 217 onwards , is possible to do that using: TAG-="uaccess" For older versions, sadly it isn't. So you can workaround adding some conditional that would prevent the tag from being added: KERNEL=="sdb", GROUP="daemon", OPTIONS+="last_rule" In this case, you set the permissions and then nothing else can be added. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8555/"
]
} |
156,367 | I have an issue I don't understand. It is simple and it should work, but it doesn't.=EDITED exactly what I can see from terminal=I have a list of filenames: [molni@archlinux picasa_album]$ cat LISTIMG_9282.JPGIMG_9287.JPGIMG_9300.JPGIMG_9324.JPGIMG_9329.JPGIMG_9463.JPGIMG_9412.JPGIMG_9562.JPGIMG_9511.JPGIMG_9607.JPG and want to search for every file in list it's path via find command: [molni@archlinux picasa_album]$ for i in `cat LIST`; do find /mnt/c/e-m10/ -name "$i"; done[molni@archlinux picasa_album]$ no results, when I exchange it for echo $i (to check if variable $i is OK, it works) [molni@archlinux picasa_album]$ for i in `cat LIST`;do echo "$i" ; doneIMG_9282.JPGIMG_9287.JPGIMG_9300.JPGIMG_9324.JPGIMG_9329.JPGIMG_9463.JPGIMG_9412.JPGIMG_9562.JPGIMG_9511.JPGIMG_9607.JPG[molni@archlinux picasa_album]$ when I do it manually, set variable (without loop) it works: [molni@archlinux picasa_album]$ i=IMG_9607.JPG[molni@archlinux picasa_album]$ find /mnt/c/e-m10/ -name "$i" /mnt/c/e-m10/IMG_9607.JPG[molni@archlinux picasa_album]$ What am I doing wrong? | Do a cat -v LIST to see if there are any special characters that you don't see with a simple echo. I suspect DOS line endings, i.e. extraneous carriage returns before the newline. EDIT: to convert the LIST file: dos2unix < LIST > LIST.new && mv LIST.new LIST Or if you don't have dos2unix, but do have vim: vim LIST , then :set notx , then :wq | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84688/"
]
} |
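If dos2unix or vim are not at hand, the stray carriage returns can be stripped with standard tools instead. A small sketch (the in-place variant assumes GNU sed):

    tr -d '\r' < LIST > LIST.unix && mv LIST.unix LIST
    # or, in place:
    sed -i 's/\r$//' LIST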
156,412 | From a currently running X11 session, I would like to provide/run a VNC server such that it appears to my system as a second, “virtual” monitor – i.e. so that I can position it using xrandr and drag/position windows onto it. How, if at all, could I achieve that? Edit: More info from OP in comments: "Also asked here , without an answer. " | tl;dr: Force a "virtual" output of your graphics card to a display mode, and export that with x11vnc . You can achieve this, but there are a few prerequisites: A graphics card with multi-head capabilities (= can render several "desktop" surfaces). Which is most cards these days. x11vnc , a mature software ( x11vnc ) to export X11 surfaces (among others) to VNC clients. Most consumer cards these days can render several different outputs. Mine can do 3, out of the 5 that xrandr shows (eDP1,HDMI[12],DP[12]). Pick an unused output from xrandr , in my example HDMI2 . Pick a resolution for the screen of the vnc client, and generate a mode : $ cvt 1920 1080 # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync Add that mode to xrandr xrandr --newmode "1920x1080_60" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync Put e.g. HDMI2 in that mode, and attach to the right of eDP1 (Main screen) xrandr --addmode HDMI2 1920x1080_60 --output HDMI2 --mode 1920x1080_60 --right-of eDP1 Now export that with x11vnc , choosing the appropriate offset: x11vnc -display :0 -clip 1920x1080+1600+0 <other options> Note: Add desired encryption/authentication/other options to that command. Now connect with a VNC client to your "virtual monitor". (or modify above command to connect to a "listening" VNC-client. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20526/"
]
} |
156,435 | I'm using Xfce 4.10 with xfwm4 as my window manager. I'm finding it difficult to resize windows by grabbing the border. The region where the mouse cursor changes to the "resize window" cursor seems to be only 1 or 2 pixels wide, and I keep moving right through it. How can I make that region a bit wider? I don't want to change the appearance of window borders, just make their hit target a bit wider. (I know about the Resize option in the window menu, but that doesn't allow you to resize a window in only 1 dimension.) I've looked in the window manager settings & tweaks, but I don't see any setting that appears to apply. | It's "very easy", you can use Alt + right-click + drag. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2421/"
]
} |
156,453 | The man page description of who command is who - show who is logged on But there exists a similar command whoami . The man page description of whoami is whoami - print effective userid Can anyone explain what exactly these commands do ? How are they different from each other ? | I am logging in as root in my shell and typing who and this is the output. whoroot tty1 2014-08-25 14:01 (:0)root pts/0 2014-09-05 10:22 (:0.0)root pts/3 2014-09-19 10:08 (xxx.xxx.edu) It effectively shows all the users that have established a connection. ssh ramesh@hostname Running who again will result in another entry for the user ramesh. whoroot tty1 2014-08-25 14:01 (:0)root pts/0 2014-09-05 10:22 (:0.0)root pts/3 2014-09-19 10:08 (xxx.xxx.edu)ramesh pts/4 2014-09-19 12:11 (xxx.xxx.edu) Inside the root shell, I just do su ramesh and then run whoami . It will give me the current user, ramesh, as the output. Effectively, who gives the list of all users currently logged in on the machine and with whoami you can know the current user who is in the shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81580/"
]
} |
156,468 | Unfortunately, when I try to edit an XML, trying to read that dark blue against black is murder. I am amazed that Googling "joe editor change highlighting" returns nothing! Is it really impossible to change the colours, while using the binary that came w/the RPM?I'm using joe 3.1 | To be very specific and change the dark blue on black highlighting for PHP scripts in Joe: In Ubuntu 16.04 and with Joe 4.1-2, edit: /usr/share/joe/syntax/php.jsf And change: =Constant_sq blue To some other color of your choosing: =Constant_sq yellow As mentioned you'll need to edit the appropriate .jsf file for your scripting language of choice, and the affected variable. To fix XML recoloring, edit xml.jsf and change: =Tag blue to =Tag yellow. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84749/"
]
} |
156,473 | Having a look at the default users & groups management on some usual Linux distributions (respectively ArchLinux and Debian), I'm wondering two things about it and about the consequences of modifying the default setup and configuration. The default value for USERGROUPS_ENAB in /etc/login.defs seems to be "yes", which is reflected by the "By default, a group will also be created for the new user" that can be found in the useradd man, so each time a new user is created, a group is created with the same name and only this new user in. Is there any use to that or is this just a placeholder? I'm feeling like we are losing a part of the rights management as user/group/others by doing this. Would it be bad to have a group "users" or "regulars" or whatever you want to call it that is the default group for every user instead of having their own? Second part of my question, which is still based on what I've seen on Arch and Debian: there are a lot of users created by default (FTP, HTTP, etc.). Is there any use to them or do they only exist for historical reasons? I'm thinking about removing them but don't want to break anything that could use it, but I have never seen anything doing so, and have no idea what could. Same goes for the default groups (tty, mem, etc.) that I've never seen any user belong to. | Per-user groups I too don't see a lot of utility in per-user groups. The main use case is if a user wanted to allow "friends" access to their files, they can have the friend user added to their group. Few systems I've encountered actually use it this way. When USERGROUPS_ENAB in /etc/login.defs is set to "no", useradd adds all the created users to the group defined in /etc/default/useradd by the GROUP field. On most of distributions, this is set to the GID 100 which usually corresponds to the users group.This does allow you to have a more generic management of users. Then, if you need finer control, you can manually add these groups and add users to them that makes sense. Default created groups Most of them came about from historic reasons, but many still have valid uses today : disk is the group that owns most disk drive devices lp owns parallel port (and sometimes is configured for admin rights on cups) uucp often owns serial ports (including USB serial ports) cdrom is required for mounting privileges on a cd drive Some systems use wheel for sudo rights; some not etc. Other groups are used by background scripts. For example, man generates temp files and such when it's run; its process uses the man group for some of those files and generally cleans up after itself. According to the Linux Standard Base Core Specification though, only 3 users that are root, bin and daemon are absolutely mandatory . The rationale behind the other groups is : The purpose of specifying optional users and groups is to reduce the potential for name conflicts between applications and distributions. So it looks as it is better to keep these groups in place. It's theorically possible to remove them without breakage, although for some, "mysterious" things may start to not work right (eg, some man pages not rendering if you kill that group, etc). It doesn't do any harm to leave them there, and it's generally assumed that all Linux systems will have them. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84744/"
]
} |
156,477 | I have an Ubuntu 14.04 server installation which acts as an LXC host.It has two users: user1 and user2. user1 owns an unprivileged LXC container, which uses a directory (inside /home/user1/.local/...) as backing store. How do I make a full copy of the container for user2?I can't just copy the files because they are mapped with owners ranging from 100000 to 100000+something, which are bound to user1. Also, which I believe is basically the same question, how can I safely make a backup of my user1's LXC container to restore it later on another machine and/or user? | Per-user groups I too don't see a lot of utility in per-user groups. The main use case is if a user wanted to allow "friends" access to their files, they can have the friend user added to their group. Few systems I've encountered actually use it this way. When USERGROUPS_ENAB in /etc/login.defs is set to "no", useradd adds all the created users to the group defined in /etc/default/useradd by the GROUP field. On most of distributions, this is set to the GID 100 which usually corresponds to the users group.This does allow you to have a more generic management of users. Then, if you need finer control, you can manually add these groups and add users to them that makes sense. Default created groups Most of them came about from historic reasons, but many still have valid uses today : disk is the group that owns most disk drive devices lp owns parallel port (and sometimes is configured for admin rights on cups) uucp often owns serial ports (including USB serial ports) cdrom is required for mounting privileges on a cd drive Some systems use wheel for sudo rights; some not etc. Other groups are used by background scripts. For example, man generates temp files and such when it's run; its process uses the man group for some of those files and generally cleans up after itself. According to the Linux Standard Base Core Specification though, only 3 users that are root, bin and daemon are absolutely mandatory . The rationale behind the other groups is : The purpose of specifying optional users and groups is to reduce the potential for name conflicts between applications and distributions. So it looks as it is better to keep these groups in place. It's theorically possible to remove them without breakage, although for some, "mysterious" things may start to not work right (eg, some man pages not rendering if you kill that group, etc). It doesn't do any harm to leave them there, and it's generally assumed that all Linux systems will have them. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72956/"
]
} |
156,482 | Most files on a Linux system are normal files, i.e. they are saved on disk and reading from them just reads from a specified chunk of memory on the disk. How can I make something that behaves like a file in terms of being able to read from it as one would a normal file, but is actually returning programmatically generated data instead? As a concrete example, a file that downloads the current google.com and returns it, such that cat ~/myspecialfile would output the contents of google.com to stdout? | As the other answers have indicated, you can do part of what you've asked for using named pipes. For a complete solution, you'd have to develop some kind of virtual filesystem that took the desired actions when a path on the virtual filesystem was accessed. There are a few approaches to doing this: Write a kernel-mode filesystem driver, like the procfs driver. Write a user-mode filesystem implementation, using FUSE for example. Write a program which provides an NFS server interface or another network filesystem protocol. Maybe a program that pretends to be a USB file-storage device or another piece of hardware. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156482",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84755/"
]
} |
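To make the named-pipe part of the answer above concrete: a FIFO plus a tiny refill loop already gives the cat ~/myspecialfile behaviour from the question. A sketch, assuming curl is installed:

    mkfifo ~/myspecialfile
    while :; do
        curl -s https://www.google.com/ > ~/myspecialfile   # blocks here until something reads the pipe
    done &

    cat ~/myspecialfile    # prints a freshly fetched copy of the page

The result is a stream rather than a seekable file, so programs that want to seek or stat a regular file will still need one of the filesystem approaches listed above.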
156,515 | In XFce4, when I launch applications, I don't want them to appear above the windows I am currently using, or to show alert windows above my current work. Rather, I'd like the windows to load in the background and to not steal focus. The exception would be an alert window or dialog belonging to the application I'm using, e.g. if I am currently using LibreOffice Calc, and it gives me a pop-up to tell me an error, that window can be brought to focus. Is there any way to prevent windows from stealing focus in Xfce4? | Settings/Window Manager/Focus Focus follows mouse (on) [shouldn't matter, but the focus stealing prevention setting seems to work better with this on] Automatically give Focus to newly created windows (off) Also Settings/Window Manager Tweaks/Focus Activate Focus Stealing Prevention (on) Honor standard ICCCM focushint (off) When a window raises itself, (do nothing) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
156,530 | I would like to convert every two rows to two columns using awk . input.txt: # Query: gi|11465907|ref|NC_001872.1| Chlamydomonas eugametos genome, complete genome# 0 hits found# Query: gi|11465922|ref|NC_000892.1| Pedinomonas minor genome, complete genome# 1 hits found output.txt: Chlamydomonas eugametos genome 0Pedinomonas minor genome 1 | Settings/Window Manager/Focus Focus follows mouse (on) [shouldn't matter, but the focus stealing prevention setting seems to work better with this on] Automatically give Focus to newly created windows (off) Also Settings/Window Manager Tweaks/Focus Activate Focus Stealing Prevention (on) Honor standard ICCCM focushint (off) When a window raises itself, (do nothing) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20346/"
]
} |
156,534 | I am having trouble getting the basics of Bash scripting down. Here's what I have so far: #!/bin/bashFILES="/home/john/my directory/*.txt"for f in "${FILES}"do echo "${f}"done All I want to do is list all the .txt files in a for loop so I can do stuff with them. But the space in the my directory and the asterisk in *.txt just aren't playing nicely. I tried using it with and without double quotes, with and without curly braces on variable names and still can't print all the .txt files. This is a very basic thing, but I'm still struggling because I'm tired and can't think straight. What am I doing wrong? I've been able to successfully apply the script above if my FILES don't have a space or an asterisk... I had to experiment with or without the use of double quotes and braces to get it to work. But the moment I have both spaces and an asterisk, it messes everything up. | Inside quotes, the * will not expand to a list of files. To use such a wildcard successfully, it must be outside of quotes. Even if the wildcard did expand, the expression "${FILES}" would result in a single string, not a list of files. One approach that would work would be: #!/bin/bashDIR="/home/john/my directory/"for f in "$DIR"/*.txtdo echo "${f}"done In the above, file names with spaces or other difficult characters will be handled correctly. A more advanced approach could use bash arrays: #!/bin/bashFILES=("/home/john/my directory/"*.txt)for f in "${FILES[@]}"do echo "${f}"done In this case, FILES is an array of file names. The parens surrounding the definition make it an array. Note that the * is outside of quotes. The construct "${FILES[@]}" is a special case: it will expand to a list of strings where each string is one of the file names. File names with spaces or other difficult characters will be handled correctly. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/156534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56021/"
]
} |
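A small caveat that applies to both versions above: if no .txt file exists, the unquoted glob is left as the literal string *.txt and the loop runs once with that bogus name. In bash this can be switched off. A sketch:

    shopt -s nullglob      # unmatched globs expand to nothing instead of themselves
    for f in "/home/john/my directory/"*.txt; do
        echo "$f"
    done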
156,549 | I am taking the Linux Foundation's Introduction to Linux course . Some of the terminology seems to overlap or contradict, especially when I try to supplement the course material with other sources, such as TLDP and Wikipedia . Is a "Display Manager" the same thing as a "Session Manager"? Display manager: Program that initiates a windowing system session by launching the windowing system and usually asking for a username and password. Session manager: Starts and maintains the components of the graphical session. Likewise, is a "Windowing system" the same thing as a "Window manager"? Windowing system: Software which provides the key elements of the GUI for high-level software to use. Provides applications with a (usually) rectangular, resizeable surface to present its GUI to the user. Window manager: Controls the placement and movement of windows, window chrome, and controls. And just to be sure about X: From what I gather it seems that "X Window System" is a windowing system for bitmap displays, "X11" is the current protocol version for the X Window System, and "X.Org Server" is the reference implementation of the X11 protocol. Is that correct? | Here's a very short rough characterization: Display manager: The program that provides you a graphical login and then starts your session. Runs as root or dedicated user. Session manager: The program that actually controls your session. Runs under your account. Windowing system: The complete GUI drawing/control system. Describes not a component in itself, but all components together. Window manager: The program that determines where windows are placed, what decorations (frame, close/iconify/menu buttons, etc.) they get and how they get/lose focus. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
156,568 | Long story short, I destroyed /var and restored it from backup - but the backup didn't have correct permissions set, and now everything in /var is owned by root. This seems to make a few programs unhappy. I've since fixed apt failing fopen on /var/cache/man as advised here as well as apache2 failing to start (by giving ownership of /var/lib/apache2 to www-data ). However, right now the only way to fix everything seems to be to manually muck around with permissions as problems arise - this seems very difficult as I would have to wait for a program to start giving problems, establish that the problem is related to permissions of some files in /var and then set them right myself. Is there an easy way to correct this? I already tried reinstalling (plain aptitude reinstall x ) every package that was listed in dpkg -S /var , but that didn't work. | Actually apt-get --reinstall install package should work, with files at least: ➜ ~ ls -l /usr/share/lintian/checks/version-substvars.desc -rw-r--r-- 1 root root 2441 Jun 22 14:19 /usr/share/lintian/checks/version-substvars.desc➜ ~ sudo chmod +x /usr/share/lintian/checks/version-substvars.desc➜ ~ ls -l /usr/share/lintian/checks/version-substvars.desc -rwxr-xr-x 1 root root 2441 Jun 22 14:19 /usr/share/lintian/checks/version-substvars.desc➜ ~ sudo apt-get --reinstall install lintian (Reading database ... 291736 files and directories currently installed.)Preparing to unpack .../lintian_2.5.27_all.deb ...Unpacking lintian (2.5.27) over (2.5.27) ...Processing triggers for man-db (2.6.7.1-1) ...Setting up lintian (2.5.27) ...➜ ~ ls -l /usr/share/lintian/checks/version-substvars.desc-rw-r--r-- 1 root root 2441 Jun 22 14:19 /usr/share/lintian/checks/version-substvars.desc Now, you probably didn't get all the packages that have files on your /var directory, so its better to find them all : ➜ ~ find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | wc -l 460 In my case, it accounts for 460 paths that have a package, this is actually less if you consider that the same package can have several paths, which with some post processing we can find out that are ~122: ➜ ~ find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | cut -d : -f 1 | sort | uniq | wc -l122 This of course counts several package that has the same path, like wamerican, aspell-en, ispanish, wspanish, aspell-es, myspell-es . This is easily fixable: ➜ ~ find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | cut -d : -f 1 | sed 's/, /\n/g' | sort | uniq | wc -l107 So, I have 107 package that have any kind of file in /var or subdirectories. You can reinstall them using: sudo apt-get --reinstall install $(find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | cut -d : -f 1 | sed 's/, /\n/g') This should fix the permissions. Now, there's another option, find a good installation and copy the file permissions over your installation with: chmod --recursive --reference good/var bad/var | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36553/"
]
} |
156,579 | I have done the following at command line: $ text="name with space"$ echo $textname with space I am trying to use tr -d ' ' to remove the spaces and have a result of: namewithspace I've tried a few things like: text=echo $text | tr -d ' ' No luck so far so hopefully you wonderful folk can help! | In Bash, you can use Bash's built in string manipulation. In this case, you can do: > text="some text with spaces"> echo "${text// /}"sometextwithspaces For more on the string manipulation operators, see http://tldp.org/LDP/abs/html/string-manipulation.html However, your original strategy would also work, your syntax is just a bit off: > text2=$(echo $text | tr -d ' ')> echo $text2sometextwithspaces | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/156579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82325/"
]
} |
156,606 | I'm new to the gzip command, so I googled some commands to run so I can gzip an entire directory recursively. Now while it did that, it converted each of my files to the gzip format and added .gz to the end of each of their filenames.. is there a way to ungzip them all, one by one? | There are essentially two options for going through the whole directory tree: Either you can use find(1) : find . -name '*.gz' -exec gzip -d "{}" \; or if your shell has recursive globbing you could do something like: for file in **/*.gz; do gzip -d "$file"; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72959/"
]
} |
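A note on the second variant above: in bash the recursive ** glob is disabled by default and needs bash 4 or later. A sketch:

    shopt -s globstar
    for file in **/*.gz; do gzip -d "$file"; done

In zsh the ** form works out of the box.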
156,607 | I'm trying to write a script that can monitor a process's CPU usage over an interval (to create a graph). So far, this is the command I'm using ps -p $PROCID -o cputime,etimes My only concern is that the output of cputime appears to be [dd]hh:mm (or something similar, can't remember off the top of my head now) Is there a way to format cputime in seconds, kind of like etime -> etimes to get elapsed time in seconds? Edit: This is the output that I'm currently receiving 2-03:01:33 2653793 I'd like the first parameter to be formatted in seconds, not days-hours:minutes:seconds. | This converts the first time to seconds: ps -p $PROCID -o cputime,etimes | awk -F'[: ]+' '/:/ {t=$3+60*($2+60*$1); print t,$NF}' As an example, the ps command produces: $ ps -p 5403 -o cputime,etimes TIME ELAPSED01:33:38 1128931 The awk command processes that and returns: ps -p 5403 -o cputime,etimes | awk -F'[: ]+' '/:/ {t=$3+60*($2+60*$1); print t,$NF}'5618 1128931 Explanation -F'[: ]+' This tells awk to treat both colons and spaces as field separators. This way, the hours, minutes, and seconds appear as separate fields. /:/ {t=$3+60*($2+60*$1); print t,$NF} The initial /:/ restricts the code to working only on lines that include a colon. This removes the header lines. The number of seconds is calculated from hours, minutes, seconds via t=$3+60*($2+60*$1) . The resulting value for t is then printed along side with the elapsed time. Handling days If ps produces days,hours,minutes,seconds, as in: 2-03:01:33 Then, use this code instead: ps -p $PROCID -o cputime,etimes | awk -F'[-: ]+' '/:/ {t=$4+60*($3+60*($2+24*$1)); print t,$NF}' If days may or may not be prepended to the output, then use this combination command: ps -p $PROCID -o cputime,etimes | awk -F'[-: ]+' '/:/ && NF==5 { t=$4+60*($3+60*($2+24*$1)); print t,$NF} /:/ && NF==4 {t=$3+60*($2+60*$1); print t,$NF}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84829/"
]
} |
156,615 | I am trying to write a shell script to generate all possible words in the English language less than 20 characters. I doubt there is any truly efficient way to do this other than to brute force some of it. Clearly this is going to generate a lot of gibberish but through the complete set, if the scope is even computable in a decent amount of time, I hope to explore aspects of the human language. Also if anyone knows how to compute or tell me what the space is I'd love to know. I guess this is basic combinatorics or permutations but I don't know which is which. 26 letters. 20 or 25 length. I'm sure 25 provides enough complexity to come up with some good words but this is bound to increase computation dramatically. In the set no doubt would be aaaaaaadfsf and also bungology. | This converts the first time to seconds: ps -p $PROCID -o cputime,etimes | awk -F'[: ]+' '/:/ {t=$3+60*($2+60*$1); print t,$NF}' As an example, the ps command produces: $ ps -p 5403 -o cputime,etimes TIME ELAPSED01:33:38 1128931 The awk command processes that and returns: ps -p 5403 -o cputime,etimes | awk -F'[: ]+' '/:/ {t=$3+60*($2+60*$1); print t,$NF}'5618 1128931 Explanation -F'[: ]+' This tells awk to treat both colons and spaces as field separators. This way, the hours, minutes, and seconds appear as separate fields. /:/ {t=$3+60*($2+60*$1); print t,$NF} The initial /:/ restricts the code to working only on lines that include a colon. This removes the header lines. The number of seconds is calculated from hours, minutes, seconds via t=$3+60*($2+60*$1) . The resulting value for t is then printed along side with the elapsed time. Handling days If ps produces days,hours,minutes,seconds, as in: 2-03:01:33 Then, use this code instead: ps -p $PROCID -o cputime,etimes | awk -F'[-: ]+' '/:/ {t=$4+60*($3+60*($2+24*$1)); print t,$NF}' If days may or may not be prepended to the output, then use this combination command: ps -p $PROCID -o cputime,etimes | awk -F'[-: ]+' '/:/ && NF==5 { t=$4+60*($3+60*($2+24*$1)); print t,$NF} /:/ && NF==4 {t=$3+60*($2+60*$1); print t,$NF}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84835/"
]
} |
156,616 | I am trying to copy a file from inside a bunch of folders to the current directory I am in.Playing around with the terminal, I see that when I specify the entire location it works: joostin@ubuntu:~$ cp ~/unixstuff/vol/examples/tutorial/science.txt . But when I go into the unixstuff folder and try to bring it into the current directly I get an error. Any idea what is going on? joostin@ubuntu:~$ cd unixstuffjoostin@ubuntu:~/unixstuff$ cp /vol/examples/tutorial/science.txt .cp: cannot stat ‘/vol/examples/tutorial/science.txt’: No such file or directory | There is no such directory /vol, but it is vol (without slash), so try just cp vol/examples/tutorial/science.txt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84836/"
]
} |
156,637 | I have a docker container running uhttpd and serving a static HTML page. How can I dynamically insert the hostname of the container into the static HTML page? I want to keep the container as lightweight as possible. | Why not just have a command that runs as part of the container's startup that generates the static HTML page with the hostname within it. $ cat <<EOF > /path/to/var/www/hostname.html<html><body><p>hostname is: $(hostname)</p></body></html>EOF This command can be placed in /etc/rc.d/rc.local assuming you're using a SysV style of startup scripts. If you're using systemd you can also do the same, but you'll need to enable the service: $ sudo service rc-local start This will mark it to run, to make it run per startup: $ sudo systemctl enable rc-local If you're using something else, such as Upstart, there are equivalent methods for doing the same things above. References [Solved] systemd services to replace /etc/rc.local{,.shutdown} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156637",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
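A Docker container normally has no SysV or systemd init running, so the more usual place for this is the image's entrypoint. A sketch — the uhttpd flags and the /www path are assumptions to adapt to your image:

    #!/bin/sh
    # entrypoint.sh: write the page, then hand control to the web server
    printf '<html><body><p>hostname is: %s</p></body></html>\n' "$(hostname)" > /www/index.html
    exec uhttpd -f -p 80 -h /www    # -f keeps uhttpd in the foreground so the container stays up

Pointing the image's ENTRYPOINT (or CMD) at this script regenerates the page on every start, so each container reports its own hostname.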
156,652 | I have a script that "generates" a sequentially named image (e.g. img_001.jpg) and saves it in a fixed directory, e.g. ~/Documents/Images. After the file has been created, I'd like to display the folder with the file selected , i.e. similar to how Chrome and Firefox will open the directory of a downloaded file with it already selected. Apparently my Linux Mint edition uses Caja. I tried, caja $filename But Caja decides to actually open the file using the default application. The caja help isn't very useful and I've looked everywhere but can't find any similar questions. Hopefully I'm just using the wrong search terms and Caja does actually support something as basic as this? | This command work fine for me: dbus-send --session --type=method_call --dest="org.freedesktop.FileManager1" "/org/freedesktop/FileManager1" "org.freedesktop.FileManager1.ShowItems" array:string:"file:///etc/hosts" string:"" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84856/"
]
} |
156,698 | Consider the following script: #!/bin/bashset -o pipefailset -o historytrapper() { func="$1" ; shift for sig ; do trap "$func $sig" "$sig" done}err_handler () { case $2 in INT) stop_received=1 ;; TSTP) ;; ERR) if [[ $2 != "INT" ]]; then # for some reason, ERR gets triggered on SIGINT code=$? if [ $code -ne 0 ]; then echo "Failed on line $1" echo "$BASH_COMMAND returned $?" echo "Content of variables at the time of failure:" echo "$(set -o posix; set)" exit 1 fi fi ;; esac}main() { ping -c 5 www.google.com # this is a test to see if INT interrupts main() # do a bunch of stuff, I mean a real bunch of stuff, like check some # files, do some processing, connect to a database, # do more processing, move some files around, you get the drift}exec > >(tee -a my.log)exec 2>&1trapper 'err_handler $LINENO' INT TSTP ERRwhile maindo if [[ "$stop_received" == "1" ]]; then break fi setsid sleep 2 & waitdonetrap ERR What I am trying to accomplish is to run the script in an infinite loop until either the function main() returns some non zero value, i.e. some error occurred, or SIGINT is received. However, I don't want SIGINT to stop main() from executing, in other words, if the script receives a SIGINT, it should wait for main() to finish, then exit nicely. But when I hit CTRL+C, I can see that ping is interrupted. At the moment I commented out everything under main() just to see if this works. Since ping gets interrupted, I am assuming other commands under main() would also get interrupted. When ping is interrupted, the processing jumps to the line where I check if $stop_received=1 and then the loop breaks and the script quits. If I replace the break with just an echo, then the script just continues on to the next iteration of the while main loop. How can I stop SIGINT from interrupting the currently running command(s)? Since my script does a bunch of stuff, including DML statements in a database, interrupting main() would cause lot of grief. Secondly, the script does not trap ctrl+z either. Or rather, the script just gets stuck on ctrl+z requiring a kill pid to terminate. I assumed that sleep being a child of bash as opposed to the script itself, ctrl+z would cause the script to pause, leaving sleep in limbo land. Hence the setsid and wait on sleep, but it still hangs. thanks. | There are several ways you can cut off the effect of Ctrl + C : Change the terminal setting so that it doesn't generate a signal. Block the signal so that it is saved for later delivery, when the signal becomes unblocked. Ignore the signal, or set a handler for it. Run subprocesses in a background process group. Since you want to detect that Ctrl + C has been pressed, ignoring the signal is out. You could change the terminal settings, but then you would need to write custom key processing code. Shells don't provide access to signal blocking. You can however isolate subprocesses from receiving the signal automatically by running them in a separate process group. Interactive shells run background commands in a separate process group by default, but non-interactive shells run them in the same process group, and all processes in the foreground process group receive a signal from terminal events. To tell the shell to run background jobs in a separate process group, run set -m . Running setsid ping … is another way of forcing ping to run in a separate process group. set -minterrupted=trap 'echo Interrupted, but ping may still be running' INTset -mping … &while wait; [ $? 
-ge 128 ]; do echo "Waiting for background jobs"; doneecho ping has finished If you want Ctrl + Z to suspend a background process group, you'll need to propagate the signal from the shell. Controlling signals finely is a bit of a stretch for a shell script, and shells other than ATT ksh tend to be a little buggy when you reach the corner cases, so consider a language that gives you more control such as Perl, Python or Ruby. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83446/"
]
} |
156,700 | I understand that, for scheduling purposes, Linux processes have a "nice" value and a real-time priority value and that these can be explicitly altered with the nice and chrt commands. If the user does not explicitly set the real-time priority of a process, how is it set? | To quote Robert Love: The scheduler does not magically know whether a process is interactive. It requires some heuristic that is capable of accurately reflecting whether a task is I/O-bound or processor-bound. The most indicative metric is how long the task sleeps . If a task spends most of its time asleep it is I/O-bound . If a task spends more time runnable than sleeping, it is not interactive . This extends to the extreme; a task that spends nearly all the time sleeping is completely I/O-bound , whereas a task that spends nearly all its time runnable is completely processor-bound . To implement this heuristic, Linux keeps a running tab on how much time a process is spent sleeping versus how much time the process spends in a runnable state. This value is stored in the sleep_avg member of the task_struct . It ranges from zero to MAX_SLEEP_AVG , which defaults to 10 milliseconds. When a task becomes runnable after sleeping, sleep_avg is incremented by how long it slept, until the value reaches MAX_SLEEP_AVG . For every timer tick the task runs, sleep_avg is decremented until it reaches zero. So, I believe the kernel decides the scheduling policy based on the above heuristics. As far as I know, for real time processes, the scheduling policy could either be SCHED_FIFO or SCHED_RR . Both the policies are similar except that SCHED_RR has a time slice while SCHED_FIFO doesn't have any time slice. However, we can even change the real time process scheduling as well. You could refer to this question on how to change the real time process scheduling. References http://www.informit.com/articles/article.aspx?p=101760&seqNum=2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15243/"
]
} |
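As a small illustration of the last part of the answer, the chrt utility can inspect and change a running process's policy and real-time priority; the PID 1234 below is just a placeholder.

    chrt -p 1234               # show the current scheduling policy and priority of PID 1234
    sudo chrt -f -p 10 1234    # switch it to SCHED_FIFO with real-time priority 10
    sudo chrt -r -p 10 1234    # or to SCHED_RR, the round-robin policy with a time slice
    sudo chrt -o -p 0 1234     # back to the default SCHED_OTHER policy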
156,707 | I have been working on my laptop a lot lately, and I am accidentally clicking while typing. I know I could remove set mouse=a in my .vimrc , but sometimes I like using the mouse. What can I do to create a toggle function to toggle mouse support? | You can retrieve the value of an option by using its name with a & prepended. So a simple toggle function for the mouse option would be: function! ToggleMouse() " check if mouse is enabled if &mouse == 'a' " disable mouse set mouse= else " enable mouse everywhere set mouse=a endifendfunc This toggles between "no mouse" and "mouse in all modes". You can use it via :call ToggleMouse() PS: don't use something like this for options that are boolean, for these :set option! can be used to invert them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43077/"
]
} |
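If it helps, the function can be bound to a key in ~/.vimrc so the toggle is a single keystroke; <F12> is an arbitrary choice here.

    " after the ToggleMouse() definition
    nnoremap <F12> :call ToggleMouse()<CR>

Adding an echo of &mouse inside the function also gives visual feedback about the new state.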
156,719 | Is the ext2 filesystem good for the /boot partition? I set ext4 for the / (root) partition, but wasn't sure which filesystem to select for the /boot partition, so I just set ext2. Does it matter in this case? | It only matters if you're going to use the ancient GRUB Legacy; ext4 is only supported by GRUB2. ext2 is simple, robust and well-supported, which makes it a good choice for /boot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487383/"
]
} |
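For completeness, creating and mounting such a partition is short; /dev/sda1 and the label are only examples, and mkfs will of course erase whatever is on the device.

    mkfs.ext2 -L boot /dev/sda1    # format the boot partition as ext2, labelled "boot"
    # matching /etc/fstab entry
    LABEL=boot  /boot  ext2  defaults  1 2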
156,794 | There are two servers that I can access with two different VPN connections. I have managed to have both VPNs working at the same time on my machine (with a bit of routing rules). I want to do a scp <remote1>:some/file <remote2>:destination/folder from my laptop terminal. But when I try this, the scp command that is invoked on remote1 cannot find remote2, because they are not in the same network. Is it possible to force the scp command to pass through my laptop as a router? If I try with Nautilus (connect to server, both servers, then copy-paste) it works, but I'd like to do it from a terminal. | Newer versions of scp have the -3 option: "Copies between two remote hosts are transferred through the local host. Without this option the data is copied directly between the two remote hosts." | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/156794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59807/"
]
} |
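Applied to the question's example, and assuming the laptop's OpenSSH is recent enough to know the flag, this looks like:

    scp -3 user@remote1:some/file user@remote2:destination/folder/

All traffic then flows through the laptop, so both VPN routes are used; it is slower than a direct copy and, in many versions, -3 also disables the progress meter.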
156,797 | How do I find out what commands such as ls do? I have recently been trying to write my first script and have become stuck with command names. | Every standard command has a manual page: man ls opens the full documentation for ls , and whatis ls prints a one-line summary. To find out what kind of command a name refers to (alias, shell builtin or external program) and where it lives, use type ls or command -v ls . For shell builtins such as cd , use help cd . If you don't know a command's name yet, man -k keyword (the same as apropos keyword ) searches the manual page descriptions. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/156797",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84949/"
]
} |
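A few concrete invocations, using ls from the question as the example:

    man ls        # full manual page for ls
    whatis ls     # one-line description
    type ls       # is ls an alias, a shell builtin or an external program?
    help cd       # documentation for shell builtins such as cd
    man -k copy   # search the manual pages by keyword when you don't know the name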
156,859 | I read the following question ( Shell Script mktemp, what's the best method to create temporary named pipe? ) but I'm wondering whether it is preferable to use a temporary named pipe to transfer sensitive data between programs, as opposed to an unnamed/anonymous shell pipe. Specifically, I'm interested in whether this type of approach (from http://blog.kdecherf.com/2012/11/06/mount-a-luks-partition-with-a-password-protected-gpg-encrypted-key-using-systemd/ ) is safe: # Open the encrypted block device gpg --batch --decrypt $key_file 2>/dev/null | sudo $CRYPTSETUP -d - luksOpen $mount_device $key >& /dev/null || exit 3 In which cases could the LUKS keyfile be hijacked? | The command line you are suggesting is secure. All other things being equal, "normal" anonymous pipes (created with the pipe(2) system call or the shell's familiar | syntax) are always going to be more secure than named pipes, because there are fewer ways for anything else to get hold of either end of the pipe. For normal anonymous pipes, you can only read or write from the pipe if you already have a file descriptor for it, which means you must either be the process that created the pipe, have inherited it (directly or indirectly) from that process, or have had the file descriptor deliberately sent to you by another process through a socket. For named pipes, you can obtain a file descriptor to the pipe even if you don't have one already, simply by opening it by name. On operating systems like Linux that have /proc there is always the possibility that another process can peek into /proc/pid/fd and access file descriptors belonging to a different process, but this is nothing unique to pipes (of whatever kind); for that matter it can peek into another process's memory space too. The "peeker" must be running either as the same user as the subject or as root, so it's not a security problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/156859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82034/"
]
} |
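To make the comparison concrete, a named-pipe version of the same command would have to create a filesystem object first, which is exactly the extra exposure the answer describes. A cautious sketch (not a recommendation) would at least confine the FIFO to a private directory; the variables are the ones from the question's snippet.

    dir=$(mktemp -d)                     # mode 0700, only the owner can enter it
    mkfifo -m 600 "$dir/keypipe"
    gpg --batch --decrypt "$key_file" > "$dir/keypipe" 2>/dev/null &
    sudo "$CRYPTSETUP" -d "$dir/keypipe" luksOpen "$mount_device" "$key"
    wait                                 # let gpg finish before cleaning up
    rm -rf "$dir"

Even then, the anonymous pipe of the original command is simpler and leaves nothing on the filesystem to find.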
156,915 | I have a regular process that's not so important but will consume very much CPU power. I have another process which is really important, but it spends most of the time idle, but when it gets a job it really needs high computing power. I tried running with nice -20 ./low_priority_process and nice --20 ./high_priority_process but still the lower priority process consumes significant amount of CPU when the high priority process is in need. How can I run a process that will really yield or even auto-suspend when another process is using CPU power? | Have a look at cgroups , it should provide exactly what you need - CPU reservations (and more). I'd suggest reading controlling priority of applications using cgroups . That said, put the important yet often idle processes into group with allocated 95% of CPU and your other applications into another one with allocated 5% - you'll get (almost) all of the power for your jobs when needed, while the constantly power hungry process will only get 5% at most at those times. When the computational surges disappear all CPU performance will be thrown at the remaining processes. As a benefit, if you create a special cgroup (with minimal performance requirements) for processes like sshd , you'll be able to log in no matter what is trying to get all CPU it can - some CPU time will be reserved for sshd . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317/"
]
} |
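A minimal sketch with the libcgroup tools; the group names and the 95/5 split mirror the answer, and the exact steps vary between distributions and cgroup versions.

    sudo cgcreate -g cpu:/important
    sudo cgcreate -g cpu:/background
    sudo cgset -r cpu.shares=950 important      # ~95% of CPU under contention
    sudo cgset -r cpu.shares=50  background     # ~5% under contention
    sudo cgexec -g cpu:important  ./high_priority_process &
    sudo cgexec -g cpu:background ./low_priority_process &

cpu.shares only takes effect when the groups actually compete, so the hungry process still gets the whole machine while the important one sits idle, which is exactly the behaviour asked for.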
156,931 | I would like to create an alias that does something like this: alias userYYY='sudo su userYYY; cd /a/path/that/only/userYYY/has/access' So then, from my command line, I am logged in with a sudo user, and I would like to type the alias userYYY so that my shell is now logged in as userYYY and pwd is /a/path/that/only/userYYY/has/access . How can I do that? This userYYY is for running some processes, and there must be anything in its home. Hence, I tried changing its $HOME using: sudo usermod -m -d /a/path/that/only/userYYY/has/access userYYY And then from my shell with my sudoer file I did sudo su userYYY . But that didn't work. The only thing that worked was sudo su -l userYYY , but that opened a new bash inside my original shell ( -bash-4.1$ .... ). In summary, what I want is to simply avoid having to write 2 lines in my shell: sudo su userYYY followed by cd /a/path/that/only/userYYY/has/access Any ideas? | alias userYYY='sudo su userYYY -c "cd /a/path/that/only/userYYY/has/access; /bin/bash"' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36945/"
]
} |
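Usage is then a single word; the session below is only illustrative.

    $ userYYY
    $ pwd
    /a/path/that/only/userYYY/has/access

Using exec /bin/bash in the -c string would save one shell process; the behaviour is otherwise the same.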
156,938 | I am learning Docker, and I quite like it. However, I don't understand why Docker needs root privileges for making containers, reading logs, and so on. I have read some articles like this one https://docs.docker.com/articles/security/ but all I see there is "docker needs root privileges, because it can have access to root folders". Well, I wouldn't mind running Docker as non-root and giving it access only to non-root, user-owned folders in the outside system. Why is that a problem? | Some "cool" Docker features like port binding, mounting filesystems etc. strictly require the docker.io daemon to be run with super-user privileges. However, you can use the docker command-line tool without root privileges if the docker.io daemon is listening on a network port or its Unix socket is readable and writable by your user. That is a serious security hole, though, and should not normally be done. Additional details about Docker security: https://docs.docker.com/articles/security/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10393/"
]
} |
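For what it's worth, the usual way to run the client without typing sudo is membership in the group that owns the daemon's Unix socket, with the answer's caveat in mind: a member of that group is effectively root.

    sudo groupadd docker            # if the group does not exist yet
    sudo usermod -aG docker "$USER"
    # log out and back in, then:
    docker ps                       # talks to /var/run/docker.sock without sudo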
156,962 | I log into a system as root via ssh. How do I become a normal user or another user on the command line? | As root, you may issue su - username . You will not be prompted for a password. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/156962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
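Two common variations, depending on whether you need the user's login environment or just one command:

    su - username              # switch user and load username's login environment
    su username -c 'whoami'    # run a single command as username, then return
    exit                       # leave the user's shell and go back to root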
157,007 | I am a moderately new linux user. I changed my PC, and started using CentOS 7 from CentOS 6. So I attached my previous hard disk to my new pc to take backup of my files. Now, copying the files (and preserving the permissions and all), the files shows owner as 500 (I guess this is my previous UID). Is there any way I can change them to my new user name? I want to exclude the files which shows some other owners like 501. Edit: Example: ls -ltotal 3-rw-rw-r--. 1 500 500 210 Jan 10 2012 about.xmldrwxr-xr-x. 2 500 500 4096 May 15 2013 apachedrwxrwxr-x. 2 500 500 4096 Dec 9 2012 etc Now, I can do chown -R xyz:xyz . to make them look like: ls -ltotal 3-rw-rw-r--. 1 xyz xyz 210 Jan 10 2012 about.xmldrwxr-xr-x. 2 xyz xyz 4096 May 15 2013 apachedrwxrwxr-x. 2 xyz xyz 4096 Dec 9 2012 etc But I just want to know if there are some kind of commands which can map user 500 to user "xyz". Thank you. | If I understand you correctly, you want to change the owner of all files inside some directory (or the root) that are owned by user #500 to be owned by another user, without modifying files owned by any other user. You're in that situation because you've copied a whole directory tree from another machine, where files inside that tree were owned by many different users, but you're only interested in updating those that were owned by "your" user at the moment, and not any of the files that are owned by user #501 or any other. GNU chown supports an option --from=500 that you can use in combination with the -R recursive option to do this: chown -R --from=500 yourusername /path/here This will be the fastest option if you have GNU chown , which on CentOS you should. Alternatively can use find on any system: find /path/here -user 500 -exec chown yourusername '{}' '+' find will look at every file and directory recursively inside /path/here , matching all of those owned by user #500. With all of those files, it will execute chown yourusername file1 file2... as many times as required. After the command finishes, all files that were owned by user #500 will be owned by yourusername . You'll need to run that command as root to be able to change the file owners. You can check for any stragglers by running the same find command without a command to run: find /path/here -user 500 It should list no files at this point. An important caveat: if any of the files owned by user #500 are symlinks, chown will by default change the owner of the file the symlink points at, not the link itself. If you don't trust the files you're examining, this is a security hole. Use chown -h in that case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/157007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85078/"
]
} |
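One way to sanity-check the result is to look at the numeric owners before and after; the paths and about.xml are the question's own examples.

    ls -ln about.xml                 # shows the numeric UID/GID, e.g. 500 500 before the change
    find /path/here -user 500        # should print nothing once everything has been re-owned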