178,337
I am having a weird problem: I am not able to ssh to a docker container with IP address 172.17.0.61. I am getting the following error:
$ ssh 172.17.0.61
ssh: connect to host 172.17.0.61 port 22: Connection refused
My Dockerfile does contain an openssh-server installation step:
RUN apt-get -y install curl runit openssh-server
And also a step to start ssh:
RUN service ssh start
What could be the issue? When I enter the container using nsenter and start the ssh service, I am able to ssh in. But the ssh server doesn't seem to start when the container is created. What should I do?
Container vs. Image: The RUN statement is used to run commands when building the docker image. With ENTRYPOINT and CMD you can define what to run when you start a container using that image. See the Dockerfile Reference for explanations of how to use them. Services: There is no preinstalled init system in the containers, so you cannot use service ... start in a container. Consider starting the process in the CMD statement as a foreground process, or use an init system like Phusion or Supervisord.
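For instance, a minimal sketch of the foreground-CMD approach (assuming openssh-server is already installed in the image; the mkdir line works around Debian-based images shipping without sshd's runtime directory):
RUN mkdir -p /var/run/sshd
CMD ["/usr/sbin/sshd", "-D"]
The -D flag keeps sshd in the foreground, so the container stays alive exactly as long as the daemon does.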
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178337", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98632/" ] }
178,367
Since I keep my bash history under source control, I've noticed that sometimes sizable segments of the history end up repeated, sometimes hours or days after their original execution. I use Debian 7.7 and have the following config:
shopt -s histappend
export HISTCONTROL=ignoreboth:erasedups
export HISTSIZE=1000000
export HISTFILESIZE=1000000
I suspect there is some interaction between multiple terminals, histappend, and erasedups. I'm answering this question myself, but if someone disagrees or has more detail I would like other answers! Edit: I believe this is not a duplicate -- there are many questions asking how to ignore duplicate entries; I'm asking about getting rid of a buggy behavior around mistakenly duplicated history segments. (Whole chunks repeated that I had actually executed only once.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3071/" ] }
178,383
I'm starting to read Linux Systems Programming, 2nd Ed., and I was curious about the file table that is a "per-process list of open files." Is the file table like a table in a SQL db with the fds used as primary keys? If so, does this mean that the entries are repeated, or is it split into separate tables and normalized? Or does it work entirely differently since we're dealing with straight C/assembly? If so, what data structures are used? Where in the source is this subsystem defined? Most of the reason I'm doing this is to understand both C and Linux better. If I know where to find it, that would give me a better idea.
Since this is C, "table" is likely short for "array of structures". You probably want to read "Understanding the Linux Kernel" or "Linux Kernel Development". Or do it the hard way and read the source; good places to start might be include/linux/fdtable.h and include/linux/fs.h.
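You can also peek at a process's open-file table indirectly through procfs; for the current shell, for example:
ls -l /proc/$$/fd
Each entry's name is a file descriptor (the index into the per-process array) and links to the open file it refers to.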
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53613/" ] }
178,411
I have an application which takes as input attributes in double quotes embedded in single quotes. Take for example this correct command:
command -p 'cluster="cl1"'
In order to automate it, I created a bash file using $CLUSTER as a variable. What should my command look like? In other words, what should I put instead of cl1? Please note that if I modify the above command, it won't be accepted. For instance:
command -p "cluster=cl1"
is not accepted.
It looks like your command is maybe setting environment variables based on arguments given it on the command-line. It may be you can do:
CLUSTER=cl1; cluster=$CLUSTER command ...
...and set its environment for it at invocation. Otherwise, shell quotes typically delimit arguments or escape other special shell characters from shell interpretation. You can contain (and therefore escape) different kinds of shell-quotes within other kinds based on various rules:
"''''" - a soft-quoted string can contain any number of hard-quotes.
"\"" - a \ backslash can escape a " soft-quote within a " soft-quoted string. In this context a \\ backslash also escapes itself, the \$ expansion token, and newlines as noted below, but is otherwise treated literally.
"${expand} and then some" - a soft-quoted string can contain an interpreted shell $ expansion.
'"\' - a ' hard-quoted string can contain any character other than a ' hard-quote.
\ - an unquoted backslash will escape any following character for literal interpretation - even another backslash - excepting a newline. In a backslash-newline case both the backslash and the newline are completely removed from the resulting interpreted command.
${parameter+expand "$parameter"} - quotes resulting from a shell expansion almost never serve as delimiter markers, excepting a few special cases. I won't venture to describe these further here.
I consider it odd that any application would interpret quotes in its command-line args. Such a practice doesn't make a lot of sense in that - for shells, at least - the primary purpose of a quote is generally to delimit an argument. At invocation, however, arguments are always already delimited with NUL (\0) characters and so a quote cannot serve much purpose. Even a shell will typically only bother to interpret quotes in one of its invocation arguments when it is called with a -c switch - which denotes that its first operand is actually a shell script that it should run upon invocation. This is a case of twice-evaluated input. All that said, you can do a number of things to pass literal quotes via arguments on the command-line. For example:
CLUSTER='"cl1"'; command -p "cluster=$CLUSTER"
As I noted in a comment before, you can contain the " quotes within an expansion that is itself " quoted.
CLUSTER=cl1; command -p "cluster=\"$CLUSTER\""
You can escape the " with a \ backslash within the " quoted string.
CLUSTER=cl1; command -p cluster='"'"$CLUSTER"'"'
You can alternate and concatenate quoting styles to arrive at your desired end result, as @jimmij notes above.
CLUSTER=cl1; ( set -f; IFS=; command -p cluster=\"$CLUSTER\" )
You can disable both filename generation and $IFS splitting - thereby avoiding the need to quote the $expansion at all - and so only quote the quotes. This is probably overkill. Last, there is another type of shell-quote that might be used. As I noted before, the sh -c "$scriptlet" form of shell invocation is often used to provide a shell's script on the command-line. When $scriptlet gets complicated though - such as when quotes must contain other quotes - it can often be advantageous to use a here-document and sh -s instead - where the shell is specifically instructed to assign all following operands to the positional parameters as it would do in a -c case, and yet to also take its script from stdin. If your command must interpret quotes in this way then I would consider it better if it could do so in a file input.
For example:
CLUSTER=cl1
command --stdin <<-SCRIPT
    cluster="$CLUSTER"
SCRIPT
If you do not quote the delimiter of a <<here-document then all of its contents are treated almost exactly as if they were " soft-quoted - except that " double-quotes themselves are not treated specially. And so if we run the above with cat instead:
CLUSTER=cl1
cat <<-SCRIPT
    cluster="$CLUSTER"
SCRIPT
...it prints...
cluster="cl1"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98733/" ] }
178,421
I am running Kali Linux, which has GNOME. I just updated with apt-get update && apt-get upgrade and that completed without incident; only a little while after did I realise that my bottom taskbar was missing (well, technically just blank/black). The one where running programs are displayed, as well as the 4 buttons that allow switching between workspaces. I have searched around for the last hour and tried a few things, such as gconftool --recursive-unset /apps/panel && killall gnome-panel and a couple of variations thereof. Nothing has worked so far. I also tried apt-get build-dep gnome-panel, which installed a ton of stuff to my surprise... but still, no fix. If I type gnome-panel into the terminal I get the message: Cannot register the panel shell: there is already one running. So clearly, it's not gone (anyway, the top bar with menus and shutdown options etc. is still there), it's just the bottom bar... I also noticed that while I say it's missing, it's technically still there, albeit just a black unresponsive bar across the bottom of the screen. So I guess my real question is, why is it not showing anything? And can anybody please help me out?
It turns out that for some reason the settings for my bottom panel (taskbar) had been altered. To access the menu for the panels (top & bottom bars), I needed to hold Alt and then right-click. This opened a little menu where I could select Add to panel..., which opens another menu of things you can add to the taskbar. What I needed to add was 'Window List', which solved my problem.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92090/" ] }
178,451
I am not a sysadmin and am trying to create a more or less secure web server (LAMP-based CentOS 7). I read several tutorials about setting up an initial CentOS 7 droplet and got everything running fine. However, I am struggling to understand some basic concepts from the articles I read and need some input from more qualified people, simply because I am unsure about side effects. Your input is much appreciated. Scenario: I am using cloud-init (User Data) on Digital Ocean to create & provision a new droplet. As far as I understand, cloud-init runs as system/root, creating the below settings, specifically in ssh_config for root:
- create a new user (let's call him admin for this scenario), in line with an ssh key for that user (ssh-authorized-keys)
- add that user to the group wheel and set sudo: ['ALL=(ALL) NOPASSWD:ALL']
- disable root login in /etc/ssh/sshd_config
- AllowUsers admin in /etc/ssh/sshd_config
- set PasswordAuthentication no in /etc/ssh/sshd_config
- set PubkeyAuthentication yes & RSAAuthentication yes in /etc/ssh/sshd_config
- configure firewalld and perform some other tasks
Questions: After connecting to my droplet through ssh as user admin, the related sshd_config is empty. I assume this is due to the fact that cloud-init runs as system/root when the droplet is created and cloudconfig runs.
1.1 Do I need to set the same settings here as I did upon droplet creation for the root user?
1.2 If so, what is the sense of multiple ssh_config files?
Looking at the cloud-init log file I found a public/private RSA key pair for root was created. The related password is mailed to me, as I decided not to provide an initial ssh key for the root user (just for testing purposes). However, running ls -a on /etc/ssh as the newly created user admin shows:
.   moduli      sshd_config       ssh_host_dsa_key.pub   ssh_host_ecdsa_key.pub   ssh_host_rsa_key.pub
..  ssh_config  ssh_host_dsa_key  ssh_host_ecdsa_key     ssh_host_rsa_key
Looking at ssh_host_rsa_key, for example, it contains the same ssh key that was created for the root user.
2.1 Why, and for what purpose, does the newly created user I called admin (not root) hold the same keys as root in his ssh folder?
2.2 Is that because I added him to the group wheel and sudo: ['ALL=(ALL) NOPASSWD:ALL']? Is that recommended?
2.3 What is the sense of disallowing root remote login through ssh if another user's account can get compromised and also holds all the keys needed to make a hacker a very lucky person? Is the purpose just to have a user whose name is harder to guess?
I understood that some actions still require sudo / root privileges. If logged in as admin, I can change to root using su root.
3.1 Since I disabled root login in /etc/ssh/sshd_config (that is for ssh only, right?) but the newly created user (admin) has the same rights and can easily switch to root, I am asking myself what can be done to secure that password better, or whether there is something such as two-factor auth that would add a level of security?
3.2 On the other hand, I don't understand how that can be a better level of security if a hacker who successfully gained control over the admin account could easily read the ssh keys for root (see previous topic above) and bypass any security layer?
In short: I liked a lot of what I was reading, but looking into the filesystem - specifically, after finding the root ssh keys (private & public) in the ssh folder of the user I created using cloud-init - I am a bit concerned I misunderstood something.
By the way, this is my cloud-init script:
#cloud-config
# log all cloud-init process output (info & errors) to a logfile
output: {all: ">> /var/log/cloud-init-output.log"}
# final_message written to log when cloud-init processes are finished
final_message: "System boot (via cloud-init) is COMPLETE, after $UPTIME seconds. Finished at $TIMESTAMP"
package_upgrade: true
users:
  - name: admin
    groups: wheel
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-dss AAAABBBBBCCCCDDDD...
runcmd:
  - sed -i -e 's/#Protocol 2/Protocol 2/g' /etc/ssh/sshd_config
  - sed -i -e 's/#LoginGraceTime 2m/LoginGraceTime 2m/g' /etc/ssh/sshd_config
  - sed -i -e 's/#PermitRootLogin yes/PermitRootLogin no/g' /etc/ssh/sshd_config
  - sed -i -e 's/PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
  - sed -i -e 's/#RSAAuthentication yes/RSAAuthentication yes/g' /etc/ssh/sshd_config
  - sed -i -e 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
  - sed -i -e '$aAllowUsers admin' /etc/ssh/sshd_config
  - service sshd restart
  # firewall
  - systemctl start firewalld
  - firewall-cmd --permanent --add-service=ssh
  - firewall-cmd --reload
  - systemctl enable firewalld
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95506/" ] }
178,460
I have a huge log file and want to grep the first occurrence of a pattern, and then find another pattern right after this occurrence. For example:
123
XXY
214
ABC
182
558
ABC
856
ABC
In my example, I would like to find 182 and then find the next occurrence of ABC. The first occurrence is simple:
grep -n -m1 "182" /var/log/file
This outputs:
5:182
How do I find the next occurrence of ABC? My idea was to tell grep to skip the first n lines (in the above example n=5), based on the line number of 182. But how do I do that?
With sed you can use a range and quit input at a single completion:
sed '/^182$/p;//,/^ABC$/!d;/^ABC$/!d;q'
Similarly, w/ GNU grep you can split the input between two greps:
{ grep -nxF -m1 182; grep -nxF -m1 ABC; } <<\IN
123
XXY
214
ABC
182
558
ABC
856
ABC
IN
... which prints...
5:182
2:ABC
... to signify that the first grep found an -F fixed-string, -x entire-line 182 match 5 lines from the start of its read, and the second found a similarly typed ABC match 2 lines from the start of its read - or 2 lines after the first grep quit reading at line 5. From man grep:
-m NUM, --max-count=NUM
    Stop reading a file after NUM matching lines. If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search.
I used a here-document for the sake of reproducible demonstration, but you should probably do:
{ grep ...; grep ...; } </path/to/log.file
It will also work with other shell compound-command constructs like:
for p in 182 ABC; do grep -nxFm1 "$p"; done </path/to/log.file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89041/" ] }
178,466
I have this:
#!/bin/bash
for file in `find . -type d`
do
    echo $file
done
If I have just one directory called My Directory, the output is
My
Directory
How do I fix this? The echo $file is just temporary. There will be other code in there operating on the directories.
Something like the following works ...
find . -type d | while read dir; do echo $dir; done
.
./my dir
Depending on what you're doing, you might be better using find's -print0 option and xargs -0. The code you've got takes the unquoted output from find and uses it as a list of words (split on whitespace) for for to iterate over.
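A sketch of the -print0 variant mentioned above, which also survives names containing newlines (bash):
find . -type d -print0 | while IFS= read -r -d '' dir; do printf '%s\n' "$dir"; done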
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61624/" ] }
178,467
People usually know after running man fdisk that they can search with: /foo (where foo is the string to be searched for). They can also use Up / Down and PgUp / PgDown to scroll up and down. What other hotkeys are supported by man ? Is there a list of them?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98473/" ] }
178,503
I just watched the trailer for The Hobbit, and a trailer for The Avengers, which both feature an increased framerate. A lot of the comments state that this isn't "true" 60fps since it was not shot at 60fps, but is actually a lower frame rate that has been interpolated. If this is the case, is there any way that I can convert some of my existing media in Linux with ffmpeg or avconv in the same way, in order to create this "illusion"? I can understand if higher framerates are not to others' tastes, but that is not the point of this post.
You can try
ffmpeg -i source.mp4 -filter:v tblend -r 120 result.mp4
or this, from https://superuser.com/users/114058/mulvya:
ffmpeg -i source.mp4 -filter:v minterpolate -r 120 result.mp4
There are also filters for motion blur.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64440/" ] }
178,520
So recently, some of my friends had been tampering with my files and data through the terminal, so I decided to secure it by doing two things. First, I added the following to my ~/.bash_profile for ALL commands: alias <command>="sudo <command>", to require a password to use any command. Second, I ran the command sudo visudo to edit the sudo settings and added Defaults:user_name timestamp_timeout=0 to the end of the file to make sudo require a password instantly for every new command (for those who don't know, with default settings, if you enter your password once to unlock sudo, sudo doesn't require a password for a couple of minutes). Anyway, I did all of this to secure my file system, but now newly opened tabs in the terminal require a password to get in, and once I enter the correct password, the tab doesn't unlock; I just get another password prompt. No matter how many times I enter my correct password, it keeps asking again (with default timeout 0):
Last login: Sat Jan 10 14:52:20 on ttys002
Password:
Password:
Password:
Password:
Password:
Essentially, I am locked out of my own terminal, unable to do anything. Also, I cannot edit the /etc/sudoers file because I do not have permission; I cannot even view my ~/.bash_profile because it is a hidden file. Is there any way to undo either of these two changes or to somehow access or unlock my terminal?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98800/" ] }
178,539
So I have a little script for running some tests.
javac *.java && java -ea Test
rm -f *.class
Now the problem with this is that when I run the script ./test, it will return a success exit code even if the test fails, because rm -f *.class succeeds. The only way I could think of getting it to do what I want feels ugly to me:
javac *.java && java -ea Test
test_exit_code=$?
rm -f *.class
if [ "$test_exit_code" != 0 ] ; then false; fi
But this seems like something of a common problem -- perform a task, clean up, then return the exit code of the original task. What is the most idiomatic way of doing this (in bash or just shells in general)?
I'd go with:
javac *.java && java -ea Test
test_exit_code=$?
rm -f *.class
exit "$test_exit_code"
Why jump around when exit is available? You could use a trap:
trap 'last_error_code=$?' ERR
For example:
$ trap 'last_error_code=$?' ERR
$ false
$ echo $?
1
$ echo $last_error_code $?
1 0
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14849/" ] }
178,564
I am using Arch Linux and I can do a system upgrade while surfing the net for example. How can the browser package be upgraded if the program itself is in use? Or the Kernel for example? Doesn't the executable need to be stopped in order to be replaced with a new one? Or does it happen at the next reboot?
The browser files on disc just get replaced. The running program (if not completely in memory) keeps the old executable files open until the program closes (but until then those are no longer the executable files you get via the directory entries). On the next restart of the browser you get the new version. No reboot is necessary, except for the program that gets loaded at boot (i.e. the kernel). There are programs that patch the kernel in place, not even requiring a reboot for that, but that is not the same thing.
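One way to watch this happen (a sketch; firefox is just an example process name, and pidof -s picks a single PID): after the package is upgraded, run
ls -l /proc/$(pidof -s firefox)/exe
and the symlink target is shown with a "(deleted)" suffix until the browser is restarted.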
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90714/" ] }
178,588
Is it possible to put more than one condition in an if statement?
if [ "$name" != "$blank" && "$age" == "$blank" ]; then
Is it possible? If not, how am I supposed to do that?
With the [ expression ] (POSIX standard) syntax you can use the following:
if [ "$name" != "$blank" ] && [ "$age" = "$blank" ]; then
    echo true
fi
But with the [[ expression ]] syntax you can combine both conditions:
if [[ $name != "$blank" && $age == "$blank" ]]; then
    echo true!
fi
Two advantages of [[ over [:
No word splitting or glob expansion will be done for [[, and therefore many arguments need not be quoted (with the exception of the right-hand side of == and !=, which is interpreted as a pattern if it isn't quoted).
[[ is easier to use and less error-prone.
Downside of [[: it is only supported in ksh, bash and zsh, not in plain Bourne/POSIX sh.
My references, and good pages comparing [[ and [:
bash FAQ
Security implications of forgetting to quote a variable in bash/POSIX shells
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98500/" ] }
178,626
By default unattended-upgrades runs with cron.daily, that is at most on a daily basis. This can be a lot of time for attackers. I'd like to run it every 4 hours, how can I do this?
Old question, but saying this for anyone who might have the same issue I had: on Ubuntu 16.04 (and probably other systemd systems) unattended-upgrades is not triggered by cron anymore. Instead it uses systemd timers. In order to modify the run time(s) and the randomized delay, you need to modify/override the timers. More information can be found here: https://github.com/systemd/systemd/issues/3233
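A sketch of such an override (timer names vary by release; apt-daily-upgrade.timer is an assumption that holds on newer Ubuntu): run sudo systemctl edit apt-daily-upgrade.timer and enter
[Timer]
OnCalendar=
OnCalendar=*-*-* 0/4:00:00
RandomizedDelaySec=30m
The empty OnCalendar= line clears the packaged schedule before the every-4-hours one is added.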
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98879/" ] }
178,638
I'm using Ubuntu 12.04, and when I right-click on my flash drive icon (in the Unity left bar) I get two options that have me confused: eject and safely remove. The closest I came to an answer was this forum thread, which concludes that (for a flash drive) they are both equal and also equivalent to using the umount command. However, this last assertion seems to be false. If I use umount from the console to unmount my flash drive, and then I use the command lsblk, I still see my device (with nothing under MOUNTPOINT, of course). On the other hand, if I eject or safely remove my flash drive, lsblk does not list it anymore. So, my question is, what would be the console command/commands that would really reproduce the behaviour of eject and safely remove?
If you are using systemd then use the udisksctl utility with its power-off option:
power-off
    Arranges for the drive to be safely removed and powered off. On the OS side this includes ensuring that no process is using the drive, then requesting that in-flight buffers and caches are committed to stable storage.
I would recommend first unmounting all filesystems on that usb. This can also be done with udisksctl, so the steps would be:
udisksctl unmount -b /dev/sda1
udisksctl power-off -b /dev/sda
If you are not using systemd then good old udisks should work:
udisks --unmount /dev/sda1
udisks --detach /dev/sda
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/178638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98894/" ] }
178,651
In Unix file systems directories are just special files with special directory structures that hold the child filename, filename size and inode reference number. The actual file metadata beyond this is normally stored in the inode itself. My question is. How does one read the actual special directory structure in its raw form instead of its interpreted form. Yes I know you can use ls to see the files there. That's not what I am looking for.
The simple answer is that what you want to do is to read the directory file, with a command like cat ., cat /etc, or cat mydir. Of course, since this is "raw" data, you'd want to use a program that's better suited to displaying non-ASCII data in a human-friendly way; e.g., hexdump or od. Unfortunately, as discussed in When did directories stop being readable as files?, most versions of Unix that were released in the past two decades or so don't allow this. So the answer to your question may be "find a version of Unix that still allows reading directories". AIX, most versions of BSD, and all but the most recent versions of Solaris may qualify. Finding a Linux that allows it may require the use of a time machine.
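On a system that still permits it, the dump is just an ordinary read (a sketch):
od -c /some/dir | head
On a current Linux kernel the same read() fails with EISDIR instead.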
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68215/" ] }
178,656
Sometimes I need to exec a single command which is in a shell script. I already know sed -n 'line_num p' can print that line. But how can I exec that printed out specific line as a command?
Try this:
sed -n 'line_num p' script.sh | bash
or, if the command does not contain whitespace,
"$(sed -n 'line_num p' script.sh)"
(Here script.sh stands for the script the line lives in; without a file operand, sed would read from stdin rather than your script.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
178,677
Recently I've been echoing short sentences to a tree_hole file. I was using echo 'something' >> tree_hole to do this job. But I was always worried about mistyping > instead of >>, since I do this often. So I made a global bash function of my own in my bashrc:
function th { echo "$1" >> /Users/zen1/zen/pythonstudy/tree_hole; }
export -f th
But I'm wondering if there is another simple way to append lines to the end of a file, because I may need to do that often on other occasions. Is there any?
Set the shell's noclobber option:
bash-3.2$ set -o noclobber
bash-3.2$ echo hello >foo
bash-3.2$ echo hello >foo
bash: foo: cannot overwrite existing file
bash-3.2$
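When you do intend to overwrite a file, bash lets you override noclobber for a single redirection with >|:
echo hello >| foo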
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/178677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
178,752
I know that to capture a pipeline's contents at an intermediate stage of processing we use tee, as in ls /bin /usr/bin | sort | uniq | tee abc.txt | grep out. But what if I don't want to redirect the contents after uniq to abc.txt, but to the screen (through stdout, of course), so that as an end result I'll have on screen the intermediate contents after uniq as well as the contents after grep?
Sometimes /dev/tty can be used for that...
ls /bin /usr/bin | sort | uniq | tee /dev/tty | grep out | wc
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/178752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96181/" ] }
178,761
I am using an if-else statement to search for keywords and display the results in the terminal. Here's an example of my code:
read finding
if ["$finding" != "" ]; then
    grep $finding information.txt
else
    echo "No such information in database."
fi
But the terminal does not display anything if I key in information that does not exist. I started shell about a week back, so I might need more explanation on how certain code works.
Add a space after [ (it is a command)
Use -n to test if the length of a string is nonzero, or -z to test if it is zero
Put double quotes around variables
So:
read finding
if [ -z "$finding" ]; then
    echo "You didn't enter anything"
else
    grep "$finding" information.txt
    if [ ! "$?" -eq 0 ]; then
        echo "No such information in database."
    fi
fi
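Since grep's exit status already says whether anything matched, the inner test can also be written without $? (a sketch):
if ! grep "$finding" information.txt; then
    echo "No such information in database."
fi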
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98500/" ] }
178,763
I have a script that uses diff -c and then puts the output in a text file. What I want is to remove the lines that do not have the "!" and display only the lines with the exclamation mark. Is this possible? Can the cut command do the trick? I wanted to use diff -c because it separates the files from directory1 and directory2. Example:
*** 1,3 ****
! 3856715355 /home/dir
  4294967277 /home/dir/file1    <--- remove this line
! 154272340 /home/dir/file5
--- 1,4 ----
! 1765342654 /home/dir
  4294967277 /home/dir/file1    <--- remove this line
! 803775803 /home/dir/file4
! 2580902204 /home/dir/file99
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98989/" ] }
178,768
I'm trying to follow an article to enable smart card login on my RHEL 6.6 desktop. When I attempt to install coolkey-1.1.0, it says ccid and pcsc-lite are needed. When I attempt to install ccid-1.3.9.7, it says pcsc-lite is needed. When I attempt to install pcsc-lite-1.5.2, it says pcsc-ifd-handler is required. When I search Google to see what rpm I need to get pcsc-ifd-handler, it looks like pcsc-ifd-handler is included in the ccid or pcsc-lite-openct package. I already tried to install ccid and it needed pcsc-lite, so then I tried to install pcsc-lite-openct-0.6.19 and it also says it needs pcsc-lite. So I'm kind of stuck in a cycle where the package that has pcsc-ifd-handler is dependent on pcsc-lite being installed, but pcsc-lite can't install until the package that contains pcsc-ifd-handler is installed.
Circular dependencies are usually resolved by picking one of the RPMs in question and just doing a --nodeps --force on the install, then proceeding on to the other one. I've read elsewhere that you can just give both files to rpm at the same time and that will work around it. Never done that myself, though.
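For example (the package file names here are illustrative):
rpm -ivh pcsc-lite-1.5.2*.rpm ccid-1.3.9.7*.rpm
rpm checks dependencies across the whole transaction, so mutually dependent packages installed together can satisfy each other.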
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94456/" ] }
178,770
I need to find all occurrences of AAsomeArbitraryStringBB and replace it with CCsomeArbitraryStringDD . So AAHelloBBTextAAByeByeBB becomes CCHelloDDTextCCByeByeDD. It's important to note that the replacement string contains part of the search string.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98995/" ] }
178,832
How do I add a condition in case such that if it does not detect the required conditions, it will execute the command? My code:
case $price in
[0-9] | "." | "$") echo "Numbers, . , $ Only" ;;
esac
This command will execute if it detects numbers, "." and "$". How do I change it so that if it does not detect those, the command will execute? Or are there other better commands to use for this?
Add a default case:
case $price in
[0-9] | "." | "$") true ;;
*) do-something ;;
esac
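If the intent is to run the command whenever $price contains anything besides digits, dots, and $, a negated character class does it in one branch (a sketch):
case $price in
*[!0-9.$]*) echo "Numbers, . , $ Only" ;;
esac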
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98500/" ] }
178,854
I want to list all the files whose name begins with uppercase:
[root@localhost /]# ls /usr/bin/[[:upper:]]*
/usr/bin/AtoB           /usr/bin/GenIssuerAltNameExt    /usr/bin/PKCS12Export
/usr/bin/AuditVerify    /usr/bin/GenSubjectAltNameExt   /usr/bin/POST
/usr/bin/BtoA           /usr/bin/GET                    /usr/bin/PrettyPrintCert
/usr/bin/CMCEnroll      /usr/bin/HEAD                   /usr/bin/PrettyPrintCrl
/usr/bin/CMCRequest     /usr/bin/HtFileType             /usr/bin/RSA_SecurID_getpasswd
/usr/bin/CMCResponse    /usr/bin/HttpClient             /usr/bin/RunSimTest
/usr/bin/CMCRevoke      /usr/bin/IBMgtSim               /usr/bin/TokenInfo
/usr/bin/CRMFPopClient  /usr/bin/Mail                   /usr/bin/X
/usr/bin/ExtJoiner      /usr/bin/OCSPClient             /usr/bin/Xorg
/usr/bin/GenExtKeyUsage /usr/bin/PKCS10Client
It works OK, but when applied to the current folder, it seems weird:
[root@localhost /]# ls ./[[:upper:]]*
snk321cq
[root@localhost /]# ls -lt snk321cq
ls: cannot access snk321cq: No such file or directory
[root@localhost /]# ls -lt ./snk321cq
ls: cannot access ./snk321cq: No such file or directory
Why does it display snk321cq? Actually there is no such file.
This file is under a directory matching the pattern, use: ls -d ./[[:upper:]]* By default, when passed a directory name as argument, ls displays its content, not its name. The -d option is disabling this feature. When using the [[:upper:]]* pattern, the shell is expanding it to every filename starting with an uppercase letter so ls receives the expanded directory name.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85056/" ] }
178,857
I am trying to run grep against a list of a few hundred files:
$ head -n 3 <(cat files.txt)
admin.php
ajax/accept.php
ajax/add_note.php
However, even though I am grepping for a string that I know is found in the files, the following does not search the files:
$ grep -i 'foo' <(cat files.txt)
$ grep -i 'foo' admin.php
The foo was found
I am familiar with the -f flag which will read the patterns from a file. But how to read the input files? I had considered the horrible workaround of copying the files to a temporary directory, as cp seems to support the <(cat files.txt) format, and from there grepping the files. Shirley there is a better way.
You seem to be grepping the list of filenames, not the files themselves. <(cat files.txt) just lists the files. Try <(cat $(cat files.txt)) to actually concatenate them and search them as a single stream, or
grep -i 'foo' $(cat files.txt)
to give grep all the files. However, if there are too many files on the list, you may have problems with the number of arguments. In that case I'd just write:
while read filename; do grep -Hi 'foo' "$filename"; done < files.txt
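With GNU xargs the argument-length problem and the explicit loop can both be avoided (a sketch):
xargs -d '\n' -a files.txt grep -Hi 'foo'
-a reads the file list and -d '\n' keeps filenames containing spaces intact.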
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/178857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
178,862
I have multiple files, something like the following (in reality I have 80).
file1.dat
2 5
6 9
7 1
file2.dat
3 7
8 4
1 3
I want to end up with a file containing all of the second lines, i.e.
output.dat
6 9
8 4
What I have so far loops through the file names but then overwrites the file before it, e.g. the output of the above files would just be
8 4
My shell script looks like this:
post.sh
TEND = 80
TINDX = 0
while [ $TINDX - lt $TEND]; do
awk '{ print NR==2 "input-$TINDX.dat > output.dat
TINDX = $((TINDX+1))
done
Remove the while loop and make use of shell brace expansion, and also FNR, a built-in awk variable:
awk 'FNR==2{print $0 > "output.dat"}' file{1..80}.dat
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/178862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99064/" ] }
178,875
I installed two Java JREs on my new CentOS system, since Cassandra needs Java 7u25 or later while iReport needs to work with 1.6. Now how do I launch each program from the command line, telling each program which version to use? Do I have to change the /etc/profile file? If so, how?
There's no point in having them both in $PATH because only one will get used. You could symlink one to a different name -- e.g. java6 -- I've never tried this w/ java and am not sure if it would work. The best way to do this would be to install one of them (presumably 1.6) in a location like /opt/java6, leaving 1.7 as the default. Then when you want to use 6:
export PATH=/opt/java6/bin:$PATH
And start it from the command line. You could also put all that together in a script. Don't try to run Cassandra from the same shell after that unless you remove that from $PATH (an easy way to check is echo $PATH). To automate this for one specific application:
#!/bin/sh
export PATH=/opt/java6/bin:$PATH
exec /path/to/application
You can then put that somewhere in the regular $PATH (e.g., /usr/local/bin), make sure it is executable (chmod 755 whatever.sh) and start the application that way. It will then not affect $PATH in the process which launches it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178875", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98975/" ] }
178,881
Once, I was installing some kernel patches and something went wrong, on a live server where we had hundreds of clients. Only one kernel was on the system, so the server was down for some time; using a live CD, we got the system up and running and did the further repair work. Now my question: Is it a good idea to have 2 versions of the kernel installed, so that if one kernel is corrupted we can always reboot with the other available kernel? Please let me know. Also, is it possible to have 2 versions of the same kernel, so that I can choose the other kernel when there is kernel corruption?
Edited: My server details:
2.6.32-431.el6.x86_64
CentOS release 6.5 (Final)
How can I have a copy of this same kernel, so that when my kernel corrupts, I can start the backup kernel?
Both RedHat and Debian-based distributions keep several versions of the kernel when you install a new one using yum or apt-get by default. That is considered good practice and is done exactly for the case you describe: if something goes wrong with the latest kernel you can always reboot and, in GRUB, choose to boot using one of the previous kernels. In RedHat distros you control the number of kernels to keep in /etc/yum.conf with the installonly_limit setting. On my fresh CentOS 7 install it defaults to 5. Also, if on RedHat you're installing a new kernel from an RPM package you should use rpm -ivh, not rpm -Uvh: the former will keep the older kernel in place while the latter will replace it. Debian keeps old kernels but doesn't automatically remove them. If you need to free up your boot partition you have to remove old kernels manually (remember to leave at least one of the previous kernels). To list all kernel-installing and kernel-headers packages use dpkg -l | egrep "linux-(im|he)". Answering your question -- Also, is it possible to have 2 versions of the same kernel? -- Yes, it is possible. I can't check it on CentOS 6.5 right now, but on CentOS 7 I was able to yield the desired result by just duplicating the kernel-related files in the /boot directory and rebuilding the grub menu:
cd /boot
# Duplicate kernel files;
# "3.10.0-123.el7" is a substring in the name of the current kernel
ls -1 | grep "3.10.0-123.el7" | { while read i; \
    do cp $i $(echo $i | sed 's/el7/el7.backup/'); done; }
# Backup the grub configuration, just in case
cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.backup
# Rebuild grub configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
# At this point you can reboot and see that a new kernel is available
# for you to choose in GRUB menu
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96015/" ] }
178,899
With many new hard drive disks the physical sector size is 4096. Would it be possible to make the system use a logical sector size of the same size, rather than the default logical sector size of 512? Will it speed up bulk reads and writes?Where can it be configured?
512 bytes is not really the default sector size. It depends on your hardware. You can display what physical/logical sector sizes your disk reports via the /sys pseudo filesystem, for instance:
# cat /sys/block/sda/queue/physical_block_size
4096
# cat /sys/block/sda/queue/logical_block_size
512
What is the difference between those two values? The physical_block_size is the minimal size of a block the drive is able to write in an atomic operation. The logical_block_size is the smallest size the drive is able to write (cf. the linux kernel documentation). Thus, if you have a 4k drive it makes sense that your storage stack (filesystem etc.) uses something equal to or greater than the physical sector size. Those values are also displayed in recent versions of fdisk, for instance:
# fdisk -l /dev/sda
[..]
Sector size (logical/physical): 512 bytes / 4096 bytes
On current linux distributions, programs (that should care about the optimal sector size) like mkfs.xfs will pick the optimal sector size by default (e.g. 4096 bytes). But you can also explicitly specify it via an option, for instance:
# mkfs.xfs -f -s size=4096 /dev/sda
Or:
# mkfs.ext4 -F -b 4096 /dev/sda
In any case, most mkfs variants will also display the used block size during execution. For an existing filesystem the block size can be determined with a command like:
# xfs_info /mnt
[..]
meta-data=              sectsz=4096
data     =              bsize=4096
naming   =version 2     bsize=4096
log      =internal      bsize=4096
         =              sectsz=4096
realtime =none          extsz=4096
Or:
# tune2fs -l /dev/sda
Block size:     4096
Fragment size:  4096
Or:
# btrfs inspect-internal dump-super /dev/sda | grep size
csum_size               4
sys_array_size          97
sectorsize              4096
nodesize                16384
leafsize                16384
stripesize              4096
dev_item.sector_size    4096
When creating the filesystem on a partition, another thing to check is whether the partition start address is actually aligned to the physical block size. For example, look at the fdisk -l output, convert the start addresses into bytes, divide them by the physical block size - the remainder must be zero if the partitions are aligned.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45821/" ] }
178,924
I have tried to extract environment variables in a Python process with the help of env --null, which works even for environment variables containing the newline character. But on some machines I have received an error:
> env -0
env: invalid option -- '0'
> env --null
env: unrecognized option '--null'
> env --version
env (GNU coreutils) 6.12
Copyright (C) 2008 Free Software Foundation, Inc.
In which version was the argument introduced? Is there any alternative command to extract the environment?
Option -0/--null was first introduced on 28-10-2009, and released with GNU coreutils version 8.1. If your coreutils is too old, you should upgrade. Or you can use perl:
perl -e '$ENV{_}="/usr/bin/env"; print "$_ => $ENV{$_}\0" for keys %ENV'
As @Stéphane Chazelas pointed out in his comment, the above approach doesn't include environment strings that don't contain =, duplicated environment variables, or environment variables with a null name. If you are on Linux, you can use (thanks @Stéphane Chazelas again):
cat /proc/self/environ
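The /proc approach also works for any other process you are allowed to read (a sketch; 1234 is a hypothetical PID):
tr '\0' '\n' < /proc/1234/environ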
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77811/" ] }
178,925
I am playing around with the -exec flag of the find command. I am trying to use the flag to print the extension of each file, using a fairly new Linux distribution release. Starting simple, this works:
find . -type f -exec echo {} \;
An attempt to use a convenient Bash string feature fails:
find . -type f -exec echo "${{}##*.}" \;
(bad substitution) So what would be the correct way to do it?
If you want to use shell parameter expansion then run some shell with exec : find . -type f -exec sh -c 'echo "${0##*.}"' {} \;
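To avoid spawning one shell per file, you can batch the arguments with {} + (a sketch):
find . -type f -exec sh -c 'for f; do printf "%s\n" "${f##*.}"; done' sh {} +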
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/178925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22534/" ] }
178,949
On my current Linux system (Debian Jessie amd64), I'm getting different behavior for dd using /dev/urandom (/dev/random behavior is properly documented). If I naively want 1G of random data:
$ dd if=/dev/urandom of=random.raw bs=1G count=1
0+1 records in
0+1 records out
33554431 bytes (34 MB) copied, 2.2481 s, 14.9 MB/s
$ echo $?
0
In this case only 34MB of random data are stored, while if I use multiple reads:
$ dd if=/dev/urandom of=random.raw bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 70.4749 s, 14.9 MB/s
then I properly get my 1G of random data. The documentation for /dev/urandom is rather elusive: A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead. I guess the documentation implies there is some sort of maximum read size for urandom. I'm also guessing that the size of the entropy pool is 34MB on my system, which would explain why the first read of 1G failed at about 34MB. But my question is how do I know the size of my entropy pool? Or is dd stopped by another factor (some kind of timing issue associated with urandom?).
If you check Reading from /dev/urandom gives EOF after 33554431 bytes and follow the discussion, it points to another bug report where Ted Tso states...
...that commit 79a8468747c5 causes reads larger than 32MB to result in only 32MB being returned by the read(2) system call. That is, it results in a short read. POSIX always allows for a short read(2), and any program MUST check for short reads.
The problem with dd is that POSIX requires the count=X parameter to be based on reads, not on bytes. This can be changed with iflag=fullblock. As per the gnu dd manual:
Note if the input may return short reads as could be the case when reading from a pipe for example, 'iflag=fullblock' will ensure that 'count=' corresponds to complete input blocks rather than the traditional POSIX specified behavior of counting input read operations.
So if you add iflag=fullblock:
dd if=/dev/urandom of=random.raw bs=1G count=1 iflag=fullblock
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 65.3591 s, 16.4 MB/s
This is actually confirmed by dd: if you omit iflag and increase the count to get 32 reads, i.e. 32 x 33554431 bytes = 1073741792 bytes, which is roughly 1G (or 1.1GB as per the dd man page section on multiplicative suffixes), it will output a short-read warning:
dd if=/dev/urandom of=random.raw bs=1G count=32
dd: warning: partial read (33554431 bytes); suggest iflag=fullblock
0+32 records in
0+32 records out
1073741792 bytes (1.1 GB) copied, 59.6676 s, 18.0 MB/s
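An alternative that sidesteps dd's block semantics entirely (GNU coreutils head accepts size suffixes):
head -c 1G /dev/urandom > random.raw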
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32896/" ] }
178,980
[root@notebook ~]# grep root /etc/sudoers
root    ALL=(ALL)       ALL
Question: Why does the root user need sudo permissions? I've seen it on different UNIX OSes. Can someone please explain this?
So that they can (from the man page) "execute a command as another user". sudo isn't limited to allowing regular users to execute a command as root. Root can run a command as another user with:
sudo -u bloggs <command>
Note that root will not need to supply the user's password.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98473/" ] }
178,989
Perhaps this has been answered previously, I would welcome a link to another answer... If I execute a shell command (in a bash shell) like the following: make Then while the output from make is scrolling by from the STDOUT of the make command, if I type make check and press enter before the first command is finished executing, when the make command finally finishes the next command make check will pick right up and run. My question(s) are simply: Is this dangerous to do? Are there any potential unexpected behaviors from this kind of rush typing? Why does this work the way it does?
It works the way it does because Unix is full-duplex. As Ritchie said in The UNIX Time-sharing System: A Retrospective:
One thing that seems trivial, yet makes a surprising difference once one is used to it, is full-duplex terminal I/O together with read-ahead. Even though programs generally communicate with the user in terms of lines, rather than single characters, full-duplex terminal I/O means that the user can type at any time, even if the system is typing back, without fear of losing or garbling characters. With read-ahead, one need not wait for a response to every line. A good typist entering a document becomes incredibly frustrated at having to pause before starting each new line; for anyone who knows what he wants to say any slowness in response becomes psychologically magnified if the information must be entered bit-by-bit instead of at full speed. [end quote]
That being said, there are some modern programs that eat up or discard any typeahead; ssh and apt-get are two examples. If you type ahead while they're running, you may find that the first part of your input has disappeared. That could conceivably be a problem:
ssh remotehost do something that takes 20 seconds
mail bob
Dan has retired. Feel free to save any important files and then do
# ssh exits here, discarding the previous 2 lines
rm -fr *
.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/178989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42920/" ] }
179,056
I have a 3-year-old server with two identical disks. I'm planning to replace both before they fail. Can I add two more new disks to the raid and (after it has been rebuilt) eventually remove the two old ones? Or what is the best way to do this? Thank you.
So, assuming you are using mdadm, you can do exactly what you suggest. The only caveat is that the raid monitoring utility will generally only handle one disk at a time, and normally when you have marked one as failed. Further, you just need to ensure that it has completed copying the data before removing the old disks from the raid array, otherwise you'll end up removing the "live" disks with nothing on the new ones and corrupt your array. Commands that you will find useful for doing this are as follows. To add a new disk to the array:
# mdadm /dev/<mddevice> --add /dev/<newdisk>
To see the status and recovery process:
cat /proc/mdstat
To mark the old disk as 'failed' and remove it from the array:
# mdadm /dev/<mddevice> --fail /dev/<olddisk> --remove /dev/<olddisk>
I would suggest doing one disk at a time the first time, and checking the status of the raid array via mdstat as you go, before removing the second (and potentially only viable) disk from the array. My only reason for suggesting this is that experience teaches you to take several small steps rather than one large one and face total disaster recovery. Prevention is far better than cure.
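As a side note, newer mdadm releases (3.3+) can do the add/sync/fail sequence in one step with hot replacement (a sketch):
# mdadm /dev/<mddevice> --replace /dev/<olddisk> --with /dev/<newdisk>
Here the old disk is only marked faulty after the new one is fully in sync.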
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83420/" ] }
179,093
I have a problem with a shell script. It is supposed to read arguments: index, date, time1 (the beginning of the interval), time2 (the end of the interval). It should count how many times the user (index) has logged in on the given date in the time interval time1-time2. For example: 121212 "Jan 14" 00 12. This works, but I have a problem with the date argument. It doesn't recognize it as one argument; it splits it into Jan and 14", which is a big problem. I've been searching on the internet for a few hours, but I couldn't find a solution anywhere. Here is my code:
#!/bin/bash
read user date time1 time2
list=`last | grep ^$user.*$date | awk '{ print $7 }'`
start=$time1
end=$time2
echo "start $start"
echo "end $end"
count=0
for el in $list ;
do
  login=$el
  echo "najava $najava"
  checkIf(){
    current=$login
    [[ ($start = $current || $start < $current) && ($current = $end || $current < $end) ]]
  }
  if checkIf; then
    count=`expr $count + 1`
    ip=`last | grep ^$user.*$date.*$login | awk '{ print $3 }'`
    echo $ip >> address.txt
  else
    continue
  fi
done
echo "The user has logged in $count times in the given time interval"
Try using $1, $2, $3, $4, ... for command line arguments (instead of using read). Call your script using:
./script.sh 121212 "Jan 14" 00 12
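A sketch of the top of the script rewritten that way:
#!/bin/bash
user=$1
date=$2
time1=$3
time2=$4
The quotes around "Jan 14" are consumed by the invoking shell, so $2 arrives as the single word Jan 14.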
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99237/" ] }
179,100
On previous version of CentOS, I used the following command to get the current timezone in my bash script: timezone=$(sed 's/ZONE=//g' /etc/sysconfig/clock) Output is "America/New_York" How can I achieve an exact output on CentOS 7?
CentOS 7 specific:
timedatectl | gawk -F': ' '$1 ~ /Timezone/ {print $2}'
And displaying just the timezone:
timedatectl | gawk -F'[: ]+' '$2 ~ /Timezone/ {print $3}'
More generic:
date +%Z
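If timedatectl is unavailable, the zone name can usually be recovered from the /etc/localtime symlink (a sketch; on CentOS 7 it points into /usr/share/zoneinfo):
readlink /etc/localtime | sed 's|.*/zoneinfo/||'
Note that date +%Z prints an abbreviation such as EST, not the Olson name America/New_York.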
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99238/" ] }
179,144
I downloaded the debian amd 64-bit iso file (apx 650 mb) on a macbook w/retina (i.e. no cd drive) running OS X. I'm trying to dual boot and have already gotten rEFInd working. Set up partition for Debian with MS-DOS(FAT) format and blessed it. Now I'm trying to burn the .iso onto a usb to use for installation on the same computer. I'm using Terminal to convert .iso to .img with: hdiutil convert -format UDRW -o ./debian-7.8.0-amd64-CD-1\ \(1\).img ./debian-7.8.0-amd64-CD-1\ \(1\).iso but it keeps outputting hdiutil: convert failed - No such file or directory Not sure what I'm doing wrong. EDIT: I've succeeded in converting the .iso to .img and I'm attempting to unmount the disk partition I've made for debian via diskutil unmountDisk /dev/disk0s5 but I keep getting Unmount of disk0 failed: at least one volume could not be unmounted . I've verified this is the right disk using diskutil list . Any ideas what is wrong?
You can write your ISO file directly to the USB stick with the dd command:

sudo dd if=<your iso file location> of=/dev/<your usb drive>   (usually /dev/sdb)

Note: use sudo fdisk -l to see what your USB device is called.

Example: I connect my USB thumb drive, then type sudo fdisk -l to find my device name; it is /dev/sdb, so I run:

sudo dd if=./debian-7.8.0-amd64-CD-1.iso of=/dev/sdb
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99284/" ] }
179,173
To my understanding man uses less as a pager, and when searching for keywords using less it "highlights" keywords with italics. I find that really inconvenient, so I'd like to change this to something like vim's set hlsearch, where the found pattern gets a different background. I attempted to run man -P vim systemd but that quit with error status 1, so it looks like I'm stuck with less. There was nothing that I was able to find in man less that helped (instead I found out that option -G will turn off highlighting altogether, which is even worse than italics). That being said, does anyone know how to achieve search highlighting (change the background color) in man pages? FYI I run Ubuntu 14.10. I came across this question which seems to ask about the same thing, but I am not sure I follow how that works (LESS_TERMCAP_so). The less man page does not mention this. (I get strange results with this solution.)
Found an answer over on Super User: https://superuser.com/questions/566082/less-doesnt-highlight-search Looks like it has to do with your TERM setting. For example, less highlighting acts normally (white background highlight) in a normal gnome-terminal window, but when I'm in tmux, I get italics. The difference for me is that TERM is set to "screen" in tmux, but "xterm-256color" otherwise. When I set TERM=xterm-256color in the tmux window, highlighting in less goes back to background highlighting.
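A hedged sketch of the work-around, run inside the tmux pane before invoking man:

export TERM=xterm-256color
man systemd

(Whether overriding TERM inside tmux is safe depends on your setup; tmux itself expects a screen-* value, so test before making it permanent.)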
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87603/" ] }
179,174
Is there a way to change the icon of an application's window from the command line? For instance, I'd like to have separate icons for Firefox windows under different profiles (different processes), change the icon of the terminal if it runs tmux , etc. By 'icon' I mean the small picture shown by window switcher, typically invoked with Alt + Tab . In particular, I'm interested for this to work under xfwm4 , but a more general solution would only be welcome. Apparently, neither xdotool nor wmctrl are capable of this.
xseticon allows you to do exactly that.
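A usage sketch, assuming xseticon is installed and run from the terminal whose window should change ($WINDOWID is set by most terminal emulators; the icon path is a placeholder):

xseticon -id "$WINDOWID" /path/to/new-icon.png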
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3633/" ] }
179,178
I'm trying to get Bumblebee up and running on an Acer Aspire V3-572G-55FT laptop. So far I've installed xorg-server, xorg-xinit, xorg-utils, xorg-server-utils, mesa, xorg-twm, xterm, xorg-xclock, and all required dependencies. When installing mesa for the first time, I opted to install nvidia-libgl instead of mesa-libgl. I'm now, as per the Arch wiki, running the following command: sudo pacman -S mesa xf86-video-intel bumblebee nvidia bbswitch primus mesa-demos I get the following output: :: bumblebee and nvidia-libgl are in conflict. Remove nvidia-libgl? [y/N] What's the correct course of action to take here? Am I doing the right thing so far? Any help is appreciated!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179178", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99312/" ] }
179,205
I need to send the generated CSV files at regular intervals using a script. I am using uuencode and mailx for this. But I need to know: is there any method/way to know that the email was sent successfully? Any kind of acknowledgement or feedback? It should at least report any error. Also, the file is confidential and is not meant to be diverted to some foreign destination. Edit: code being used for mailing:

subject="Something happened"
to="[email protected]"
body="Attachment Test"
attachment=/home/iv315/timelog_file_150111.csv
(cat test_msg.txt; uuencode $attachment somefile.csv) | mailx -s "$subject" "$to"
Email was designed back when computers did not have a permanent, fast network connection to each other, on the model of postal mail. When you send an email, it gets sent to a server, which sends it to another server, and so on until the email reaches its destination. The oldest mail systems had local delivery , then there were systems where the email had to specify the list of relays until the destination , and nowadays the emails are routed automatically over networks where pretty much all computers can reach each other most of the time. Still, email remains a mail service, not an instant message service. If email is delayed on the way, for example because of a temporary network outage, the intermediate server will keep the email in reserve until the link is restored. Due to this design, email is asynchronous. All the mailx command does is to transmit the email to a local MTA . A return code from mailx indicating success indicates that the local MTA has accepted the job of delivering the email. At that point, the email has been sent successfully. After that, it's the MTA's job to send the email to its destination. If the MTA is unable to make good on its promise to deliver, it is supposed to send a bounce message to the user who sent the email. You cannot know for sure whether the email has been delivered to the recipient's inbox, and even that isn't useful (for example, what if the email is successfully delivered, then the computer where the inbox is stored burns in a fire?). If you need to know whether the recipient received the email, the only sure-fire way is to include human-readable instructions to acknowledge the email. (There are ways to automatically send a receipt when the email is opened in certain software, but they only work in compatible software, and they aren't reliable either, e.g. if the recipient's computer crashed immediately after opening the email.) Knowing whether the email has been delivered doesn't tell you anything about whether other people have been able to read it. Unlike physical objects, electronic messages don't really “deviate”: they are copied, and if there are extra copies around, this cannot be detected. If the email needs to be confidential, encrypt it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98920/" ] }
179,238
I'm currently sifting through a lot of unfamiliar logs looking for some issues. The first file I look at is Events.log, and I get at least three pages in less which appear to display the same event at different times – an event that appears to be fairly benign. I would like to filter this event out, and currently I quit less and do something like grep -v "event text" Events.log | less This now brings up a number of other common, uninteresting events that I would also like to filter out. Is there a way I can grep -v inside of less? Rather than having to do egrep -v "event text|something else|the other thing|foo|bar" Events.log | less It strikes me as a useful feature when looking at any kind of log file – and if less isn't the tool, is there another with the qualities I seek? Just a less-style viewer with built-in grep.
less has very powerful pattern matching.  From the man page : & pattern Display only lines which match the pattern ; lines which do not match the pattern are not displayed.  If pattern is empty (if you type & immediately followed by ENTER ), any filtering is turned off, and all lines are displayed.  While filtering is in effect, an ampersand is displayed at the beginning of the prompt, as a reminder that some lines in the file may be hidden. Certain characters are special as in the / command † : ^N or ! Display only lines which do NOT match the pattern . ^R Don't interpret regular expression metacharacters; that is, do a simple textual comparison. ____________ † Certain characters are special if entered at the beginning of the pattern ; they modify the type of search rather than become part of the pattern . (Of course ^N and ^R represent Ctrl + N and Ctrl + R , respectively.) So, for example, &dns will display only lines that match the pattern dns ,and &!dns will filter out (exclude) those lines,displaying only lines that don't match the pattern. It is noted in the description of the / command that The pattern is a regular expression, as recognized by the regular expression library supplied by your system. So &eth[01] will display lines containing eth0 or eth1 &arp.*eth0 will display lines containing arp followed by eth0 &arp|dns will display lines containing arp or dns And the ! can invert any of the above. So the command you would want to use for the example in your question is: &!event text|something else|the other thing|foo|bar Also use / pattern and ? pattern to search (and n / N to go to next/previous).
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/179238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67170/" ] }
179,270
I'm trying to set up an ssh tunnel layout where: client A (not ssh-server enabled) initiates an ssh connection to server S; a SOCKS server is opened on server S:yyyy that tunnels all data via client A; client B connects to the SOCKS server on server S, and TCP data is routed via client A to the Internet. A possible solution would be to add a proxy server on client A (bound to localhost:xxxx), and then run on client A ssh -R yyyy:localhost:xxxx Server. That would achieve the goal, but it's not as clean as using just ssh. Is it possible to achieve this with just the ssh client on A and the ssh server on S? It's like reverse dynamic port forwarding in ssh: creating ssh -D from A to S, and then somehow setting up on this tunnel a second tunnel of ssh -D from S to A. Somewhat confusing, and not sure if possible.
OpenSSH 7.6 introduced reverse dynamic proxy as a native option. It is implemented entirely in the client, so the server does not need to be updated. ssh -R 1080 server
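A sketch of the layout from the question (1080 is just an example port):

clientA$ ssh -R 1080 serverS    # SOCKS listener appears on serverS, port 1080

Client B then points its SOCKS proxy at serverS:1080. By default the listener binds to the loopback interface on S, so letting other hosts reach it may require the GatewayPorts setting in the server's sshd_config.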
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99373/" ] }
179,286
I am trying to force the capslock LED on. xset does not work for me, so I am trying to use setleds. In a graphical console, this command returns:

> LANG=C setleds -L +caps
KDGKBLED: Inappropriate ioctl for device
Error reading current flags setting. Maybe you are not on the console?

In a virtual terminal it works, however the effect is local to that virtual terminal. From what I understand, running

> setleds -L +caps < /dev/tty1

from a virtual terminal (my X server is sitting on tty1) should work. However, this requires root access. Is there a way to send a command to the console underlying an X server, be it from the said X server or from another VT, without root? Edit: From a suggestion from Mark Plotnik, and based on code found here, I wrote and compiled the following:

#include <X11/Xlib.h>
#include <X11/XKBlib.h>

#define SCROLLLOCK 1
#define CAPSLOCK 2
#define NUMLOCK 16

void setLeds(int leds) {
    Display *dpy = XOpenDisplay(0);
    XKeyboardControl values;
    values.led_mode = leds & SCROLLLOCK ? LedModeOn : LedModeOff;
    values.led = 3;
    XChangeKeyboardControl(dpy, KBLedMode, &values);
    XkbLockModifiers(dpy, XkbUseCoreKbd, CAPSLOCK | NUMLOCK, leds & (CAPSLOCK | NUMLOCK));
    XFlush(dpy);
    XCloseDisplay(dpy);
}

int main() {
    setLeds(CAPSLOCK);
    return 0;
}

From what Gilles wrote about xset, I did not expect it to work, but it does... in some sense: it sets the LED, but it also sets the capslock status. I do not fully understand all the code above, so I may have made a silly mistake. Apparently, the line XChangeKeyboardControl... does not change the behavior of the program, and XkbLockModifiers is what sets the LED and the capslock status.
In principle, you should be able to do it with the venerable xset command:

xset led named 'Caps Lock'

or

xset led 4

to set LED number 4, if your system doesn't recognize the LEDs by name. However, this doesn't seem to work reliably. On my machine, I can only set Scroll Lock this way, and I'm not the only one. This seems to be a matter of XKB configuration. The following user-level work-around should work (for the most part):

Extract your current xkb configuration:

xkbcomp $DISPLAY myconf.xkb

Edit the file myconf.xkb, replacing !allowExplicit with allowExplicit in the relevant blocks:

indicator "Caps Lock" {
    allowExplicit;
    whichModState= locked;
    modifiers= Lock;
};
indicator "Num Lock" {
    allowExplicit;
    whichModState= locked;
    modifiers= NumLock;
};

Load the new file:

xkbcomp myconf.xkb $DISPLAY

Now setting the leds on and off with xset should work. According to the bug report, you will not be able to switch the leds off when they are supposed to be on (for example if CapsLock is enabled).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47331/" ] }
179,288
My understanding is that bash -c file means the same thing as being in an interactive bash shell and calling file, whereas bash file means to interpret the file using bash (as if it were a shell script). Is this accurate? Is this the reason you cannot run bash <executable>: because bash will try to interpret the file as a shell script instead of fork()ing and exec()ing it?
First, from the bash documentation:

-c string Read and execute commands from string after processing the options, then exit. Any remaining arguments are assigned to the positional parameters, starting with $0.

So when you supply the -c option, bash treats the string after -c as a sequence of commands, then executes those commands in the child process environment. So when you call bash -c file, bash treats file as a command and finds it by looking through the PATH environment variable. If file is found, it is executed; otherwise a command not found error is raised. When you call bash file, bash simply treats file as a shell script, reads and executes commands from file, then exits. Again, from the bash documentation:

If arguments remain after option processing, and neither the -c nor the -s option has been supplied, the first argument is assumed to be the name of a file containing shell commands (see Shell Scripts). When Bash is invoked in this fashion, $0 is set to the name of the file, and the positional parameters are set to the remaining arguments. Bash reads and executes commands from this file, then exits. Bash’s exit status is the exit status of the last command executed in the script. If no commands are executed, the exit status is 0.

So, your understanding is right.
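A quick demonstration of the two forms (myscript.sh is a hypothetical script file):

$ bash -c ls         # "ls" is treated as a command name and looked up via PATH
$ bash ./myscript.sh # "./myscript.sh" is read and executed as a shell script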
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179288", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
179,291
My ~/.bashrc contains exactly one line: source my_config/my_actual_bashrc.sh Is there an equivalent with .inputrc , so my customizations can be in a separate location, and "called" by ~/.inputrc ?
According to man readline : $include This directive takes a single filename as an argument and reads commands and bindings from that file. For example, the following directive would read /etc/inputrc : $include /etc/inputrc
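So, mirroring the one-line .bashrc setup from the question, ~/.inputrc could contain just this single directive (the path is an example):

$include /home/you/my_config/my_actual_inputrc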
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/179291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80154/" ] }
179,303
When it comes to CDs you have virtual CD software in which you load an .iso and it works like a CD-ROM. But when it comes to USB, is there something similar? Is it possible to use a directory to simulate a USB storage device? Like mounting/unmounting that directory to simulate a plug/unplug of a USB storage device? Purpose: to read (using an application) music or video files from a USB. The application reacts only when a USB is inserted/removed. Or any other way could help, but files seem to be the Linux way. Or, if there are no tools yet for this: how feasible would it be to write one?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/179303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54297/" ] }
179,318
I want to build multiple .deb packages from the same source for different versions and distros. Even if the source code is exactly the same, some files in the debian folder cannot be shared because of differing dependencies and distro names. So I want to make multiple 'debian' directories, one per version/distro, and specify which one to use when building the package. Is that possible? For your information, I'm using the debuild command to build the .deb package.
Using different branches is one approach, and I can suggest edits for @mestia’s answer if it seems appropriate (but read on...). Another approach is to keep different files side-by-side; see Solaar for an example of this. But both of these approaches have a significant shortcoming: they’re unsuitable for packages in Debian or Ubuntu (or probably other derivatives). If you intend on getting your package in a distribution some day, you should package it in such a way that the same set of files produces the correct result in the various distributions. For an example of this, have a look at the Debian packaging for Solaar (full disclosure: I did the packaging). The general idea is to ask dpkg-vendor what the distribution is; so for Solaar, which has different dependencies in Debian and Ubuntu, debian/rules has

derives_from_ubuntu := $(shell (dpkg-vendor --derives-from Ubuntu && echo "yes") || echo "no")

and further down an override for dh_gencontrol to fill in “substvars” as appropriate:

override_dh_gencontrol:
ifeq ($(derives_from_ubuntu),yes)
	dh_gencontrol -- '-Vsolaar:Desktop-Icon-Theme=gnome-icon-theme-full | oxygen-icon-theme-complete' -Vsolaar:Gnome-Icon-Theme=gnome-icon-theme-full
else
	dh_gencontrol -- '-Vsolaar:Desktop-Icon-Theme=gnome-icon-theme | oxygen-icon-theme' -Vsolaar:Gnome-Icon-Theme=gnome-icon-theme
endif

This fills in the appropriate variables in debian/control:

Package: solaar
Architecture: all
Depends: ${misc:Depends}, ${debconf:Depends}, udev (>= 175), passwd | adduser,
 ${python:Depends}, python-pyudev (>= 0.13), python-gi (>= 3.2),
 gir1.2-gtk-3.0 (>= 3.4), ${solaar:Desktop-Icon-Theme}

and

Package: solaar-gnome3
Architecture: all
Section: gnome
Depends: ${misc:Depends}, solaar (= ${source:Version}), gir1.2-appindicator3-0.1,
 gnome-shell (>= 3.4) | unity (>= 5.10), ${solaar:Gnome-Icon-Theme}

You can use the test in debian/rules to control any action you can do in a makefile, which means you can combine this with alternative files and, for example, link the appropriate files just before they’re used in the package build.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99402/" ] }
179,327
I am trying to ls some files matching a pattern in a directory. I only want to scan the first level, not recursively. My script:

for i in $(ls $INCOMINGDIR/*$BUSSINESSDATE*)
do
    echo $i;
done

The above command scans recursively. How can I make it scan only the first-level directory?
Don't parse ls . Also don't use ALL_CAPS_VARS for i in "$incoming_dir"/*"$business_date"*; do Interactively, ls has a -d option that prevents descending into subdirectories: ls -d $INCOMINGDIR/*$BUSSINESSDATE*
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/179327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72369/" ] }
179,353
I am using Fedora 20 in a VM and trying to learn to use containers. I have created a container but can't start it. Here is the terminal output:

[root@localhost home]# lxc-start -n test
lxc-start: conf.c: instantiate_veth: 2978 failed to attach 'veth87VSIJ' to the bridge 'virbr0': No such device
lxc-start: conf.c: lxc_create_network: 3261 failed to create netdev
lxc-start: start.c: lxc_spawn: 826 failed to create the network
lxc-start: start.c: __lxc_start: 1080 failed to spawn 'test'
lxc-start: lxc_start.c: main: 342 The container failed to start.
lxc-start: lxc_start.c: main: 346 Additional information can be obtained by setting the --logfile and --logpriority options.
[root@localhost home]#
Make sure libvirtd is installed and running (via the libvirt package). e.g.:

$ yum install -y libvirt
$ systemctl start libvirtd
$ brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.fea2866efadb       yes             veth7ATCJK
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99429/" ] }
179,364
I would like to write a small part of a script that saves the error status, executes some other code, and sets the error status to the original error status. In Bash it looks like this: << some command >>; _EXIT=$?;( <<other code here>> ;exit $_EXIT) But I need code that will run no matter if it is being run under bash, zsh, csh or tcsh. I do not know which shell will be used in advance, because it is decided by the user. The user also decides << some command >>. It is safe to assume that << other code >> will work in all shells, but it is not safe to assume that you can write a file (so putting it into a file will not work). Background GNU Parallel executes commands given by the user in the shell decided by the user. When the command is finished, GNU Parallel has some cleanup to do. At the same time GNU Parallel needs to remember the exit value of the command given by the user. The code run before the snippet above is the user given command. << other code >> is the cleanup code.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
179,365
I wish to move to a virtual Windows on my corporate laptop, rather than keep it as the host OS. I will use this VM Windows on a few laptops, but because this will be a fully original OS version with Office and a serial number, I want to make sure that running the VM on the first laptop, then the second, and then again on the first one won't lead to a need to activate this VM Windows again. My question is as follows: if I run the VM Windows copy on multiple laptops, can I be sure that this virtual Windows will always see the same hardware? I mean, running it on a laptop with an i5, then on an i7 host, won't lead to some "hardware changes" in the virtual machine?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99427/" ] }
179,396
Where can I check/verify which config file sshd is using? I know you can change the config file with the -f option, but is there a way to echo which config file is currently being used, or is there a file I can view to check this?
Based on @Hauke Laging's comment. When you run strace on the sshd binary it outputs debugging information on how the program starts and what files it tries to access. From which we can use grep to list the /etc/ files which it tries to access.

$ sudo strace -e trace=file /usr/sbin/sshd |& grep '^open(' | grep '/etc/'
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/etc/ssh/sshd_config", O_RDONLY|O_LARGEFILE) = 3
open("/etc/gai.conf", O_RDONLY) = 3
open("/etc/nsswitch.conf", O_RDONLY) = 3
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 3
open("/etc/ssh/ssh_host_rsa_key", O_RDONLY|O_LARGEFILE) = 3
open("/etc/ssh/blacklist.RSA-2048", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/etc/ssh/ssh_host_dsa_key", O_RDONLY|O_LARGEFILE) = 3
open("/etc/ssh/blacklist.DSA-1024", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/etc/ssh/ssh_host_ecdsa_key", O_RDONLY|O_LARGEFILE) = 3
open("/etc/ssh/blacklist.ECDSA-256", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)

From the above strace output /etc/ssh/sshd_config is used as ssh configuration.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179396", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77418/" ] }
179,397
I carry all of my music around on a 1TB USB drive and have a udev rule to symlink it to $HOME/Music/ when I plug it in to one of my laptops, which it does. The issue I have is that this works fine where the directory does not exist on the laptop, but it doesn't create the requisite tree where there is a pre-existing directory of the same name (Artist/Album/*.flac) on the laptop. The script I currently run is this one:

#!/usr/bin/env bash
# repopulate music links when drive plugged in
shopt -s nullglob
export DISPLAY=:0
export XAUTHORITY=/home/jason/.Xauthority
music=(/media/Apollo/Music/*)
find /home/jason/Music -type l -exec rm {} \;
for dirs in "${music[@]}"; do
    ln -s "$dirs" /home/jason/Music/ 2>/dev/null
done
status1=$?
mpc update &>/dev/null
status2=$?
if [[ "$status1" -eq 0 && "$status2" -eq 0 ]]; then
    printf "%s\n" "Music directory updated" | dzen2 -p 3
fi

How can I ensure that where a directory exists on both the laptop and the USB drive, but the contents are slightly different, the files are correctly symlinked? For example:

USB drive:
Music
-- Matthew Shipp
   -- Pastoral Composure
      -- Track 1
      -- Track 2 etc...
   -- Strata
      -- Track 1
      -- Track 2 etc...
   -- Equilibrium
      -- Track 1
      -- Track 2 etc...

Laptop:
Music
-- Matthew Shipp
   -- Pastoral Composure
      -- Track 1
      -- Track 2 etc...

In this case, no symlinks to the albums Strata or Equilibrium will be created, presumably because the parent directory (Matthew Shipp) exists. I would prefer not to use rsync to copy the actual data across, as I have limited space on the laptops and, with mpd able to follow symlinks, I have no need to copy the files across. Is it possible to tweak my script to propagate symlinks into pre-existing directories on the laptop?
Since your primary aim is to have a combined view of your local and external Music folder, I think a union mount via overlayfs could be used, especially if the files are not being written to. The basic command is, in older kernel versions (<3.18):

mount -t overlayfs -o lowerdir=/read/only/directory,upperdir=/writeable/directory overlayfs /mount/point

For example:

$ ls Documents
374620-63301.pdf       My Kindle Content  scan0005.jpg
BPMN2_0_Poster_EN.pdf  scan0003.jpg       StrongDC++
$ ls devel
cse    ossec    ubuntu-14.04-desktop-amd64-ssh.iso
nexus  scripts  zsh-syntax-highlighting
$ sudo mount -t overlayfs -o lowerdir=$PWD/Documents,upperdir=$PWD/devel overlayfs ~/Documents
$ ls Documents
374620-63301.pdf       scan0003.jpg
BPMN2_0_Poster_EN.pdf  scan0005.jpg
cse                    scripts
My Kindle Content      StrongDC++
nexus                  ubuntu-14.04-desktop-amd64-ssh.iso
ossec                  zsh-syntax-highlighting

One drawback is the need for sudo, which can perhaps be taken care of using a careful NOPASSWD rule. In light of Jason's blog post, the mount command for newer kernels changes to using overlay as the filesystem, instead of overlayfs, and using an additional workdir. The kernel documentation now codifies this:

At mount time, the two directories given as mount options "lowerdir" and "upperdir" are combined into a merged directory:

mount -t overlay overlay -olowerdir=/lower,upperdir=/upper,\
      workdir=/work /merged

The "workdir" needs to be an empty directory on the same filesystem as upperdir.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6761/" ] }
179,403
Is this safe?

echo "Defaults insults" >> /etc/sudoers

If yes, can I do this?

echo "## First line" >> /etc/sudoers
echo "### Second line" >> /etc/sudoers
echo "Defaults insults" >> /etc/sudoers
echo "### Totally the last line" >> /etc/sudoers

Is there a better way to do this incorporating visudo? I'm making a bash script; this bit needs to turn insults on and off.
There are at least 3 ways in which it can be dangerous:

If /etc/sudoers doesn't end in a newline character (which sudo and visudo allow), for instance, if it ends in a non-terminated #includedir /etc/sudoers.d line, your command will make it:

#includedir /etc/sudoers.dDefaults insults

which will break it and render sudo unusable.

echo may fail to write the full string, for instance if the file system is full. For instance, it may just be able to write Defaults in. Which again will break your sudoers file.

On a machine with multiple admins, if both attempt to modify /etc/sudoers at the same time, the data they write may be interlaced.

visudo avoids these problems because it lets you edit a temporary file instead (/etc/sudoers.tmp), detects if the file was modified (unfortunately not if the file was successfully modified as it doesn't seem to be checking the editor's exit status), checks the syntax, and does a rename (an atomic operation) to move the new file in place. So it will either successfully update the file (provided your editor also leaves the file unmodified if it fails to write the new one) or fail if it can't or the syntax is invalid. visudo also guards against several persons editing the sudoers files at the same time.

Now, reliably using visudo in an automatic fashion is tricky as well. There are several problems with that:

You can specify an editor command for visudo with the VISUAL environment variable (takes precedence over EDITOR), but only if the env_editor option has not been disabled.

my version of visudo at least, under some conditions, edits all of /etc/sudoers and all the files it includes (runs $VISUAL for all of them). So you have to make sure your $VISUAL only modifies /etc/sudoers.

as seen above, it doesn't check the exit status of the editor. So you need to make sure the file your editor saves is either successfully written or not modified at all.

It prompts the user in case of problem.

Addressing all those is a bit tricky. Here is how you could do it:

NEW_TEXT='Defaults insults' \
CODE='
  if [ "$2" = /etc/sudoers.tmp ]; then
    printf >&2 "Editing %s\n" "$2"
    umask 077
    {
      cat /etc/sudoers.tmp &&
        printf "\n%s\n" "$NEW_TEXT"
    } > /etc/sudoers.tmp.tmp &&
      mv -f /etc/sudoers.tmp.tmp /etc/sudoers.tmp
  else
    printf >&2 "Skipping %s\n" "$2"
  fi' \
  VISUAL='sh -fc IFS=:;$1 sh eval:eval:"$CODE"' visudo < /dev/null

Won't work if env_editor is unset. On a GNU system, a better alternative would be to use sed -i which should leave sudoers.tmp unmodified if it fails to write the newer version:

Add insults:

SED_CODE='
  /^[[:blank:]]*Defaults.*insults/,${
    /^[[:blank:]]*Default/s/!*\(insults\)/\1/g
    $q
  }
  $a\
Defaults insults' \
CODE='
  if [ "$2" = /etc/sudoers.tmp ]; then
    printf >&2 "Editing %s\n" "$2"
    sed -i -- "$SED_CODE" "$2"
  else
    printf >&2 "Skipping %s\n" "$2"
  fi' \
VISUAL='sh -fc IFS=:;$1 sh eval:eval:"$CODE"' visudo < /dev/null

Remove insults:

SED_CODE='
  /^[[:blank:]]*Defaults.*insults/,${
    /^[[:blank:]]*Defaults/s/!*\(insults\)/!\1/g
    $q
  }
  $a\
Defaults !insults' \
CODE='
  if [ "$2" = /etc/sudoers.tmp ]; then
    printf >&2 "Editing %s\n" "$2"
    sed -i -- "$SED_CODE" "$2"
  else
    printf >&2 "Skipping %s\n" "$2"
  fi' \
VISUAL='sh -fc IFS=:;$1 sh eval:eval:"$CODE"' visudo < /dev/null
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91046/" ] }
179,451
I have a folder of around 180 GB, and I need to zip it like:

zip -p password /Volumes/GGZ/faster/mybigfolder/* /Volumes/Storage\ 4/archive.zip

But it says:

zip warning: name not matched: /Volumes/Storage 4/archive.zip

So how do I do this? On another note, archive.zip does not exist; I'm trying to create it.
This error can also be caused by symbolic links in the directory tree being compressed. If these don't have correct destinations (perhaps because the directory has been moved or copied from elsewhere), zip will attempt to follow the symlink to archive the target file. You can avoid this (and also get the effect you probably want anyway, which is not to archive multiple copies of the file) by using the -y (or --symlinks ) option.
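So a re-run that stores symlinks as links might look like this (paths taken from the question, password handling left aside; note that zip expects the archive name before the files to add):

zip -r -y /Volumes/Storage\ 4/archive.zip /Volumes/GGZ/faster/mybigfolder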
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/179451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
179,477
I know how to use fail2ban and how to configure a jail, but I'm not comfortable with how it actually works. The thing is, there's a particular jail option that piques my curiosity: findtime. When I configure a filter, it is necessary to use the HOST keyword (match IP address), so that fail2ban can know the IP to compare and ban if necessary. Alright. But there's no such thing for time: fail2ban can't know the exact time a line was added to a log file, because there's no TIME keyword, right? Actually, it can scan files without any time on any line and it will still work. I guess it means fail2ban is scanning files periodically: it sets a scan time internally so it can handle options like findtime by comparing its own scan dates. First, am I right? If so, what is the scan frequency? Can't it be a bottleneck if there are lots of big log files to scan often? Then, what happens if the scan interval is greater than the findtime option? Does it mean fail2ban adapts to the minimal findtime option it finds to set its minimal scan frequency?
First off, this is (perhaps) not an answer, but perhaps better than a comment (and a bit long for one).

Time stamps

I find your statement

Actually, it can scan files without any time on any line and it will still work.

to conflict with the documentation. What do you mean by work? The manual#filters (v 0.8) states:

If you're creating your own failregex expressions, here are some things you should know: [...] In order for a log line to match your failregex, it actually has to match in two parts: the beginning of the line has to match a timestamp pattern or regex, and the remainder of the line has to match your failregex. If the failregex is anchored with a leading ^, then the anchor refers to the start of the remainder of the line, after the timestamp and intervening whitespace. The pattern or regex to match the time stamp is currently not documented, and not available for users to read or set. See Debian bug #491253. This is a problem if your log has a timestamp format that fail2ban doesn't expect, since it will then fail to match any lines. Because of this, you should test any new failregex against a sample log line, as in the examples below, to be sure that it will match. If fail2ban doesn't recognize your log timestamp, then you have two options: either reconfigure your daemon to log with a timestamp in a more common format, such as in the example log line above; or file a bug report asking to have your timestamp format included.

Note here that log files can be configured to include time stamps as well as the format of the time stamps. (That includes dmesg as mentioned in a comment.) Also see this thread, Message #14 and #19 in particular: fail2ban: time pattern match is undocumented and unavailable to users

Two examples. Note that you can also test with commands like:

fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf

1 No time stamp:

$ fail2ban-regex ' [1.2.3.4] authentication failed' '\[<HOST>\] authentication failed'

Running tests
=============

Use failregex line : \[<HOST>\] authentication failed
Use single line    : [1.2.3.4] authentication failed

Results
=======

Failregex: 0 total

Ignoreregex: 0 total

Date template hits:

Lines: 1 lines, 0 ignored, 0 matched, 1 missed
|- Missed line(s):
|   [1.2.3.4] authentication failed
`-

2 With time stamp:

$ fail2ban-regex 'Jul 18 12:13:01 [1.2.3.4] authentication failed' '\[<HOST>\] authentication failed'

Running tests
=============

Use failregex line : \[<HOST>\] authentication failed
Use single line    : Jul 18 12:13:01 [1.2.3.4] authentication failed

Results
=======

Failregex: 1 total
|- #) [# of hits] regular expression
|   1) [1] \[<HOST>\] authentication failed
`-

Ignoreregex: 0 total

Date template hits:
|- [# of hits] date format
|   [1] MONTH Day Hour:Minute:Second
`-

Lines: 1 lines, 0 ignored, 1 matched, 0 missed

Scan times

Manual#Reaction time:

It is quite difficult to evaluate the reaction time. Fail2ban waits 1 second before checking for new logs to be scanned. This should be fine in most cases. However, it is possible to get more login failures than specified by maxretry.

In that regard also see this thread: Re: Bug#481265: fail2ban: Poll interval is not configurable. But under optional but recommended software one finds Gamin:

Gamin is a file alteration monitor. Gamin greatly benefits from an "inotify"-enabled kernel. Thus, active polling is no longer required to get the file modifications.

If Gamin is installed and backend in jail.conf is set to auto (or gamin), Gamin will be used.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98553/" ] }
179,521
There are a number of questions similar to this on StackExchange but none address this issue. I want to download all the pdf files in the 2007 directory at http://www3.cs.stonybrook.edu/~algorith/video-lectures/ . So I want wget to parse the html file available at the above link and only follow links that go to pdf files in the 2007 directory. I used the following but it didn't work: wget -r -A pdf -I /2007 'http://www3.cs.stonybrook.edu/~algorith/video-lectures/' Can you also explain why the above does not work?
As noted by Anthon, the -I option does not work that way. But, as you have a reference point, namely ~algorith/video-lectures/ with a listing of files, there are some options. One is to parse the index with other tools and re-run wget. Another is to use --accept-regex: it matches for accept on the complete URL. From man:

--accept-regex urlregex
--reject-regex urlregex
    Specify a regular expression to accept or reject the complete URL.

This should do what you want:

wget -r -nd -A pdf --accept-regex "2007/.*\.pdf" 'http://www3.cs.stonybrook.edu/~algorith/video-lectures/'

Remove -nd if you actually want the directories.

Edit (to address comment): accept vs. accept-regex

This is somewhat cumbersome for me to explain, but I'll give it a try. First off, if you really want to read the manual, then use info. As stated in man (this is from GNU wget) and easy to overlook:

SEE ALSO
    This is not the complete manual for GNU Wget. For more complete information, including more detailed explanations of some of the options, and a number of commands available for use with .wgetrc files and the -e option, see the GNU Info entry for wget.

In this case i.e.:

$ info wget "Following Links" "Types of Files"

or online. Here we find (emphasis mine):

Finally, it’s worth noting that the accept/reject lists are matched twice against downloaded files: once against the URL’s filename portion, to determine if the file should be downloaded in the first place; then, after it has been accepted and successfully downloaded, the local file’s name is also checked against the accept/reject lists to see if it should be removed.

Further it continues to explain that the rationale behind this is that .htm and .html files are always downloaded regardless of accept/reject rules. They should be removed after being downloaded and scanned for links, if they did match the accept/reject lists. Thus: HTML files are always downloaded; after a file is downloaded, the match is done only against the file name. Not sure how much this helped. If you read the info page it might be clearer. It is a bit of complexity with chicken-and-egg things etc. in the mix here.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59902/" ] }
179,534
On my CentOS machine I have to create a new partition/file system; not sure which one I need, to be honest, as I am a little bit confused about it all. If someone could explain it and walk me through it that would be great. The current size of my hard disk is 190GB, but I am only using 100GB of that. When I do df -H I get the following output (these are file systems, if my understanding is correct?):

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        94G   65G   25G  73% /
tmpfs           938M  1.1M  937M   1% /tmp
tmpfs           1.0G   18M 1007M   2% /var/log/httpd

When I launch parted and use the print command, I see the following output (these are partitions, if I understand this correctly?):

GNU Parted 2.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 193GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      65.5kB  193GB  193GB  primary  ext3         boot

How can I create a new file system to fill up the remaining space in the partition? I would like to mount this file system on the directory "/home/admin/admin_backups/", so that I can store all my backups on this new file system. Alternatively, it would also be a solution to resize the current file system; what are the advantages/disadvantages of each approach?
Since your partition seems larger than your filesystem, try growing the filesystem: resize2fs /dev/sda1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94272/" ] }
179,543
I am going to write a simple replacement for Ubuntu's NetworkManager. Is there any place where the Wi-Fi network passwords are stored in Linux? I know about /etc/NetworkManager/nm-system-settings.conf If not, can I store them safely somewhere using some built-in OS utilities?
Ubuntu (and most likely many flavors of Debian) stores the information at /etc/NetworkManager/system-connections . Each of the connections has its own file entry. The files are secured with file mode 600 and owned by root. The files in this directory are not limited to wireless connections; there are files for the wired connections, too.
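A quick hedged check of where the passphrases end up (psk= is the key NetworkManager's keyfile format uses for WPA passphrases):

sudo grep -r '^psk=' /etc/NetworkManager/system-connections/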
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
179,554
I tried several things:

setxkbmap -option caps: return
setxkbmap -option caps: enter

I also tried to modify the file /usr/share/X11/xkb/symbols/pc by: "Key <CAPS> {[Enter]};" But nothing worked.
Not sure if it helps (as not purely in setxkbmap), but:

setxkbmap -option caps:none
xmodmap -e "keycode 66 = Linefeed"

Change back:

setxkbmap -option
xmodmap -e "keycode 66 = Caps_Lock"

You can check with something like:

xev | sed -ne '/^KeyPress/,/^$/p'

to get keycodes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94260/" ] }
179,563
A fixed-length progress bar, a file or byte count, or better yet a timer showing the estimated time remaining would be ideal. zip's standard behavior seems to be to print a line for every file processed, but I don't want that information overload when I zip thousands of files. I want a guesstimate of how long it's going to take. I tried the -q (--quiet) option in combination with -dg (--display-globaldots) but that just floods stdout with multiple lines of dots and gives no useful indication. I also tried -qdgds 10m as mentioned in the man page, but got the same result. I then tried -db (--display-bytes) and -dc (--display-counts) but there doesn't seem to be a global option, so it again prints them for every filename. Lastly, I tried them together with -q like -qdbdc, but that just outputs nothing. Funnily enough, I found a man page on the info-zip site that mentions a -de (--display-est-to-go) option which should "Display an estimate of the time to finish the archiving operation." That sounds exactly like what I want, but the problem is that my version of zip does not have that feature. I'm using Ubuntu 14.04.1 64-bit, bash-4.3.30(1) and zip-3.00. According to Wikipedia, this is zip's latest stable release. There are unreleased beta versions on the info-zip sourceforge page, but I'd rather not entrust my data to a beta release.
zip can compress data to standard output. Hence, you can combine it with other tools like pv : zip -qr - [folder] | pv -bep -s $(du -bs [folder] | awk '{print $1}') > [file.zip] Remove one of the -bep options as your convenience.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22310/" ] }
179,567
I'd like to remove all carriage returns followed by line feeds (CRLF), such as \r\n in a file. How can I do that? I can't use dos2unix because that replaces CRLF with LF. And I can't use tr because that will also replace any \n that aren't preceded by \r . How can I do this?
sed ":a;/\r$/{N;s/\r\n//;b a}" This will match all lines that have '\r' at the end (followed by '\n' ). On these lines it will first append the next line of input (while putting the '\n separator in between), then replace the resulting "\r\n" with an empty string, and then goes back to the beginning to see, whether the new contents of pattern space doesn't by chance happen to match again. Following the comment: if you wanted to strip any additional '\r' from the file as well, just add it after stripping the CRLF combos: sed ":a;/\r$/{$!N;s/\r\n//;t a};s/\r//g"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99621/" ] }
179,577
Suppose I have a native (i.e. coming from manufacturers) windows 7 installation on a laptop (with an SSD device, BIOS/MBR partition table if that matters).The partition on the device is completely allocated and dedicated to windows. I now want to install a linux system alongside windows, and to do that I need to first shrink the windows partition. While I can find ways to do that from within windows or using gparted, how can I do this using only command line programs, like parted or fdisk?
GParted is often worth using because it helps avoid several nasty mistakes. I guess the main advantage of command-line tools here is to have more visibility of details. This can be useful in unexpectedly fragile situations (at least once it's broken, the details might help you realize why). However I wouldn't recommend using them to others unless they want to be able to learn from mistakes up to "my disk is now full of zeros and I need to start from scratch". Also a desktop Linux install process should provide a user-friendly tool for resizing the Windows partition. (Or official documentation). It's the common case. This would be my first recommendation in general. All of these options will recommend making backups in case of any error . Confusingly you should not use the parted command-line tool. It used to be a convenient option, but the developers no longer support resizing filesystems with it. Otherwise, you use ntfsresize , then delete and re-create the partition ( fdisk ) with the same details except for the size. BEWARE UNITS - SOME TOOLS USE MB; OTHERS MAY SAY MB BUT MEAN MiB. fdisk uses MiB and ntfsresize uses MB. The lazy way is to ntfsresize to much smaller than you need (e.g. 2x), then after recreating the partition you run ntfsresize a second time with no explicit size. For the hard way, to convert units, you can run numeric expressions in bash. E.g. to see 10GiB in bytes: echo $((10 * 1024 * 1024 * 1024)) . You can use those expressions as arguments to command-line tools like ntfsresize . The partition name for ntfsresize will look like /dev/sda1 . lsblk -f will list all partitions (including your boot disc) with their size, and tell you about the filesystem. fdisk will want the name of the disk, like /dev/sda . For MBR, the partition details to recreate are: partition type and "active"/bootable flag, as well as starting offset.[1] fdisk should show the partition offset in sectors by default. (If not, there may be fractions which are not shown - possibly indicated by a + on the end, but there might be a trap there - you should be sure to always use fdisk in sectors mode). To avoid typing errors inside fdisk , I sometimes select numbers + paste them with the middle mouse button. That requires either X Windows, or in text mode you need gpm . I think it's less common to provide gpm on the console by default now, but it's there when I use Clonezilla Live. It's convenient, but you could probably lose the number first. So you should probably write the original partition offset down before you delete it. [1] GPT uses a different format for the type, adds some more flags and a partition UUID. I don't think they'd usually be important; flags wouldn't apply to the main Windows partition and the partition UUID isn't used by much yet.
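A sketch of the "lazy" two-pass route described above, with /dev/sda1 and a 100 GiB target as stand-ins:

# pass 1: shrink the NTFS filesystem well below the target partition size
ntfsresize --size 90G /dev/sda1
# delete and recreate /dev/sda1 in fdisk with the same start sector, type
# and bootable flag, but the new smaller size (e.g. 100 GiB), then:
# pass 2: grow the filesystem to exactly fill the new partition
ntfsresize /dev/sda1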
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82565/" ] }
179,580
Here is my code:

function update_profile
{
    echo "1. Update Name"
    echo "2. Update Age"
    echo "3. Update Gender"
    echo "Enter option: "
    read option
    case $option in
        1) update_name ;;
        2) update_age ;;
        3) update_gender ;;
    esac

    function update_name
    {
        echo "Enter new name: "
        read name
    }
}

Just want to make sure it's possible to do it this way. I do know that I can throw all the code into the case, but it would be messy, so I was thinking of creating a stand-alone function within a function, to be called when needed to perform its commands.
Yes, it's possible. It is even possible to nest a function within another function, although this is not very useful.

f1 ()
{
  f2 () # nested
  {
    echo "Function \"f2\", inside \"f1\"."
  }
}

f2   # Gives an error message.
     # Even a preceding "declare -f f2" wouldn't help.

echo

f1   # Does nothing, since calling "f1" does not automatically call "f2".
f2   # Now, it's all right to call "f2",
     #+ since its definition has been made visible by calling "f1".

     # Thanks, S.C.

Source: The Linux Documentation Project
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179580", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98500/" ] }
179,604
I don't have much experience; I'm just trying to understand how processes work, from user level down to the hardware. So when a command is fired from a shell, fork() inherits a child process of it and exec() loads the child process to the memory and executes. If the child process contains all the attributes of the parent process (which is the original process), then what is the need of this child process? The original process also could have been loaded to the memory. Does this fork and exec concept apply to all the executable program in UNIX? Like for shell script also or only for commands? Does it also apply for shell builtin commands? When is the copy on write concept used if I'll execute a command/script? Sorry for asking many questions at a time, but all these questions come to my mind at once when I think about any command execution.
So when a command is fired from a shell, fork() inherits a child process of it and exec() loads the child process to the memory and executes.

Not quite. fork() clones the current process, creating an identical child. exec() loads a new program into the current process, replacing the existing one.

My qs is: If the child process contains all the attributes of the parent process (which is the original process), then what is the need of this child process? The original process also could have been loaded to the memory.

The need is because the parent process does not want to terminate yet; it wants a new process to go off and do something at the same time that it continues to execute as well.

Does this fork and exec concept apply to all the executable program in UNIX? Like for shell script also or only for commands? Does it also apply for shell builtin commands?

For external commands, the shell does a fork() so that the command runs in a new process. Builtins are just run by the shell directly. Another notable command is exec, which tells the shell to exec() the external program without first fork()ing. This means that the shell itself is replaced with the new program, and so is no longer there for that program to return to when it exits. If you say, exec true, then /bin/true will replace your shell, and immediately exit, leaving nothing running in your terminal anymore, so it will close.

when copy on write concept is used if I'll execute a command/script?

Back in the stone age, fork() actually had to copy all of the memory in the calling process to the new process. Copy on Write is an optimization where the page tables are set up so that the two processes start off sharing all of the same memory, and only the pages that are written to by either process are copied when needed.
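The fork-or-not distinction is easy to see from a shell (an illustrative sketch): the first ls below is fork()ed and exec()ed, so the shell survives it; the exec builtin skips the fork(), so the child shell is replaced and the final echo never runs.

$ bash -c 'ls /tmp > /dev/null; echo "shell still alive"; exec ls /tmp > /dev/null; echo "never printed"'
shell still alive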
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/179604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99546/" ] }
179,621
I currently have LVM-cache set up on my Ubuntu install as described in https://rwmj.wordpress.com/2014/05/22/using-lvms-new-cache-feature/ . (I did have to install some of the vivid/proposed packages to get it to work, but I managed.) I was able to successfully convert one of my logical volumes into a cached volume, via:

# lvconvert --type cache --cachepool anson-TA75MH2/lv_cache anson-TA75MH2/root
  Logical volume anson-TA75MH2/root is now cached.

However, after doing this, I am unable to resize the cached partition. When I try to extend the cached partition (in this case named root, since it is going to be the root of my filesystem), I get an error message:

# lvextend anson-TA75MH2/root -L +250G
  Unable to resize logical volumes of cache type.

How can I turn the caching back off, so that I can resize it? For reference: sda is my main 1TB hard drive, containing a large LVM partition and a shrunken ext4 partition that I plan to move into lvm. sdb is a cheap 32GB SSD, with a 500MB ext2 /boot partition, a big lvm partition, and 8GB of swap.

# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  anson-TA75MH2   2   3   0 wz--n- 803.46g 499.96g
# pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/sda1  anson-TA75MH2 lvm2 a--  782.47g 499.96g
  /dev/sdb2  anson-TA75MH2 lvm2 a--   21.00g      0
# lvs
  LV       VG            Attr       LSize   Pool     Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home     anson-TA75MH2 -wi-ao---- 250.47g
  lv_cache anson-TA75MH2 Cwi---C---  20.96g
  root     anson-TA75MH2 Cwi-aoC---  32.00g lv_cache [root_corig]

Alternately, if there is a way to instead cache more than one LV using the same cache, that would be preferred (although I would still like to know how to turn it off). However, when I try it, it refuses:

# lvconvert --type cache --cachepool anson-TA75MH2/lv_cache anson-TA75MH2/home
  lv_cache is already in use by root
lvconvert --uncache anson-TA75MH2/root

seems less susceptible to catastrophic typos than

lvremove anson-TA75MH2/lv_cache

but, as the man page says, those are the main options.
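A sketch of the whole round trip for the volumes in the question; note that --uncache destroys the cache pool, so it has to be recreated afterwards (the pool size and PV below are assumptions based on the question's layout):

lvconvert --uncache anson-TA75MH2/root
lvextend -L +250G anson-TA75MH2/root
lvcreate --type cache-pool -L 20G -n lv_cache anson-TA75MH2 /dev/sdb2
lvconvert --type cache --cachepool anson-TA75MH2/lv_cache anson-TA75MH2/root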
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/179621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29620/" ] }
179,665
I wanted to extract some text with regex in bash, so I decided to try the following simple example out. echo "abc def ghi" | grep -Po " \K(.*?) " I was expecting to get a "def" , but to my surprise a "def " (with a final extra space) was what I got. I'm interested in understanding why grep also includes the extra space at the end and how to get rid of it. I know I could post-process the result with another line but I'm interested in solving this with grep.
In short: \K causes grep to keep everything prior to the \K and not include it in the match. It does not affect what comes after the \K(). This might be enough:

" \K(.+)(?= )"

where (?= ) is a zero-width positive lookahead: it requires a following space but does not include it in the matched text. Or perhaps better:

" \K([^ ]+)(?= )"
" \K(\w+)(?= )"

or similar.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99700/" ] }
179,671
Like jumping to the end of a line is Ctrl + E , where E can be thought of as end, why does it jump to the start using A ?
There are two sides to the question, the technical side and the historical side. The technical answer is: because bash uses GNU Readline . In readline, Control-a is bound to the function beginning-of-line ; you can show this with:
$ bind -q beginning-of-line
beginning-of-line can be invoked via "\C-a", "\M-OH", "\M-[1~", "\M-[7~", "\M-[H".
where \C-a means "Control-a". bind -p will show all bindings (be careful using bind , it's easy to break your keyboard if you accidentally provide additional options or arguments). Some of the above bindings are added by default, others I have added (via .inputrc ) for various terminals I have used. Since bash-2.0, if the terminal termcap contains the capabilities kh and kH , then Home and End will be set to beginning-of-line and end-of-line . Both bash and readline are developed by Chet Ramey , an Emacs user and also the developer of ce , an Emacs clone. (Please note, this endeavours to summarise many years of history from many decades ago, and glosses over some details.) Now, why is it Control-a in particular? Readline uses Emacs-like bindings by default. Control-a in GNU Emacs invokes move-beginning-of-line , what we consider to be the "home" function now. Stallman and Steele's original EMACS was inspired by Fred Wright's E editor (an early WYSIWYG editor) and TECO (a cryptic modal editor/language); EMACS was a set of macros for TECO. See Essential E [PDF] (from SAIL , 1980). E however used Control-Form for "beginning of line"; this was on the "DataDisc" keyboard, which had a Control key and a Form key. The space-cadet keyboard of the time (lacking a Home key, by the way, though it had an End ) is commonly blamed for the Emacs keyboard interface. One of the desirable features of EMACS was its use of TECO's Control-R "real-time" line editing mode (TECO predates CRT/keyboard terminals); you can see the key bindings on page 6 of the MIT AI Lab 1978 ITS Introduction to the EMACS editor [scanned PDF], where ┌ is used to denote Control. In this mode, the key bindings were all control sequences, largely mnemonic: Control-E End of this line , Control-P move to previous line , Control-N move to next line , Control-B backward one character , and not least Control-A move to beginning of this line . Costas' suggestion of "first letter of the alphabet" for this is as good as any. (A similar key binding is in the tvlib macro package, which aimed to make EMACS behave like the TVEDIT editor, binding Control A and E to backward and forward sentence , but used different sequences for beginning and end of line.) The Control-A/Control-E bindings in "^R mode" were implemented directly in the ITS TECO (1983, version 1208; see the _teco_.tgz archive at the nocrew PDP10/ITS site, or on Github), though I cannot determine more accurately when they first appeared, and the TECO source doesn't indicate why any particular bindings were chosen. The 1978 MIT EMACS document above implies that in 1978 EMACS did not use TECO-native Control-A/Control-E; it's possible that the scrlin macro package (screen line) implemented these.
To recap:
bash uses readline
readline key bindings follow Emacs/EMACS
the original EMACS was created with TECO, inheriting many features
TECO's interactive-mode macros used (mostly) mnemonic control key bindings, and "start of line" ended up assigned to Control-A
See also:
http://www.gnu.org/gnu/rms-lisp.html
http://xahlee.info/kbd/keyboard_hardware_and_key_choices.html
http://blog.djmnet.org/2008/08/05/origin-of-emacs/
http://www.jwz.org/doc/emacs-timeline.html
http://www.multicians.org/mepap.html
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/179671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80904/" ] }
179,756
Trying to use cryptsetup to mount a drive encrypted with truecrypt. Doing this: sudo cryptsetup open --type tcrypt --readonly /dev/sdc1 encrypted_drive and then typing the passphrase gives me: Activation is not supported for 4096 sector size. What does this error mean, and how can I mount my truecrypt volume? Useful information: The drive was encrypted with truecrypt 7.1a The machine trying to do this is booted into a live USB version of ubuntu, specifically ubuntu 14.04.01, i386 desktop version. cryptsetup --version yields cryptsetup 1.6.1 removing the --readonly option produces no change
cryptsetup expects the sector size to be 512 , but in your case it seems to be 4096 , since that is what truecrypt does for devices with physical/logical sector size of 4096 . This information is stored in the TrueCrypt header, you can also see it with cryptsetup tcryptDump . The Linux version of truecrypt mounts such containers fine like so: truecrypt /dev/sdc1 /mnt/somewhere According to dmsetup it still uses regular encryption regardless of sector size, so this is a limitation of cryptsetup itself. You could open an issue for it on the cryptsetup issue tracker: https://code.google.com/p/cryptsetup/issues/list
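To check what the header actually records before deciding how to open the volume, the dump mentioned above should do (it prompts for the passphrase):
sudo cryptsetup tcryptDump /dev/sdc1
A sector size of 4096 in the output confirms why activation is refused.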
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73165/" ] }
179,759
Let's suppose we have to users: alice and bob . Now Bob wants to move Alice's ~/Documents directory into his home folder. What's the best workflow to do that, updating the permissions (from Alice to Bob)? That means that all the rights Alice has on the /home/alice/Documents/ (directories and files, recursively) to be added to Bob /home/bob/Documents/ (directories and files, recursively), and Alice's rights will be removed from /home/bob/Documents .
If you change the file owner using chown , the permissions for alice would be transferred to bob. So here's the flow:
sudo mv ~bob/Documents ~bob/Documents.orig
sudo mv ~alice/Documents/ ~bob/Documents
sudo chown -PR bob ~bob/Documents
Edit: In case you want to overwrite the group as well, use
sudo chown -PR bob:bob ~bob/Documents
Or:
sudo chown -PR bob: ~bob/Documents
to use bob's primary group. However, beware that this could be problematic in case ~alice/Documents had non-default group permissions. In that case it might be better to use something like
sudo find ~bob/Documents -group alice -exec chown -h bob: {} +
If ACLs are in use, you may want to check those as well.
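For that last check, a possible sketch (assuming the getfacl tool from the acl package is installed; -R recurses):
sudo getfacl -R ~bob/Documents | grep -i alice
Any remaining entries for alice can then be dropped with setfacl -x u:alice on the files concerned.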
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45370/" ] }
179,785
I am copying text from a pdf, and when I paste it into a text editor it comes out like this: The text does not extend to the right margin but looks like a column, and there's a space between the lines. I'd like the text to extend to the right margin and no spaces between lines. I can format this manually, but it's very time consuming. Is there a program which will allow me to automate this?
grep . removes all blank lines. You can pipe the result into fmt to reformat the text to a width of your choice. If you have the text in the X clipboard, xsel -b will get it from there. xsel -b | grep . | fmt -w 80 >reformatted.txt If you don't want line breaks at all, you can replace newlines by spaces, but add a newline at the end. xsel -b | grep . | tr '\n' ' '; echo The output won't be very good, because according to your image, hyphens are lost, so “vul-/gar” comes out as “vul gar”, “Thanks-/giving” as “Thanksgiving”, etc. grep . collapses all paragraphs into one. You can avoid this only if there is some way in which paragraphs are marked in your text. If there is a single blank lines between lines of the same paragraph and at least two blank lines between paragraph, you can remove line breaks and preserve paragraph breaks like this: awk 'length {if (previous < NR-2) print ""; previous = NR; print}' You can try running pdftotext on the PDF directly. This won't reformat the text and may or may not include the blank lines (it depends how the PDF was made).
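A concrete sketch of that last suggestion, with the filename assumed ( pdftotext ships with poppler-utils on many systems):
pdftotext -layout document.pdf - | grep . | fmt -w 80 > reformatted.txt
The trailing - sends the extracted text to standard output so it can be piped, just like the clipboard text above.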
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
179,851
I find it very convenient to install packages on a new machine through package files like brewfiles, caskfiles, dockerfiles, package.json etc. Is there an alternative to this for apt-get since I still just use it through commandline with apt-get install pkg1 pkg2 pkg3… ?
As specified in the comments of your question, you can write a simple text file, listing the packages to install, one per line:
iceweasel
terminator
vim
Assuming this is stored in packages.txt , then run the following command: xargs sudo apt-get install <packages.txt xargs is used to pass the package names from the packages.txt file to the command line. From the xargs manual: xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo ) one or more times with any initial arguments followed by items read from standard input.
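One caveat: xargs takes over standard input, so apt-get cannot prompt for confirmation. With GNU xargs you can read the list from a file instead ( -a ) and make apt-get non-interactive ( -y ):
xargs -a packages.txt sudo apt-get install -y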
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/179851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99807/" ] }
179,854
Example: I type man ls , than I want to get man only. By using !! I can get man ls but how do I get man ?
You can select a particular word from the last typed command with !!: and a word designator. As a word designator you need 0 . You may find ^ and $ useful too. From man bash : Word Designators 0 (zero) The zeroth word. For the shell, this is the command word. ^ The first argument. That is, word 1. $ The last argument. So in your case try: echo !!:0
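A quick demonstration; bash prints the expanded command line before executing it:
$ man ls
$ echo !!:0
echo man
man
Here !!:^ and !!:$ would both give ls , it being the first and the last argument.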
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
179,882
Out of rage, I quit vim by using :wq!!! . This created a file named !! . Given that !! references the previous command, attempting to interact with it yields interesting results. I tried rm ./!! and rm -- !! . Both would pull in the previous command (as it should). An easy solution is to simply start a shell that doesn't treat !! like anything special, but that's too easy. How can I properly interact with the file in bash?
You can remove a file with a name like !! by escaping it: rm \!\! or just rm !<TAB> -> rm \!\! (tab completion escapes it for you).
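Quoting works as well, since history expansion is suppressed inside single quotes (but not inside double quotes):
rm '!!'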
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14860/" ] }
179,894
To make a long story short, my (CentOS 7) server's /boot is too small (100MiB) to hold 2 kernels plus the automatically generated rescue image. I want to avoid the hassle of repartitioning and reinstalling my server by preventing the rescue image from being generated. This would leave enough space for at least 2 kernels, and I can still use my hoster's netboot rescue solution should it be needed. (I know the only 'right' way to deal with this is to fix my partition scheme, but considering the downtime involved with that I wanted to try a more pragmatic solution first)
Open the file /usr/lib/dracut/dracut.conf.d/02-rescue.conf and change
dracut_rescue_image="yes"
to
dracut_rescue_image="no"
This seems to be the only way for CentOS 7.
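A sketch of the whole cleanup; the rescue-image file names below are assumed, so check ls /boot for the exact names on your system first:
sudo sed -i 's/^dracut_rescue_image="yes"/dracut_rescue_image="no"/' /usr/lib/dracut/dracut.conf.d/02-rescue.conf
sudo rm /boot/vmlinuz-0-rescue-* /boot/initramfs-0-rescue-*
The config change only prevents future rescue images from being generated; deleting the existing ones is what actually frees space in /boot .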
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/179894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99815/" ] }
179,954
I am running Ubuntu 12.04 on my laptop using VMware Player. I am not sure why but I have an account called "User Account" in addition to my account that I usually login to use Ubuntu. Well that was just a side comment but basically all I am trying to do is install the ncurses library on Ubuntu. I have tried installing ncurses using the following command lines: sudo apt-get install libncurses5-devsudo apt-get install ncurses-dev When I tried installing ncurses twice using the above commands I received the following prompt in the terminal: [sudo] password for username When I type in my password I receive the following message: username is not in the sudoers file. This incident will be reported. So far I have tried enabling the root user ("Super User") account by following these instructions . Here are some of the things the link suggested to do: Allow an other user to run sudo. Type the following in the command line: sudo adduser username sudo Or sudo adduser username sudo logging in as another user. Type the following in the command line: sudo -i -u username Enabling the root account. Type the following in the command line: sudo -i Or sudo passwd root I have tried all of the above command lines and after typing in each command I was prompted for my password. After I entered my password I received the same message as when I tried to install ncurses: fsolano is not in the sudoers file. This incident will be reported.
When this happened to me all I had to do to fix it was: Step 1. Open a terminal window, CTRL + ALT + T on my system (Debian KDE after setting up as hotkey) Step 2. Entered root using command su root Step 3. Input root password Step 4. Input command apt-get install sudo -y to install sudo Step 5. Add user to sudoers file by inputting adduser username sudo , put your username in place of username Step 6. Set the correct permissions for sudoers file by inputting chmod 0440 /etc/sudoers Step 7. Type exit and hit Enter until you close your terminal window. Shutdown your system completely and reboot. Step 8. Open another terminal window. Step 9. Try any sudo command to check if your username is correctly added to sudoers file. I used sudo echo "Hello World!" . If your username has been correctly added to the sudoers list then you will get Hello World! as the terminal response!
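Steps 4 and 5 can also be done with the usual one-liner, run as root (this assumes a Debian-style system where the sudo group carries the privilege; on Fedora/CentOS the group is typically wheel ):
usermod -aG sudo username
The -a flag matters: without it, usermod replaces the user's supplementary groups instead of appending to them.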
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/179954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99906/" ] }
180,008
My colleague is generating log files with a preceding date format like 2015120 , which represent January as 1 instead of 01 . The usual way I'm using to deal with this kind of issue is using date command.Like date +'%Y%m%d' . But I maned date command, it turns out they didn't mention represent January without a preceding 0. So I'm wondering is there an another way to represent date like 2015120 in Linux?
With GNU, FreeBSD or OS/X date (or date implementations that use the system's libc 's strftime() where that is the GNU libc ), adding hyphen - after % prevents numeric fields from being padded with zeroes:
$ date +'%Y%-m%d'
2015120
From man date on a GNU system: By default, date pads numeric fields with zeroes. The following optional flags may follow `%': - (hyphen) do not pad the field If your system date does not support that, you can use perl :
$ perl -MTime::Piece -e ' $t = localtime; print $t->year, $t->mon, $t->mday;'
2015122
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
180,030
I want to run a command such as this in a bash script: freebcp <authentication and other parameters> -t "," -r "\r\n" When run directly on the command line, the command works as expected. But when placed in a variable in a bash script it returns an error such as this: Msg 20104, Level 3Unexpected EOF encountered in bcp datafileMsg 20074, Level 11Attempt to bulk copy an oversized row to the serverbcp copy in failed When command is placed in a variable and double quotes are escaped: cmd="freebcp ${db_name}.dbo.${table_name} in ${p_file} -S ${server_name} -U ${username} -P ${password} -t \",\" -r \"\r\n\" -c"`$cmd` Note : Putting the following in the script works as expected: `freebcp ${db_name}.dbo.${table_name} in ${p_file} -S ${server_name} -U ${username} -P ${password} -t "," -r "\r\n" -c` So I know there's some quoting/escaping/expansion problems but I can't figure out how to fix it. Note 2 : Single quoting -t -r parameters doesn't work either
Short answer: see BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail! Long answer: the shell does variable expansion partway through the process of parsing a command line -- notably, after it processes quotes and escapes. As a result, putting quotes and escapes in a variable doesn't do the same thing as having them directly on the command line. The solution in your answer (doubling the escape characters) will work (in most cases), but not for the reason you think it's working, and that makes me rather nervous. The command:
cmd="freebcp ... -t "," -r "\\r\\n" -c"
gets parsed into the double-quoted string freebcp ... -t , followed by the unquoted string , (a bare comma), followed by the double-quoted string -r , followed by the unquoted string '\\r\\n' (the fact that it's unquoted is why you needed to double the escapes), followed by the double-quoted string ' -c'. The double-quotes you meant to be part of the string aren't treated as part of the string, they're treated as delimiters that change how different parts of the string are parsed (and actually have pretty much the reverse of the intended effect). The reason this works is that the double-quotes actually weren't having much effect in the original command, so reversing their effect didn't do much. It would actually be better to remove them (just the internal ones, though), because it'd be less misleading about what's really going on. That'd work, but it'd be fragile -- the only reason it works is that you didn't really need the double-quotes to begin with, and if you had a situation (say, a password or filename with a space in it) where you actually needed quotes, you'd be in trouble. There are several better options: Don't store the command in a variable at all, just execute it directly. Storing commands is tricky (as you're finding), and if you don't really need to, just don't. Use a function. If you're doing something like executing the same command over & over, define it as a function and use that:
loaddb() { freebcp "${db_name}.dbo.${table_name}" in "${p_file}" -S "${server_name}" -U "${username}" -P "${password}" -t "," -r "\r\n" -c; }
loaddb
Note that I used double-quotes around all of the variable references -- this is generally good scripting hygiene, in case any of them contain whitespace, wildcards, or anything else that the shell does parse in variable values. Use an array instead of a plain variable. If you do this properly, each command argument gets stored as a separate element of the array, and you can then expand it with the idiom "${arrayname[@]}" to get it out intact:
cmdarray=(freebcp "${db_name}.dbo.${table_name}" in "${p_file}" -S "${server_name}" -U "${username}" -P "${password}" -t "," -r "\r\n" -c)
"${cmdarray[@]}"
Again, note the prolific use of double-quotes; here they're being used to make sure the array elements are defined properly, not as part of the values stored in the array. Also, note that arrays aren't available in all shells; make sure you're using bash or zsh or something similar. A couple of final notes: when you use something like: `somecommand` the backquotes aren't doing what you seem to think they are, and in fact they're potentially dangerous. What they do is execute the command, capture its output, and try to execute that output as another command. If the command doesn't print anything, this doesn't matter; but if it does print something, it's unlikely the output will be a valid command. Just lose the backquotes.
Lastly, giving a password as a command argument is insecure -- command arguments are published in the process table (for example, the ps command can see them), and publishing passwords in public locations is a really bad idea. freebcp doesn't seem to have any alternative way to do this, but I found a patch that'll let it read the password from stdin ( echo "$password" | freebcp -P - ... -- note that echo is a shell builtin, so its arguments don't show up in the process table). I make no claims about the correctness, safety, etc. of the patch (especially since it's rather old), but I'd check it out if I were you.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38654/" ] }
180,036
I have tried the following with both LinuxMint 13 Cinnamon 32 bit and Trisquel 7.0 Gnome 32 bit and get the same error message: $ yes | sudo e2fsck /dev/sdaxe2fsck 1.42 (29-Nov-2011)e2fsck: need terminal for interactive repairs In each case, the partition /dev/sdax was not mounted. Is it not possible to use yes with e2fsck?
yes cannot help here: e2fsck insists that its standard input be a real terminal before attempting interactive repairs, and refuses to run when fed from a pipe -- that is exactly what the "need terminal for interactive repairs" message means. The supported way to get the same effect is e2fsck's own option: sudo e2fsck -y /dev/sdax answers "yes" to every question; -p instead performs only the automatic, safe repairs; and -n answers "no" everywhere (a read-only check). If you are sceptical whether e2fsck -y will cover you next time, there is always the low-tech fallback of a stack of coins holding down the Y key, the same trick used in the MS-DOS days when there were no yes or -y or equivalent options.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98407/" ] }
180,066
I have some files in a directory. And I want to add some lines at top and end of file using awk . Example: My awk command: awk 'BEGIN { print "line1\nline2" } { print $0 } END { print "line3\nline4" }' file |tee file By using above command I can add line1 & line2 at the top & line3 , line4 at the end of file Now I want to do same action for all files that are exist in current directory. If I use : awk 'BEGIN { print "line1\nline2" } { print $0 } END { print "line3\nline4" }' * Then I get output on terminal screen but I can't redirect to (or overwrite) all files . So, I tried following (To find + awk ): find -type f -exec awk 'BEGIN { print "line1\nline2" } { print $0 } END { print "line3\nline4" }' '{}' \; By using above command I can print output on screen and hence to overwrite files , I've tried following (To find + awk + overwrite with tee ), but it getting error: $ find -type f -exec awk 'BEGIN { print "line1\nline2" } { print $0 } END { print "line3\nline4" }' '{}' | tee '{}' \;find: missing argument to `-exec' Hence, How can I use awk to overwrite (i.e: with |tee or something else) for all files in current directory by command?
With GNU awk 4.1 or above:
find . -type f -exec awk '
    @load "inplace"
    BEGINFILE {
        inplace_begin(FILENAME, "")
        print "line1\nline2"
    }
    {print}
    ENDFILE {
        print "line3\nline4"
        inplace_end(FILENAME, "")
    }' {} +
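If that gawk extension is not available, a plain POSIX sketch with a temporary file achieves the same (marker lines taken from the question; note the rewritten file gets fresh permissions from your umask):
find . -type f | while IFS= read -r f; do
  { printf 'line1\nline2\n'; cat "$f"; printf 'line3\nline4\n'; } > "$f.tmp" && mv "$f.tmp" "$f"
done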
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
180,075
I want to speed up creation of user accounts on some Linux VMs that I am creating, and wondered if I could simplify the process of writing to the new user's ~/.ssh/authorized_keys or ~/.ssh/authorized_keys2 file. My manual process is, approximately (logged in as the new user): ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsatouch ~/.ssh/authorized_keyschmod go-rwx ~/.ssh/authorized_keysecho '... me@machine' >> ~/.ssh/authorized_keys Is there any way, with a Bash command, a standard GNU command, or any program easily installed on Ubuntu, to condense the touch , chmod , and echo into one command? Part of the reason I would like to reduce it to one command is so that I can make a shell script that I can run as the initial sudo-capable user on the VM. Something like: sudo su - me -c 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'echo '... me@machine' | sudo su - me -c 'xyz 0600 ~/.ssh/authorized_keys' Where xyz is the hypothetical command that creates, sets permissions, and writes the file all in one fell swoop.
Note that
touch ~/.ssh/authorized_keys
chmod go-rwx ~/.ssh/authorized_keys
echo '... me@machine' >> ~/.ssh/authorized_keys
offers no benefit over:
echo '... me@machine' >> ~/.ssh/authorized_keys
chmod go-rwx ~/.ssh/authorized_keys
Permissions are checked upon opening a file, not upon reading or writing to it. So it doesn't matter whether you do the chmod before or after adding the content. Someone could still open the file before you do the chmod but wait for you to add content before reading it. Here you want to make sure the file has the right permissions from the start:
sudo -Hu user sh -c 'umask 077 && printf "%s\n" "$1" >> ~/.ssh/authorized_keys' sh "$key user@host"
-H forces $HOME to be set to user 's home directory (even on systems where sudo is configured not to do that by default), so that the sh it spawns expands ~ to that directory. You could also use ~user instead of ~ .
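If the account is brand new and ~/.ssh might not exist yet, a variant that also creates the directory with safe permissions first ( mkdir 's -m flag sets the mode at creation time):
sudo -Hu user sh -c 'umask 077 && mkdir -p -m 700 ~/.ssh && printf "%s\n" "$1" >> ~/.ssh/authorized_keys' sh "$key user@host"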
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38382/" ] }
180,077
What is the smallest interval for the watch command? The man page and Google searches do not indicate what the smallest interval lower limit is. I found through experimentation it can be smaller than 1 second. To test, I ran this command run on a firewall: watch -n 0.1 cat /sys/class/net/eth1/statistics/rx_bytes It clearly updates faster than one second, but it is not clear if it is really doing 100ms updates.
What platform are you on? On my Linux (Ubuntu 14.10) the man page says: -n, --interval seconds Specify update interval. The command will not allow quicker than 0.1 second interval, in which the smaller values are converted. I just tested this with a script calling a C-program that prints the timestamp with microseconds and it works.
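An easy way to see it for yourself, if GNU date is available ( %N prints nanoseconds):
watch -n 0.1 date +%s.%N
The fractional part visibly changes in roughly 0.1-second steps; asking for -n 0.01 still updates only ten times per second, since smaller values are converted to 0.1 as the man page says.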
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99984/" ] }
180,080
(I have read many of the questions on this site that look related and I believe this is a genuinely new question.) I have lots of keys on lots of servers and they're all protected with passphrases. I like entering passphrases about as much as I like entering passwords - it's a real productivity drain. ssh-agent + ssh-add commands can be used on a login shell to mean you only have to enter your passphrase once at login keychain can be used to hold an ssh-agent alive beyond logout, so for example you can have it so you only have to enter the passphrase once at boot, or you can have it keep it alive for an hour or so. The problem I have is that both of these solutions typically get initiated in a shell login (e.g. .zshrc ) rely on me entering my passphrase when I log in, even if I'm not going to need it unlocked. (I'm not happy with keychain keeping an agent alive indefinitely.) What I would like is to be prompted for a passphrase (for an agent) only when needed . So I can log in to server A, do some stuff, then ssh to server B and at that point be asked for the passphrase. Do some stuff on server B, log out. Back on A, do some more stuff, ssh to B again and not need my passphrase (it's held by an agent). I note that this is possible on graphical desktops like Gnome - you get a pop-up asking for the passphrase to unlock your private key as soon as you try to ssh. So this is what I'm after but from a console.
Don't add anything to any of your shell startup scripts; this is unnecessary hackery. Instead, add AddKeysToAgent yes to your .ssh/config . This way, ssh-add is run automatically the first time you ssh into another box. You only have to re-enter your key's passphrase when it expires from ssh-agent or after you reboot.
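Concretely, that is one stanza in ~/.ssh/config (note that AddKeysToAgent needs OpenSSH 7.2 or newer, and an ssh-agent must already be running):
Host *
    AddKeysToAgent yes
Replace the * with a host pattern if you only want this behaviour for particular machines.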
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23542/" ] }
180,087
xterm : $ echo $TERMxterm-256color$ stty -aspeed 38400 baud; rows 52; columns 91; line = 0;intr = ^C; quit = ^\; erase = ^H; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>;swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V;flush = ^O; min = 1; time = 0;-parenb -parodd -cmspar cs8 -hupcl -cstopb cread -clocal -crtscts-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany-imaxbel iutf8opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke gnome-terminal : $ echo $TERMxterm-256color$ stty -aspeed 38400 baud; rows 57; columns 100; line = 0;intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?; swtch = M-^?;start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;-parenb -parodd -cmspar cs8 hupcl -cstopb cread -clocal -crtscts-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc ixany imaxbeliutf8opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke When outside tmux , Ctrl - v Ctrl - h outputs ^H . Inside tmux , I start getting ^? if run from xterm . Inside screen run from xterm it still outputs ^H . What's the reason behind this? Should it output ^H or ^? ? How to remedy this?
The reason is that in your xterm, ^H is the erase character, and tmux apparently translates the erase character to the corresponding control character ( ^? ) for the terminal it emulates, so that erasing works as expected in cooked mode (for instance, what happens when you just type cat ). The translation is needed in case you use a terminal with ^? as the erase character (generated by the Backspace key), then resume the session with a terminal that uses ^H as the erase character (generated by the Backspace key). Unfortunately this has visible side effects in some cases, e.g. if you type Ctrl + H . The only good remedy is to make sure that all your terminals (real or in tmux) use the same erase character, which should be ^? (this is standard nowadays). It seems that your xterm is badly configured. This is not the default configuration, AFAIK. In any case, you need to make sure to use a TERM value for which kbs=\177 . However this is not the case for xterm-256color from the official ncurses. So, you either need to select a different TERM value or you need to fix the kbs entry for xterm-256color (this can be done by the end user with: infocmp > file , modify file , then tic file ). Some Linux distributions do not have this problem; for instance, Debian has fixed this problem via a debian/xterm.ti file in its ncurses source package, giving:
$ infocmp xterm-256color | grep kbs
kbs=\177, kcbt=\E[Z, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC,
You should also have:
$ appres XTerm | grep backarrowKeyIsErase:
*backarrowKeyIsErase: true
Note that you can do stty erase '^?' in xterm (before doing anything else), but this is just a workaround (and it may break the behavior of the Backspace key). You should actually have erase = ^? (as shown by stty -a ) by default! In case problems with Backspace and/or Delete remain, I recommend the Consistent BackSpace and Delete Configuration document by Anne Baretta.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29867/" ] }
180,123
Oracle Linux 5.10 I can manually run the script "tblspc_usage.sh" successfully as the oracle user. I know this because it emails me a report. But it not run at all from cron. It doesn't generate a log file. [oracle@dub-ImrORA3 scripts]$ crontab -l# user /bin/shSHELL=/bin/sh# mail to oracle userMAILTO=oracle# run at 9:45 AM monday thur friday45 09 * * 1-5 /home/oracle/scripts/tblspc_usage.sh 2>&1 /home/oracle/scripts/tblspc_usage.log[oracle@dub-ImrORA3 scripts]$ ls -al tblspc_usage.sh-rwxrwxr-- 1 oracle oinstall 2013 Jan 20 09:12 tblspc_usage.sh Okay, here is the email in /var/mail/oracle/home/oracle/scripts/tblspc_usage.sh: line 15: sqlplus: command not foundgrep: body.log: No such file or directory Here is my shell script: #!/bin/sh## tblspc_usage.sh#=======================================## Checks for tablespace usage exceeding 90% and email the details# does not check undo or temp tablespaces#=======================================#ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 ; export ORACLE_HOMEORACLE_SID=IMR1 ; export ORACLE_SIDuser="system"pass="letmein" # bogus passwdsqlplus -S $user/$pass <<EOF column "TOTAL ALLOC (MB)" format 9,999,990.00 column "TOTAL PHYS ALLOC (MB)" format 9,999,990.00 column "USED (MB)" format 9,999,990.00 column "FREE (MB)" format 9,999,990.00 column "% USED" format 990.00 set echo off spool body.log select a.tablespace_name, a.bytes_alloc/(1024*1024) "TOTAL ALLOC (MB)", a.physical_bytes/(1024*1024) "TOTAL PHYS ALLOC (MB)", nvl(b.tot_used,0)/(1024*1024) "USED (MB)", (nvl(b.tot_used,0)/a.bytes_alloc)*100 "USED %" from (select tablespace_name, sum(bytes) physical_bytes, sum(decode(autoextensible,'NO',bytes,'YES',maxbytes)) bytes_alloc from dba_data_files group by tablespace_name ) a, (select tablespace_name, sum(bytes) tot_used from dba_segments group by tablespace_name ) b where a.tablespace_name = b.tablespace_name (+) and a.tablespace_name not in (select distinct tablespace_name from dba_temp_files ) and a.tablespace_name not like 'UNDO%' and ( nvl(b.tot_used,0)/a.bytes_alloc)*100 >= 90.00 order by 1; spool off exitEOF # if the word "TABLESPACE" exists in the spool file # then at least one tablespace has usage over 90% if grep -q TABLESPACE "body.log"; then cat /home/oracle/scripts/body.log | mailx -s "ORA3 - Tablespace(s) 90% Full" \ [email protected] fi
cron runs your script with a minimal environment (typically just PATH=/usr/bin:/bin , and none of the profile files that an interactive oracle login would source), so sqlplus , which lives in $ORACLE_HOME/bin , is simply not on the PATH . That is the sqlplus: command not found error; the grep: body.log: No such file or directory follows from it, since sqlplus never ran to create the spool file (and body.log is a relative path anyway: cron starts in the user's home directory, not in ~/scripts ). The fix is to set the PATH inside the script, or call sqlplus by its full path $ORACLE_HOME/bin/sqlplus , and to use absolute paths for the spool file. As a side note, the reason you see no log file either is that the crontab line is missing its redirection operator: 2>&1 /home/oracle/scripts/tblspc_usage.log passes the log name to the script as an argument instead of redirecting output into it.
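A minimal sketch of the three fixes, with the paths taken from the question. Near the top of tblspc_usage.sh, after ORACLE_HOME is set:
PATH=$ORACLE_HOME/bin:/usr/bin:/bin; export PATH
Spool to an absolute path (and adjust the grep / cat lines to match):
spool /home/oracle/scripts/body.log
And put the missing >> into the crontab entry:
45 09 * * 1-5 /home/oracle/scripts/tblspc_usage.sh >> /home/oracle/scripts/tblspc_usage.log 2>&1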
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88123/" ] }
180,153
I would like to append the contents of a multi-line text file after a particular line in a string. For example, if the file file.txt contains line 1line 2 I'd like do so something like printf "s1random stuff\ns2 more random stuff\ns1 final random stuff\n" | sed "/(^s2.+)/a $(<file.txt)" To get the output: s1 random stuffs2 more random stuffline 1line 2s1 final random stuff I've tried various combinations of quotes and escape characters, but nothing really seems to work. In my particular use case, the string will be a bash variable, so if there's some esoteric thing that that makes that easier it'd be good to know. What I've got right now that works is writing the string to a file, using grep to find the line I'd like to append after and then using a combination of head, printf, and tail to squish the file together. It just seems like I shouldn't have to write the text to a file to make this work.
Note that you don't have to read the file beforehand; sed has the r command that can read a file:
$ printf -v var "%s\n" "s1random stuff" "s2 more random stuff" "s1 final random stuff"
$ echo "$var"
s1random stuff
s2 more random stuff
s1 final random stuff
$ sed '/^s2/r file.txt' <<< "$var"
s1random stuff
s2 more random stuff
line 1
line 2
s1 final random stuff
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41847/" ] }
180,247
I am learning Linux and I was trying the gzip command. I tried it on a folder which has a hierarchy like Personal/Folder1/file1.amrPersonal/Folder2/file2.amrPersonal/Folder3/file3.amrPersonal/Folder4/file4.amr I ran "gzip -r Personal"and now its like Personal/Folder1/file1.amr.gzPersonal/Folder2/file2.amr.gzPersonal/Folder3/file3.amr.gzPersonal/Folder4/file4.amr.gz How do I go back?
You can use gunzip -r Personal which works the same as gzip -d -r Personal . If gzip on your system does not have the -r option (e.g. busybox's gzip ), you can use find Personal -name "*.gz" -type f -print0 | xargs -0 gunzip
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100089/" ] }
180,271
I have a couple of files with ".old" extension. How can I remove the ".old" extension without remove the file? I can do it manually but with more work: mv file1.key.old file1.keymv file2.pub.old file2.pubmv file3.jpg.old file3.jpgmv file4.jpg.old file4.jpg(etc...) The command will work with other extensions too? example: mv file1.MOV.mov file1.MOVmv file2.MOV.mov file2.MOVmv file3.MOV.mov file3.MOV(etc...) or better: mv file1.MOV.mov file1.movmv file2.MOV.mov file2.movmv file3.MOV.mov file3.mov(etc...)
Use bash's parameter substitution mechanism to remove the matching suffix pattern:
for file in *.old; do
    mv -- "$file" "${file%%.old}"
done
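For the second example in the question, where .MOV.mov should become just .mov , the same mechanism works: strip the whole suffix and put back the part you want to keep:
for file in *.MOV.mov; do
    mv -- "$file" "${file%.MOV.mov}.mov"
done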
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/180271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99659/" ] }
180,306
I'd like to use fgrep to handle searching literal words with periods and other meta-characters in grep , but I need to ensure the word is at the beginning of the line. For example, fgrep 'miss.' will match miss. exactly which is what I want, but also admiss. or co. miss. which I don't want. I might be able to escape meta-characters, e.g. grep '^miss\.' , but the source is so large, I'm bound to miss something, and then need to run it again (will take the whole night). And in some cases, e.g. \1 , the escaped code is the one with "meta-meaning". Any way around this?
With GNU grep if built with PCRE support and assuming $string doesn't contain \E , you can do: grep -P "^\Q$string" With perl 's rindex : perl -sne 'print if rindex($_, $string, 0) == 0' -- -string="$string" With awk : S=$string awk 'index($0, ENVIRON["S"]) == 1'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40428/" ] }
180,330
I'm overwriting my hard drive with random data using the good old dd : dd if=/dev/urandom of=/dev/disk/by-uuid/etc bs=512 It's a 2TB array and my MacBook (running Linux, ok?) can only write data at around 3.7MB/s, which is pretty pathetic as I've seen my desktop at home do 20MB/s. When I go home tonight, I'd like to stop the dd run here, take it home, and see what kind of progress can be made overnight with a more powerful machine. I've been monitoring the progress using a simple loop: while true; do kill -USR1 $PID ; sleep 10 ; done The output looks like this: 464938971+7 records in464938971+7 records out238048755782 bytes (238 GB) copied, 64559.6 s, 3.7 MB/s If I were to resume the dd pass at home, how would I restart it? I'm aware of the seek parameter, but what do I point it to, the record number or the byte count?
As @don_crissti already commented, just use seek= to resume. dd if=/dev/urandom of=/dev/disk/by-uuid/etc bs=512 seek=464938971 GNU dd also supports seeking in bytes, so you can resume exactly, regardless of blocksize: dd if=/dev/urandom of=/dev/disk/by-uuid/etc bs=1M \ seek=238048755782 oflag=seek_bytes A larger blocksize should help with speeds even for a slow device like /dev/urandom . If you are looking for faster alternatives, you could cryptsetup plainOpen with a random key and zero that, it should beat /dev/urandom by an order of magnitude (without AES-NI) or even run at full speed (with AES-NI). You could also use shred -n 1 if pseudorandom data is good enough for your use case. shred should be able to utilize the full disk speed, even on a very slow machine.
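As an aside, GNU dd from coreutils 8.24 onwards can report progress by itself, which replaces the kill -USR1 loop from the question:
dd if=/dev/urandom of=/dev/disk/by-uuid/etc bs=1M seek=238048755782 oflag=seek_bytes status=progress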
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
180,351
It is my understanding that the more modern $(...) command substitution syntax is preferred over the old ` -based syntax, due to easier and less error-prone nesting and escaping syntax. Further, it seems that most /bin/sh -style shells in modern use support $(…) : bash ash (and therefore BusyBox, so most embedded Linux) dash FreeBSD /bin/sh And $(…) is specified by IEEE 1003.1. So I have 2 very related questions: Is there any reason to use ` in new development of shell scripts unless you know of a specific old system that the script will need to run on? Is there any reason not to teach UNIX programming students just to write $(...) , and discuss ` only as an obsolete variant that they will likely encounter if they are reading other developers' shell scripts (and may need if they are working with a really old system or nonstandard for some reason)?
Since back-ticks are often used, it makes sense to teach this syntactic construct. Of course, $() style command substitution should be emphasized as the default style (and standard conforming construct). Why are back-ticks still popular? Because they save one character in typing, and they are arguably less heavy on the eye.
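The nesting point from the question is easy to demonstrate. Both of these print share :
$ echo "$(basename "$(dirname /usr/share/doc)")"
$ echo "`basename \`dirname /usr/share/doc\``"
The back-tick version needs escaped inner back-ticks and breaks down once the inner command needs quoting of its own, which is exactly why $() is the form worth teaching first.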
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2538/" ] }
180,383
I need to run a script by piping it through bash with wget (rather than running it directly with bash). $ wget -O - http://example.com/my-script.sh | bash It's not working because my script has read statements in it. For some reason these don't work when piping to bash: # Piping to bash works in general$ echo 'hi'hi$ echo "echo 'hi'" | bashhi# `read` works directly$ read -p "input: " varinput: <prompt># But not when piping - returns immediately$ echo 'read -p "input: " var' | bash$ Instead of prompting input: and asking for a value as it should, the read command just gets passed over by bash . Does anyone know how I can pipe a script with read to bash ?
read reads from standard input. But the standard input of the bash process is already taken by the script. Depending on the shell, either read won't read anything because the shell has already read and parsed the whole script, or read will consume unpredictable lines in the script. Simple solution: bash -c "$(wget -O - http://example.com/my-script.sh)" More complex solution, more for education purposes than to illustrate a good solution for this particular scenario: echo '{ exec </dev/tty; wget -O - http://example.com/my-script.sh; }' | bash
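If you control the script itself, the most robust fix is to point the read at the terminal directly, so it no longer competes with the script text on stdin:
read -p "input: " var < /dev/tty
The rest of the script keeps reading from the pipe, while the prompt still reaches (and reads from) the user's terminal.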
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10039/" ] }
180,400
I don't need the manpages and documentations on my debian server. Is it save to empty that folder completely to free up some disk-space, by replacing all files in that folder with empty dummy files. Or is there a better way to uninstall all manpages and documentations? So far I installed localepurge which already uninstalled all unused locales and could also uninstall my german locales but I would like to keep some German localisation. With "safe" I mean not totally safe, but the same "safeness" like I have using localepurge (which never caused any problem so far)
It should be fine to delete files in /usr/share/doc on Debian-based systems. The Debian policy explicitly specifies in section 12.3: Packages must not require the existence of any files in /usr/share/doc/ in order to function. [...] The system administrator should be able to delete files in /usr/share/doc/ without causing any programs to break. As the package manager is also a program, it should handle this situation (missing files) properly. You may, however, need to purge /usr/share/doc by hand again after updates. The answers to this Ubuntu question explain how disk space can be saved and how the package manager can be configured properly on Debian-based systems. As copyright files are also stored in /usr/share/doc, such modified systems normally may not be redistributed unless the copyright files are bundled some other way.
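The dpkg configuration those answers describe boils down to a small filter file; a sketch (filename assumed, and the copyright caveat above still applies):
# /etc/dpkg/dpkg.cfg.d/01_nodoc
path-exclude /usr/share/doc/*
path-include /usr/share/doc/*/copyright
With this in place, dpkg skips documentation files on future installs and upgrades, so the manual purge need not be repeated.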
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/180400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
180,413
Each time, when the command ls -l /proc/self is executed, the link points to process who's PID keeps increasing. Why is this so ? Is it the PID of the ls command ?
Yes, that's the PID of ls . ls is an external command (POSIX defines it as a standalone utility, not a shell builtin), so anytime you run ls , the shell must create a new process and run ls in that process. To do that, the shell calls the execve() system call in the forked child:
$ strace ls -l /proc/self
execve("/bin/ls", ["ls", "-l", "/proc/self"], [/* 76 vars */]) = 0
You can see that after the new process was created, /proc/self belongs to the context of that process, so it resolves to the PID of ls .
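Easy to verify from a shell (PIDs illustrative):
$ echo $$
1234
$ readlink /proc/self
1301
readlink is an external command too, so it reports its own freshly forked PID rather than the shell's.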
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39667/" ] }
180,456
In Linux distributions, some packages create user accounts. How can I determine which package created a given user? I want to know specifically for Fedora and Ubuntu, but answers for other distributions are welcome.
On Debian-based systems (including Ubuntu), packages create users using maintainer scripts , usually postinst . Therefore one way could be to grep through these scripts: grep -R --include='*.postinst' -e useradd -e adduser /var/lib/dpkg/info/ This assumes, of course, that the postinst script hasn't been deleted (either manually or because you uninstalled the package in question). Debian policy seems to favour postinst : [Y]ou must arrange for your package to create the user or group if necessary using adduser in the preinst or postinst script (again, the latter is to be preferred if it is possible). The package maintainer can use preinst as well, as long as adduser is a pre-dependency. The policy also leads us to the other source of accounts: the base-passwd package, as it states in the preceding paragraph: If you need a statically allocated id, you must ask for a user or group id from the base-passwd maintainer, and must not release the package until you have been allocated one. Once you have been allocated one you must either make the package depend on a version of the base-passwd package with the id present in /etc/passwd or /etc/group , or arrange for your package to create the user or group itself with the correct id (using adduser ) in its preinst or postinst . (Doing it in the postinst is to be preferred if it is possible, otherwise a pre-dependency will be needed on the adduser package.) The base-passwd documentation ( /usr/share/doc/base-passwd/users-and-groups.txt.gz or /usr/share/doc/base-passwd/users-and-groups.html ) says: The Debian base-passwd package contains the master versions of /etc/passwd and /etc/group. The update-passwd tool keeps the entries in these master files in sync on all Debian systems. They comprise only "global static" ids: that is, those which are reserved globally for the benefit of packages which need to include files owned by those users or groups, or need the ids compiled into binaries. The users/groups included are (grepped out from /usr/share/doc/base-passwd/users-and-groups.txt.gz ): Users (usually with corresponding groups): root, man, majordom, irc, gdm, daemon, lp, postgres, gnats, saned, bin, mail, www-data, nobody, klog, sys, news, backup, messagebus, syslog, sync, uucp, operator, postfix, games, proxy, list, haldaemon. Groups (without corresponding users): adm, fax, audio, staff, sshd, tty, voice, src, users, fetchmail, disk, cdrom, shadow, lpadmin, cupsys, kmem, floppy, utmp, sasl, nogroup, dialout, tape, video, scanner, dip, sudo, plugdev, ssh. The package README ( /usr/share/doc/base-passwd/README ) also lists out some users with UIDs in the 60000-64999 range, and states that these are created by the respective packages.
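For the Fedora side of the question, the same grep-the-scriptlets idea applies to RPM; a sketch over all installed packages:
rpm -qa | while read -r p; do
  rpm -q --scripts "$p" | grep -q -e useradd -e adduser && echo "$p"
done
rpm -q --scripts prints a package's pre/post-install scriptlets, which is where Fedora packages conventionally call useradd .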
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180456", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28487/" ] }
180,472
"Enter test: "read testif [[ $test == "a" ]]; then echo "worked"else echo "failed"fi It's a simple illustration of test I'm doing, but if I enter "A", it will fail. Is there anything I can do at the variable stage to change it all to small case, so that the test will match?
There are several useful ways to achieve this (in bash ):
Two checks:
echo -n "Enter test: "
read test
if [[ $test == "a" || $test == "A" ]]; then
  echo "worked"
else
  echo "failed"
fi
Make the input lower case:
echo -n "Enter test: "
read test
test="${test,,}"
if [[ $test == "a" ]]; then
  echo "worked"
else
  echo "failed"
fi
Regex for both cases:
echo -n "Enter test: "
read test
if [[ $test =~ ^[aA]$ ]]; then
  echo "worked"
else
  echo "failed"
fi
Make the shell ignore the case:
echo -n "Enter test: "
read test
shopt -s nocasematch
if [[ $test == a ]]; then
  echo "worked"
else
  echo "failed"
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/180472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98500/" ] }