206,323
I have a list that I've copied and I want to paste it into the shell to give me just the repeated lines.

1
1
3
2

As I read about bash commands, I've made this:

cat > /tmp/sortme ; diff <(sort /tmp/sortme) <(sort -u /tmp/sortme)

When I write the above command, I paste my list and press CTRL+Z to stop cat, and it shows me the repeated lines. I don't want to compare files, just pasted input of several rows. Now to the question: is there any way to turn that command into a script? Because when I try to make it a script, CTRL+Z stops it. PS: Please don't laugh. This is my first time trying. Till now, just reading. :)
This question is already answered in man 7 file-hierarchy, which comes with systemd (there is also an online version):

/etc
    System-specific configuration. (…)

VENDOR-SUPPLIED OPERATING SYSTEM RESOURCES

/usr
    Vendor-supplied operating system resources. Usually read-only, but this is not required. Possibly shared between multiple hosts. This directory should not be modified by the administrator, except when installing or removing vendor-supplied packages.

Basically, files that ship in packages downloaded from the distribution repository go into /usr/lib/systemd/. Modifications done by the system administrator (user) go into /etc/systemd/system/. System-specific units override units supplied by vendors. Using drop-ins, you can override only specific parts of unit files, leaving the rest to the vendor (drop-ins have been available since the very beginning of systemd, but were properly documented only in v219; see man systemd.unit).
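A minimal sketch of the drop-in approach described above, assuming a hypothetical vendor unit named foo.service (the unit name and the overridden setting are illustrative, not from the original answer):

# Create a drop-in directory for the (hypothetical) vendor unit foo.service
sudo mkdir -p /etc/systemd/system/foo.service.d

# Override only one setting; everything else still comes from the vendor file
sudo tee /etc/systemd/system/foo.service.d/override.conf <<'EOF'
[Service]
Restart=always
EOF

# Make systemd re-read unit files
sudo systemctl daemon-reload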
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/206323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114652/" ] }
206,350
In shell scripts, one specifies the language interpreter on the shebang (#!) line. As far as I know, it is recommended to use #!/usr/bin/env bash because env is always located in the /usr/bin directory, while the location of bash may vary from system to system. However, are there any technical differences between starting bash directly with /bin/bash and starting it through the env utility? In addition, am I correct that, if I do not specify any variables for env, bash is started in an unmodified environment?
In one sense, using env could be considered "portable" in that the path to bash is not relevant (/bin/bash, /usr/bin/bash, /usr/local/bin/bash, ~/bin/bash, or whatever path), because it is looked up at run time. In this way, a script author can make his script easier to run on many different systems.

In another sense, using env to find bash or any other shell or command interpreter is considered a security risk, because an unknown binary (malware) might be used to execute the script. In those environments, and sometimes by managerial policy, the path is specified explicitly with a full path: #!/bin/bash. In general, use env unless you know you are writing for one of these environments that scrutinize the minute details of risk.

When Ubuntu first started using dash, many scripts were broken by that action. There was discussion about it on askubuntu.com. Most scripts were written with #!/bin/sh, which was a link to /bin/bash. The consensus was this: the script writer is responsible for specifying the interpreter. Therefore, if your script should always be invoked with bash, specify it from the environment. This saves you having to guess the path, which differs across Unix/Linux systems. In addition, it will keep working if tomorrow /bin/sh becomes a link to some other shell like /bin/newsh.

Another difference is that the env method won't allow the passing of arguments to the interpreter.
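A quick way to see that last limitation in action on a Linux box (the exact error text may vary with the coreutils version):

#!/bin/bash -eu          # fine: the kernel passes "-eu" to bash as one argument
#!/usr/bin/env bash -eu  # fails on Linux: env receives the single string "bash -eu"
                         # and reports something like:
                         # /usr/bin/env: 'bash -eu': No such file or directory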
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/206350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
206,386
When I run netstat --protocol unix or lsof -U, I see that some unix socket paths are prefixed with an @ symbol, for example @/tmp/dbus-qj8V39Yrpa. But when I then run ls -l /tmp, I don't see a file named dbus-qj8V39Yrpa there. The question is: what does that leading @ symbol denote? And a second, related question: where can I actually find that unix socket file (@/tmp/dbus-qj8V39Yrpa) on the filesystem?
The @ probably indicates a socket held in an abstract namespace, which doesn't belong to a file in the filesystem. Quoting from The Linux Programming Interface by Michael Kerrisk:

57.6 The Linux Abstract Socket Namespace

The so-called abstract namespace is a Linux-specific feature that allows us to bind a UNIX domain socket to a name without that name being created in the file system. This provides a few potential advantages:

We don't need to worry about possible collisions with existing names in the file system.
It is not necessary to unlink the socket pathname when we have finished using the socket. The abstract name is automatically removed when the socket is closed.
We don't need to create a file-system pathname for the socket. This may be useful in a chroot environment, or if we don't have write access to a file system.

To create an abstract binding, we specify the first byte of the sun_path field as a null byte (\0). [...]

Displaying a leading null byte to denote this type of socket would be difficult, which is probably the reason for the leading @ sign.
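To address the second question concretely: there is no file to find, but the abstract sockets themselves can be listed; a sketch (both commands are standard on Linux):

ss -lx | grep '@'        # listening unix sockets; abstract ones show the leading @
grep '@' /proc/net/unix  # the same information straight from the kernel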
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/206386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58428/" ] }
206,405
I want to execute a Bash function at a scheduled time. I think the right tool for this is the at command. However, it doesn't seem to be working.

function stupid () {
    date
}
export -f stupid
cat << EOM | at now + 1 minute
stupid
EOM

After waiting the minute, I check my mail and this is the error I see:

sh: line 41: syntax error near unexpected token `=\(\)\ {\ \ date""}'
sh: line 41: `"}; export BASH_FUNC_stupid()'

I don't understand what's going wrong here, especially since I know the function works:

$ stupid
Fri May 29 21:05:38 UTC 2015

Looking at the error, I think the wrong shell is being used to execute the function (sh as opposed to bash), but if I check $SHELL I see it points to /bin/bash, and the man page for at says:

$ echo $SHELL
/bin/bash
$ man at
...
SHELL  The value of the SHELL environment variable at the time of at invocation will determine which shell is used to execute the at job commands.

So Bash should be the shell running my function. What's going on here? Is there a way to get my Bash function to run with at?
Bash functions are exported via the environment. The at command makes the environment, the umask and the current directory of the calling process available to the script by generating shell code that reproduces the environment. The script executed by your at job is something like this:

#!/bin/bash
umask 022
cd /home/nick
PATH=/usr/local/bin:/usr/bin:/bin; export PATH
HOME=/home/nick; export HOME
…
stupid

Under older versions of bash, functions were exported as a variable with the name of the function and a value starting with () and consisting of code to define the function, e.g.

stupid="() { date
}"; export stupid

This made many scenarios vulnerable to a security hole, the Shellshock bug (found by Stéphane Chazelas), which allowed anyone able to inject the content of an environment variable under any name to execute arbitrary code in a bash script. Versions of bash with a Shellshock fix use a different way: they store the function definition in a variable whose name contains characters that are not found in environment variables and that shells do not parse as assignments.

BASH_FUNC_stupid%%="() { date
}"; export stupid

Due to the %, this is not valid sh syntax, not even in bash, so the at job fails, whether it attempts to use the function or not. The Debian version of at, which is used in many Linux distributions, was changed in version 3.16 to export only variables that have valid names in shell scripts. Thus newer versions of at don't pass post-Shellshock bash exported functions through, whereas older ones error out.

Even with pre-Shellshock versions of bash, the exported function only works in bash scripts launched from the at job, not in the at job itself. In the at job itself, even if it's executed by bash, stupid is just another environment variable; bash only imports functions when it starts.

To export functions to an at job regardless of the bash or at version, put them in a file and source that file from your job, or include them directly in your job. To print out all defined functions in a format that can be read back, use declare -f.

{ declare -f; cat << EOM; } | at now + 1 minute
stupid
EOM
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70630/" ] }
206,406
I just switched to Fedora 22 and decided to go for Gnome3 this time instead of XFCE. I like the Thunar file manager so I downloaded it. I've had trouble setting it as the default file manager though. What I tried: I ran exo-preferred-applications . Under Utilities -> File Manager I selected Thunar . The problem: It's not acting as the default file manager. When I open my Downloads folder from my browser I still get Nautilus . Same when I click on my "Places" .
You could try editing /usr/share/applications/defaults.list and changing the line

inode/directory=nemo.desktop;caja.desktop;nautilus.desktop;Thunar.desktop;kde4-dolphin.desktop

to

inode/directory=Thunar.desktop;

or some order that suits your needs.
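A per-user alternative that avoids editing system-wide files; a sketch assuming the xdg-utils package is installed:

xdg-mime default Thunar.desktop inode/directory   # make Thunar handle directories
xdg-mime query default inode/directory            # verify: should print Thunar.desktop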
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89986/" ] }
206,415
I was wondering if there is a way to use Samba to send items to a client machine via the command line (I need to send the files from the Samba server). I know I could always use scp, but first I wanted to see if there is a way to do it with Samba. Thanks!
Use smbclient, a program that comes with Samba:

$ smbclient //server/share -c 'cd c:/remote/path ; put local-file'

There are many flags, such as -U to allow the remote user name to be different from the local one. On systems that split Samba into multiple binary packages, you may have the Samba servers installed yet still be missing smbclient. In such a case, check your package repository for a package named smbclient, samba-client, or similar.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/206415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96115/" ] }
206,418
By default bash loads lines from ~/.bash_history to history. Is it possible to add custom file to be also loaded? I want to inject file containing commands I frequently use and access them it via built-in search.
I keep it simple with an alias loaded from my Bash profile:

alias h='history|grep'

So my workflow is h command, for example:

# h hpssa
202  05-28-2015 11:54:33 hpssacli
217  05-28-2015 11:54:33 hpssa -start
225  05-28-2015 11:54:33 hpssacli -stop
226  05-28-2015 11:54:33 hpssa -stop
228  05-28-2015 11:54:33 hpssa -start

If I want to run "hpssa -stop", I'd simply type:

!226

This is just my approach, but maybe you could modify how you're recalling history items. I don't think it makes sense to actually inject data into the history file.
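That said, if you do want to pull a file of favourite commands into the running session's history (so that the built-in Ctrl+R search finds them), bash's history builtin can read one; a sketch assuming a hypothetical ~/.bash_favourites file with one command per line:

history -r ~/.bash_favourites   # append the file's lines to the in-memory history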
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67717/" ] }
206,441
$ grep "apple" fruits.txtapple$ if [ $? == 0 ] ; then echo "A"; else echo "B"; fiA When I execute the above commands it works fine but when I run these commands as a shell script it throws error and prints B. Why is it so ? $ sh temp.sh appletemp.sh: 3: [: 0: unexpected operatorB
The problem is that you are using == as the equality operator (which bash supports) but then running the script under sh, which doesn't support it. The solution is to replace:

if [ $? == 0 ] ; then echo "A"; else echo "B"; fi

With:

if [ $? = 0 ] ; then echo "A"; else echo "B"; fi

This will work under both bash and plain sh.

While it may make no difference in this particular case, = is a string comparison. For testing for numeric equality, one should use -eq:

if [ $? -eq 0 ] ; then echo "A"; else echo "B"; fi

Example with ==

Consider this script file:

$ cat script.sh
grep "apple" fruits.txt
if [ $? == 0 ] ; then echo "A"; else echo "B"; fi

It runs under bash:

$ bash script.sh
apple
A

It fails under dash (dash is the default sh on debian-like systems):

$ dash script.sh
apple
script.sh: 2: [: 0: unexpected operator
B

Except for the line number, you can see that this error message matches what you are seeing.

Simplifications

There is no need to access $? directly. The code can be simplified to:

$ if grep "apple" fruits.txt; then echo "A"; else echo "B"; fi
apple
A

Or, using the logical-and (&&) and logical-or (||) operators:

grep "apple" fruits.txt && echo A || echo B

The above works because echo A, if executed, will always return success.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206441", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30355/" ] }
206,449
I'm customizing my zsh PROMPT and calling a function that may or may not echo a string based on the state of an environment variable:

function my_info {
  [[ -n "$ENV_VAR" ]] && echo "Some useful information\n"
}

local my_info='$(my_info)'
PROMPT="${my_info}My awesome prompt $>"

I would like the info to end on a trailing newline, so that if it is set, it appears on its own line:

Some useful information
My awesome prompt $>

However, if it's not set, I want the prompt to be on a single line, avoiding an empty line caused by an unconditional newline in my prompt:

                        # <= Don't want that :)
My awesome prompt $>

Currently I work around the $(command substitution) removing my newline by suffixing it with a non-printing character, so the newline isn't trailing anymore:

[[ -n "$ENV_VAR" ]] && echo "Some useful information\n\r"

This is obviously a hack. Is there a clean way to return a string that ends on a newline?

Edit: I understand what causes the loss of the trailing newline and why that happens, but in this question I would specifically like to know how to prevent that behaviour (and I don't think this workaround applies in my case, since I'm looking for a "conditional" newline).

Edit: I stand corrected: the referenced workaround might actually be a rather nice solution (since suffixing strings in comparisons is a common and somewhat similar pattern), except I can't get it to work properly:

echo "Some useful information\n"x
[...]
PROMPT="${my_info%x}My awesome prompt $>"

does not strip the trailing x for me.

Edit: Adjusting the proposed workaround for the weirdness that is prompt expansion, this worked for me:

function my_info {
  [[ -n "$ENV_VAR" ]] && echo "Some useful information\n"x
}

local my_info='${$(my_info)%x}'
PROMPT="$my_info My awesome prompt $>"

You be the judge whether this is a better solution than the original one. It's a tad more explicit, I think, but it also feels a bit less readable.
Final newlines are removed from command substitutions. Even zsh doesn't provide an option to avoid this. So if you want to preserve final newlines, you need to arrange for them not to be final newlines. The easiest way to do this is to print an extra character (other than a newline) after the data that you want to obtain exactly, and remove that final extra character from the result of the command substitution. You can optionally put a newline after that extra character; it'll be removed anyway. In zsh, you can combine the command substitution with the string manipulation to remove the extra character.

my_info='${$(my_info; echo .)%.}'
PROMPT="${my_info}My awesome prompt $>"

In your scenario, take care that my_info is not the output of the command; it's a shell snippet to get the output, which will be evaluated when the prompt is expanded. PROMPT=${my_info%x}… didn't work because that tries to remove a final x from the value of the my_info variable, but it ends with ).

In other shells, this needs to be done in two steps:

output=$(my_info; echo .)
output=${output%.}

In bash, you wouldn't be able to call my_info directly from PS1; instead you'd need to call it from PROMPT_COMMAND.

PROMPT_COMMAND='my_info=$(my_info; echo .)'
PS1='${my_info%.}…'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116887/" ] }
206,456
What process do I use to extract a Debian binary package from a Debian/Ubuntu based distribution's ISO image?
Yes, it is possible to extract a deb directly from a distro's ISO image (the ISO of an installation disk). Follow these steps.

Mount the ISO (live CD version) as a virtual system (rooted at /):

Mount the ISO on /media/cdrom:

sudo mkdir /media/cdrom
sudo mount -o loop /path/to/iso /media/cdrom

Mount filesystem.squashfs on /mnt:

sudo mount -o loop /media/cdrom/casper/filesystem.squashfs /mnt

Now the virtual system from the ISO image is mounted (read only) and rooted at /mnt.

Get the required deb(s) with the dpkg-repack command. Suppose I want to get package foo from the just-mounted system; then run:

dpkg-repack --root=/mnt foo

in which --root=/mnt says the system is rooted on /mnt:

--root=dir
    Take package from filesystem rooted on <dir>. This is useful if, for example, you have another computer nfs mounted on /mnt, then you can use --root=/mnt to reassemble packages from that computer.

Example of use: suppose I am running Ubuntu 14.04 LTS and I have ISO images of Xubuntu, Kubuntu, etc.; then I can get Xfce or KDE applications (that are pre-installed in the corresponding derivative) directly from the ISO. I can even get a whole desktop environment like xubuntu-desktop from Xubuntu's ISO image!

Another example: Trisquel 7.0 LTS is a derivative of Ubuntu 14.04 LTS, a completely free distro with some helpful packages pre-installed, like gimp. If you have an ISO image of Trisquel, you can get gimp directly from the ISO image and install it on Ubuntu!

Important notes that may help in getting the required debs (example of getting gimp from the ISO):

Use sudo apt-get install -s gimp | grep Inst | awk '{print $2}' > pkgreq to list the required packages, and finally run cat pkgreq | xargs sudo dpkg-repack --root=/mnt to get those deb(s)!

Note: this works cleanly between the same versions of a distribution and its derivatives (like Trisquel 7.0 and Ubuntu 14.04, or the same Ubuntu version's Xfce/KDE flavours); for a different version and/or derivative there may be dependency issues to solve.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206456", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
206,513
I was running a Python script that malfunctioned and used sudo to create a file named >. How can I get rid of this file? Of course, when I try sudo rm >, I get the error bash: syntax error near unexpected token 'newline', because the shell thinks I'm trying to redirect the output of rm. The file's permissions are -rw-r--r--.
Any of these should work:

sudo rm \>
sudo rm '>'
sudo rm ">"
sudo find . -name '>' -delete
sudo find . -name '>' -exec rm {} +

Note that the last two commands, those using find, will find all files or directories named > in the current folder and all its subfolders. To avoid that, use GNU find:

sudo find . -maxdepth 1 -name '>' -delete
sudo find . -maxdepth 1 -name '>' -exec rm {} +
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/206513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117416/" ] }
206,520
Suppose I have a pipe separated file like:

|Sr|Fruits|Colors|
|1 |apple |red|
|2 |orange |orange|
|3 |grapes |purple|

Here it is evident using awk that $2 is the Fruits column and $3 is the Colors column. In future, if the order of the columns changes, is it possible to determine the column number using the string? I.e. that Colors is $3 and Fruits is $2?
You can try:

$ awk -F'|' '{
   for(i=1;i<=NF;i++) {
      if($i == "Fruits") printf("Column %d is Fruits\n", i-1)
      if($i == "Colors") printf("Column %d is Colors\n", i-1)
   }
   exit 0
}' file
Column 2 is Fruits
Column 3 is Colors

Note that the actual awk field numbers for Fruits and Colors are $3 and $4, since the leading | makes $1 empty; hence the i-1 above.
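A slightly more general sketch builds a name-to-field map from the header line, so later rules can refer to columns by name rather than position:

awk -F'|' '
NR == 1 {                                  # header line
    for (i = 1; i <= NF; i++) col[$i] = i  # e.g. col["Fruits"] = 3
    next                                   # (field 1 is empty due to the leading |)
}
{ print $(col["Fruits"]), $(col["Colors"]) }   # look fields up by name
' file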
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116995/" ] }
206,540
I built Alpine Linux in a Docker container with the following Dockerfile:

FROM alpine:3.2
RUN apk add --update jq curl && rm -rf /var/cache/apk/*

The build ran successfully:

$ docker build -t collector .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM alpine:3.2
3.2: Pulling from alpine
8697b6cc1f48: Already exists
alpine:3.2: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:eb84cc74347e4d7c484d566dec8a5eef82bab1b78308b92cda559bcff29c27cc
Status: Downloaded newer image for alpine:3.2
 ---> 8697b6cc1f48
Step 1 : RUN apk add --update jq curl && rm -rf /var/cache/apk/*
 ---> Running in 888571296e79
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
(1/11) Installing run-parts (4.4-r0)
(2/11) Installing openssl (1.0.2a-r1)
(3/11) Installing lua5.2-libs (5.2.4-r0)
(4/11) Installing lua5.2 (5.2.4-r0)
(5/11) Installing ncurses-terminfo-base (5.9-r3)
(6/11) Installing ncurses-widec-libs (5.9-r3)
(7/11) Installing lua5.2-posix (33.3.1-r2)
(8/11) Installing ca-certificates (20141019-r2)
(9/11) Installing libssh2 (1.5.0-r0)
(10/11) Installing curl (7.42.1-r0)
(11/11) Installing jq (1.4-r0)
Executing busybox-1.23.2-r0.trigger
Executing ca-certificates-20141019-r2.trigger
OK: 9 MiB in 26 packages
 ---> 7625779b773d
Removing intermediate container 888571296e79
Successfully built 7625779b773d

Anyway, when I run date -d it fails:

$ docker run -i -t collector sh
/ # date -d yesterday
date: invalid date 'yesterday'
/ # date -d now
date: invalid date 'now'
/ # date -d next-month
date: invalid date 'next-month'

while the rest of the options seem to work fine:

/ # date
Sat May 30 18:57:24 UTC 2015
/ # date +"%A"
Saturday
/ # date +"%Y-%m-%dT%H:%M:%SZ"
2015-05-30T19:00:38Z
The BusyBox/Alpine version of date doesn't support the -d option, even though its help text is exactly the same as in Ubuntu and other fatter distros. The "containerization" isn't missing anything here, either. To make -d work you just need to add the coreutils package:

$ cat Dockerfile.alpine-coreutils
FROM alpine:3.2
RUN apk add --update coreutils && rm -rf /var/cache/apk/*

$ docker build -t alpine-coreutils - < Dockerfile.alpine-coreutils
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM alpine:3.2
3.2: Pulling from alpine
8697b6cc1f48: Already exists
alpine:3.2: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:eb84cc74347e4d7c484d566dec8a5eef82bab1b78308b92cda559bcff29c27cc
Status: Downloaded newer image for alpine:3.2
 ---> 8697b6cc1f48
Step 1 : RUN apk add --update coreutils && rm -rf /var/cache/apk/*
 ---> Running in 694fa5cb271c
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
(1/3) Installing libattr (2.4.47-r3)
(2/3) Installing libacl (2.2.52-r2)
(3/3) Installing coreutils (8.23-r0)
Executing busybox-1.23.2-r0.trigger
OK: 12 MiB in 18 packages
 ---> a7d9116a00ee
Removing intermediate container 694fa5cb271c
Successfully built a7d9116a00ee

$ docker run -i -t alpine-coreutils sh
/ # date -d last-week
Sun May 24 09:19:34 UTC 2015
/ # date -d yesterday
Sat May 30 09:19:46 UTC 2015
/ # date
Sun May 31 09:19:50 UTC 2015

The image size doubles, but is still only 11.47 MB, more than an order of magnitude smaller than standard Debian:

$ docker images
REPOSITORY         TAG      IMAGE ID       CREATED         VIRTUAL SIZE
alpine-coreutils   latest   a7d9116a00ee   2 minutes ago   11.47 MB
alpine             3.2      8697b6cc1f48   2 days ago      5.242 MB
debian             latest   df2a0347c9d0   11 days ago     125.2 MB

Thanks to Andy Shinn: https://github.com/gliderlabs/docker-alpine/issues/40#issuecomment-107122371
And to Christopher Horrell: https://github.com/docker-library/official-images/issues/771#issuecomment-107101595
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/206540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8947/" ] }
206,542
I want to know of a command that will show me how busy the file system currently is. I'm assuming such a command exists. With such a command, are there specific arguments that I should know about? Also, is there a separate command that will tell me what the load average is? How do I do this on Linux?
vmstat 1 will poll overall information every second, including IO load (see the bi and bo columns for input and output).
iostat 1 will provide information more directly focused on IO.
iotop will provide this information on a per-process level, assuming a modern kernel with appropriate configuration (see its home page).
dstat is a swiss-army-knife tool combining information available from many of the above.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117437/" ] }
206,556
Each line in a comma-separated file has 5 fields.

a,b,c,d,e
f,g,c,i,
j,k,c,m,n
o,p,c,r,s
t,u,c,w,
x,y,z,aa,bb

How can I extract the lines which have c in the 3rd field and whose 5th field is NOT empty? The result would be:

a,b,c,d,e
j,k,c,m,n
o,p,c,r,s
A possible solution with awk:

awk -F',' '$3 == "c" && $5' file

Depending on the actual data this may not work as desired, as mentioned in the comments (thanks Janis for pointing this out: it will miss f,g,c,i,0, i.e. a line whose 5th field is 0), so you can do the following:

awk -F',' '$3 == "c" && $5 != ""' file

And as this is the accepted answer, I am adding the not-so-obvious trick of forcing the 5th field to a string (as in cuonglm's (+1) solution):

awk -F',' '$3 == "c" && $5""' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206556", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117446/" ] }
206,594
I have a directory exam with 2 files in it. I need to delete the files, but permission is denied. Even the rm -rf command can't delete these files, and I am logged in as the root user.
From the root user, check the attributes of the files:

# lsattr

If you notice i (immutable) or a (append-only), remove those attributes:

# man chattr
# chattr -i [filename]
# chattr -a [filename]
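A hedged example of what that looks like in practice, on a hypothetical file named locked.txt (the exact lsattr flag layout varies between e2fsprogs versions):

# lsattr locked.txt
----i--------e-- locked.txt
# rm -f locked.txt
rm: cannot remove 'locked.txt': Operation not permitted
# chattr -i locked.txt    # drop the immutable flag
# rm -f locked.txt        # now succeeds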
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/206594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
206,657
The command below works for deleting the first 3 lines:

sed -i -e 1,3d t.txt

So I tried substituting the 3 with a variable in a script as below:

NrLines=$(wc -l t.txt)
sed -i -e 1,"$NrLines{d}" t.txt

and get the following error:

sed: -e expression #1, char 13: unexpected `}'

What am I doing wrong?
Command substitution and the braces are misused. wc -l t.txt prints the file name after the line count (e.g. 3 t.txt), which garbles the sed expression; redirect the file into wc to get just the number, and make d part of the address expression:

NrLines=$(wc -l < t.txt)
sed -i -e 1,"${NrLines}d" t.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117503/" ] }
206,677
I have a file1.txt

USA Joe 123.123.123
Russia Marry 458.786.892
Canada Greg 151.844.165
Latvia Grace 125.895.688

and file2.txt

1 123.123.123
2 151.844.165
3 465.879.515

and I want to create a new file result.txt containing only those lines whose addresses (xxx.xxx.xxx) appear in both file1 and file2, so my result should be:

USA Joe 123.123.123
Canada Greg 151.844.165

I need to use awk, but how do I use it on both files?
You can try:

awk 'FNR==NR{a[$2];next};$NF in a' file2.txt file1.txt > result.txt
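The same one-liner, unpacked with comments (behaviour unchanged):

awk '
FNR == NR {   # true only while reading the first named file, file2.txt
    a[$2]     # remember each address (its 2nd field) as an array key
    next      # do not let file2 lines reach the rule below
}
$NF in a      # for file1.txt: print lines whose last field is a known address
' file2.txt file1.txt > result.txt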
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111696/" ] }
206,730
I am running through some system upgrades and my package manager is showing changes between the upstream /etc/shadow and mine. I would like to put some comments in the file for next time this happens. How would I accomplish putting comments in the /etc/shadow file without breaking things. I am thinking default "#" would likely do it, but if I get this wrong the reboot won't be that enjoyable.
On Linux systems using GNU libc, lines starting with # are ignored in /etc/shadow . The parsing is done by __fgetspent_r() , and its source code explicitly handles (and documents) this behaviour. So on the vast majority of Linux systems you can comment lines in /etc/shadow with # without causing problems. Unfortunately comments are dropped when /etc/shadow is updated, e.g. by passwd ; so storing comments isn't actually safe (from the comments' point of view). This means you need to find somewhere else to store your comments: two good suggestions are dr01 's idea of using /etc/shadow.README , or better yet Gilles ' idea of using commit messages with etckeeper .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32340/" ] }
206,733
I'm about to compress a large directory and I want to know how large, exactly, the resulting file will be. I've tried using du:

$ tar -cv dir | du -h -
du: cannot access '-': No such file or directory

Then I tried using the file version of '-':

$ tar -cv dir | du -h /dev/stdin
1.0K

I'm certain this number isn't accurate. How can I get the size of stdin?
tl;dr:

tar -cv dir | wc -c | awk '{print $1/1000"K"}'

du doesn't actually count the size of the file itself. It just asks the kernel to query the filesystem, which already keeps track of file size. This is why it's so fast. Because of that, and the fact that you're counting a stream, not a file, du doesn't work. My guess is that 1.0K is a hardcoded size for /dev/std* in the kernel.

The solution is to use wc -c, which counts bytes itself instead of querying the kernel:

$ tar -cv dir | wc -c

If you want output similar to du -h:

$ tar -cv dir | wc -c | awk '{print $1/1000"K"}'

The awk turns the number into a human-readable result.
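If GNU coreutils is available, numfmt can do the humanising instead of awk (note the awk version above divides by 1000; numfmt --to=iec uses 1024-based units, matching du -h):

$ tar -c dir | wc -c | numfmt --to=iec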
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29146/" ] }
206,761
I have two computers. On the first I created the account admin1 with password admin1. Then I logged in as the root user and used the ssh-keygen -t rsa command. I did not give a passphrase and hit Enter three times. In the next step I typed ssh root@<remote user's IP>. Then I logged out and tried to connect to the second computer, but it asked for the password again... Next, I used the commands:

# useradd admin1
# passwd admin1
# ssh-keygen -t rsa
# ssh root@<remote user's IP>

Why does it ask for the password again?
Generating a key doesn't automatically allow you to log in with it to remote machines. You need to copy the corresponding public key to the machines you want to access, like this: ssh-copy-id [email protected] This operation will ask you for user 's password on remote.machi.ne , but after that you'll be able to ssh with your key: ssh -l user remote.machi.ne If you didn't set a password for your key, ssh will no longer ask you to enter one. On a side note: it looks like you had an old ssh key that you just overwrote.
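If ssh-copy-id isn't available, the same effect can be achieved by hand; a sketch assuming a default RSA key (remote.machi.ne as above):

cat ~/.ssh/id_rsa.pub | ssh [email protected] \
  'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'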
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
206,779
I've developed an application that uses NTP to change the network time, to sync two of my computers. It runs as root, since only root is allowed to change the time and date on Linux (I guess). Now I want to run it as a regular user, but I still need to be able to set the time. Is it good practice to run a daemon under a non-root user account? Should I give my application a capability such as CAP_SYS_TIME? Doesn't that introduce a security vulnerability? Is there a better way?
Is it good practice to run a daemon under a non-root user account?

Yes, and this is common. For instance, Apache starts as root and then forks new processes as www-data (by default). As said before, if your program is hacked (e.g. via code injection), the attacker will not gain root access, but will be limited to the privileges you gave to this specific user.

Shall I give it a capability such as CAP_SYS_TIME?

It is a good idea, since you avoid using setuid and limit permissions to this very specific capability.

Is there another way that would be considered good practice?

You can increase security further; for instance:

Run the service as an unprivileged user, with no shell.
Use chroot to lock the user in its home directory.
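A sketch of the capability route on Linux, assuming a hypothetical binary /usr/local/bin/timesyncd (setcap/getcap ship in the libcap tools package on most distros):

sudo setcap cap_sys_time+ep /usr/local/bin/timesyncd  # grant only CAP_SYS_TIME
getcap /usr/local/bin/timesyncd                       # verify the granted capability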
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81372/" ] }
206,786
I'd like to make a bash script output additional information to file descriptors (FDs) greater than or equal to 3, when they are open. To test whether an FD is open, I devised the following trick:

if (printf '' 1>&3) 2>&-; then
    # File descriptor 3 is open
else
    # File descriptor 3 is not open
fi

This is sufficient for my needs, but I'm curious whether there is a more idiomatic way of testing if an FD is valid. I'm especially interested in whether there exists a mapping of the fcntl(2) syscall to a shell command, which would allow the retrieval of FD flags (O_WRONLY and O_RDWR to test whether the FD is writable, and O_RDONLY and O_RDWR to test whether the FD is readable).
In ksh (both AT&T and pdksh variants) or zsh , you can do: if print -nu3; then echo fd 3 is writeablefi They won't write anything on that fd, but still check if the fd is writable (using fcntl(3, F_GETFL) ) and report an error otherwise: $ ksh -c 'print -nu3' 3< /dev/nullksh: print: -u: 3: fd not open for writing (which you can redirect to /dev/null ). With bash , I think your only option is to check if a dup() succeeds like in your approach, though that won't guarantee that the fd is writable (or call an external utility ( zsh / perl ...) to do the fcntl() ). Note that in bash (like most shells), if you use (...) instead of {...;} , that will fork an extra process. You can use: if { true >&3; } 2> /dev/null instead to avoid the fork (except in the Bourne shell where redirecting compound commands always causes a subshell). Don't use : instead of true as that's a special builtin, so would cause the shell to exit when bash is in POSIX compliance mode. You could however shorten it to: if { >&3; } 2> /dev/null
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42056/" ] }
206,795
Is it possible to execute one script which reads through a file containing a list of packages/applications, tests whether each entry is already installed, and if not, proceeds to install it? I'm trying to find an approach which I can use to install applications on a number of virgin servers autonomously.
In ksh (both AT&T and pdksh variants) or zsh , you can do: if print -nu3; then echo fd 3 is writeablefi They won't write anything on that fd, but still check if the fd is writable (using fcntl(3, F_GETFL) ) and report an error otherwise: $ ksh -c 'print -nu3' 3< /dev/nullksh: print: -u: 3: fd not open for writing (which you can redirect to /dev/null ). With bash , I think your only option is to check if a dup() succeeds like in your approach, though that won't guarantee that the fd is writable (or call an external utility ( zsh / perl ...) to do the fcntl() ). Note that in bash (like most shells), if you use (...) instead of {...;} , that will fork an extra process. You can use: if { true >&3; } 2> /dev/null instead to avoid the fork (except in the Bourne shell where redirecting compound commands always causes a subshell). Don't use : instead of true as that's a special builtin, so would cause the shell to exit when bash is in POSIX compliance mode. You could however shorten it to: if { >&3; } 2> /dev/null
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16154/" ] }
206,798
I have a network with 20 Raspberry Pis (clients) and 1 server. I'd like to check the MAC address of a client request; is that possible? Further, is it possible without being known (or visible) to the client? NB: I aim to check that a client is not a fake/imitation. NB2: I already run a VPN, but I want to check the MAC anyway.
In ksh (both AT&T and pdksh variants) or zsh , you can do: if print -nu3; then echo fd 3 is writeablefi They won't write anything on that fd, but still check if the fd is writable (using fcntl(3, F_GETFL) ) and report an error otherwise: $ ksh -c 'print -nu3' 3< /dev/nullksh: print: -u: 3: fd not open for writing (which you can redirect to /dev/null ). With bash , I think your only option is to check if a dup() succeeds like in your approach, though that won't guarantee that the fd is writable (or call an external utility ( zsh / perl ...) to do the fcntl() ). Note that in bash (like most shells), if you use (...) instead of {...;} , that will fork an extra process. You can use: if { true >&3; } 2> /dev/null instead to avoid the fork (except in the Bourne shell where redirecting compound commands always causes a subshell). Don't use : instead of true as that's a special builtin, so would cause the shell to exit when bash is in POSIX compliance mode. You could however shorten it to: if { >&3; } 2> /dev/null
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116392/" ] }
206,817
I have a script from another person which looks like this (note: it's a single file):

#!/bin/bash
some commands
some commands
#!/bin/bash
some commands
some commands
#!/bin/bash
some commands
some commands

I am wondering what the purpose of the second and third shebangs is. Is it a mistake, or is it on purpose?
If these lines are not the beginning of embedded shell scripts being generated, i.e. inside a scheme of the form:

cat <<end_of_shell_script >dynamically_built_shell
#!/bin/bash
[...]
end_of_shell_script

then the repeated construct you found is the result of many copy-pastes of full shell scripts, without enough care or understanding of the purpose of this very special comment on line 1 of scripts, starting with #!. Be careful before using such a shell script (no sudo, no su :) ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109131/" ] }
206,823
The standard way of making new processes in Linux is that the memory footprint of the parent process is copied, and that becomes the environment of the child process until execv is called. What memory footprint are we talking about: the virtual one (what the process requested) or the resident one (what is actually being used)? Motivation: I have a device with limited swap space and an application with a big difference between its virtual and resident memory footprints. The application can't fork due to lack of memory, and I would like to see whether trying to reduce the virtual footprint size would help.
In modern systems, none of the memory is actually copied just because a fork system call is used. It is all marked read-only in the page table, so that on the first attempt to write, a trap into kernel code happens. Only once the first process attempts to write does the copying happen. This is known as copy-on-write.

However, it may be necessary to keep track of committed address space as well. If no memory or swap is available at the time the kernel has to copy a page, it has to kill some process to free memory. This is not always desirable, so it is possible to keep track of how much memory the kernel has committed to. If the kernel would commit to more than the available memory + swap, it can return an error code on the attempt to call fork. If enough is available, the kernel will commit to the full virtual size of the parent for both processes after the fork.
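The commit accounting described above is tunable on Linux; a sketch of inspecting and tightening it:

cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (default), 1 = always overcommit, 2 = strict
sudo sysctl vm.overcommit_memory=2   # strict accounting: fork()/malloc() fail up front
                                     # rather than risking the OOM killer later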
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
206,877
So I am using

TIMESTAMP=$( stat -c "%y" $JARNAME.jar )
print $TIMESTAMP

and its output is something like

2015-04-24 17:23:03.000000000 -0500

I need only the 2015-04-24 17:23:03 part.
Since you seem to be using the Linux version of stat(1) , I'll assume you also have the GNU coreutils version of date(1) : TIMESTAMP=$( date +'%Y-%m-%d %H:%M:%S' -r "$JARNAME".jar )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117383/" ] }
206,886
I want to print the previous line every time a match is found. I know about grep's -A and -B options, but my Solaris 5.10 machine doesn't support them. I want a solution using only sed.

Foo.txt:

Name is : sara
age is : 10
Name is : john
age is : 20
Name is : Ron
age is : 10
Name is : peggy
age is : 30

Out.txt:

Name is : sara
Name is : Ron

The pattern I am trying to match is age is : 10. My environment: Solaris 5.10.
$ sed -n '/age is : 10/{x;p;d;}; x' Foo.txt
Name is : sara
Name is : Ron

The above was tested on GNU sed. If Solaris' sed does not support chaining commands together with semicolons, try:

$ sed -n -e '/age is : 10/{x;p;d;}' -e x Foo.txt
Name is : sara
Name is : Ron

How it works

sed has a hold space and a pattern space. New lines are read into the pattern space. The idea of this script is that the previous line is saved in the hold space.

/age is : 10/{x;p;d;}

If the current line contains age is : 10, then do:

x: swap the pattern and hold spaces, so that the prior line is in the pattern space
p: print the prior line
d: delete the pattern space and start processing the next line

x

This is executed only on lines which do not contain age is : 10. In this case, it saves the current line in the hold space.

Doing the opposite

Suppose that we want to print the names of people whose age is not 10:

$ sed -n -e '/age is : 10/{x;d}' -e '/age is :/{x;p;d;}' -e x Foo.txt
Name is : john
Name is : peggy

The above adds a command to the beginning, /age is : 10/{x;d}, to ignore any age-10 people. The command which follows, /age is :/{x;p;d;}, now accepts all the remaining ages.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/206886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115560/" ] }
206,897
I wanted to do a range-limited search in sed, so I tried:

sed -n '195,210 /add/ p' <file>

which gave:

sed: -e expression #1, char 9: unknown command: `/'

Purely as a guess, I tried adding curly braces:

sed -n '195,210{/add/p}' <file>

and it was successful, but I'm trying to understand why it worked when the braces were added. I kind of get that, in bash, braces can delimit an expression (and possibly change its order of evaluation), but in the case of sed, what are these braces doing?

Relevant info:

$ sed --version
sed (GNU sed) 4.2.2
$ bash --version
GNU bash, version 4.3.11
Per the POSIX standard's page on sed:

    The script shall consist of editing commands of the following form:

    [address[,address]] function

    where function represents a single-character command verb from the list in Editing Commands in sed, followed by any applicable arguments.

So the first non-blank character after the address is taken as a command verb - in your particular case it's /, hence the error: char 9: unknown command: '/'.

The braces are referenced further down:

    [2addr] { editing command
              editing command
              ...
    }

    Execute a list of sed editing commands only when the pattern space is selected. … [2addr] is an indicator that the maximum number of permissible addresses is two.

To clarify a point made above, the Addresses section of sed(1) says:

    Sed commands can be given with no addresses, in which case the command will be executed for all input lines; with one address, in which case the command will be executed only for input lines which match that address; or with two addresses, in which case the command will be executed for all input lines which match the inclusive range of lines starting from the first address and continuing to the second address. Three things to note about address ranges: the syntax is addr1,addr2 (i.e., the addresses are separated by a comma); … (and other stuff not relevant to this discussion) …

The GNU info page (info sed) has a similar description of { and }, under "3.4 Often-Used Commands":

    { COMMANDS }
    A group of commands may be enclosed between { and } characters. This is particularly useful when you want a group of commands to be triggered by a single address (or address-range) match.

Otherwise said, braces are used to apply multiple commands at the same address or to nest addresses. The standard isn't very explicit here¹, but the left brace { is actually a command that starts a group of other sed commands (the group ends with a right brace }).

And, at the risk of really going TL;DR, the issue is that 195, 210, and /add/ are all addresses. No sed command can be invoked with three addresses. So the way to make your command work is to invoke the { command on the address range 195,210, and then (within that range) invoke the p command on the address /add/.

1: though if you read the entire page it is mentioned that: Command verbs other than {, a, b, c, i, r, t, w, :, and # can be followed by...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/206897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
206,903
I have a text file:

deiauk 1611516 afsdf 765
minkra 18415151 asdsf 4152
linkra sfsfdsfs sdfss 4555
deiauk1 sdfsfdsfs 1561 51
deiauk2 115151 5454 4
deiauk 1611516 afsdf ddfgfgd
luktol1 4545 4 9
luktol 1

and I want to match exactly deiauk. When I do this:

grep "deiauk" file.txt

I get this result:

deiauk 1611516 afsdf 765
deiauk1 sdfsfdsfs 1561 51
deiauk2 115151 5454 4

but I only need this:

deiauk 1611516 afsdf 765
deiauk 1611516 afsdf ddfgfgd

I know there's a -w option, but then my string has to match the whole line.
Try one of:

grep -w "deiauk" textfile
grep "\<deiauk\>" textfile
{ "score": 9, "source": [ "https://unix.stackexchange.com/questions/206903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111696/" ] }
206,909
If the ls -l command gives me a permission string like rwsr-s--x What does the 's' mean? The only sources I found mention that it can be present sometimes but do not elaborate. What does a '+' instead of a '-' mean? I have found mentions of 'extended permission' but nothing clear.
As explained by the very good and comprehensive Wikipedia page on the subject, a + (plus) suffix indicates an access control list that can grant additional permissions. Details are available with man getfacl.

Furthermore, there are three permission triads:

First triad: what the owner can do
Second triad: what the group members can do
Third triad: what other users can do

As for the characters of each triad:

First character, r: readable
Second character, w: writable
Third character, x: executable; s or t: executable and setuid/setgid/sticky; S or T: setuid/setgid or sticky, but not executable

The setuid/setgid bit basically means that, if you have permission to run the program, you run it as if you were the owning user and/or group of that program. This is helpful when you need to run a program which needs root access but also needs to work for non-root users (to change your password, for example).

The sticky bit may have different meanings depending on the system or flavor you are running and how old it is, but on Linux the wiki page states that:

[...] the Linux kernel ignores the sticky bit on files. [...] When the sticky bit is set on a directory, files in that directory may only be unlinked or renamed by root or the directory owner or the file owner.
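A quick way to see the s/S distinction from the answer above, using a hypothetical file named prog:

chmod 4755 prog && ls -l prog   # -rwsr-xr-x : setuid plus execute -> lowercase s
chmod 4655 prog && ls -l prog   # -rwSr-xr-x : setuid without execute -> uppercase S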
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117658/" ] }
206,922
I'm looking for a way to only execute a replacement when the last character is a newline, using sed. For instance:

lettersAtEndOfLine

is replaced, but this is not:

lettersWithCharacterAfter&

Since sed does not work well with newlines, it is not as simple as

$ sed -E "s/[a-zA-Z]*\n/replace/" file.txt

How can this be accomplished?
With standard sed, you will never see a newline in the text read from a file. This is because sed reads line by line, and there is therefore no newline at the end of the text of the current line in sed's pattern space. In other words, sed reads newline-delimited data, and the delimiters are not part of what a sed script sees.

Regular expressions can be anchored at the end of the line using $ (or at the beginning, using ^). Anchoring an expression at the start/end of a line forces it to match exactly there, and not just anywhere on the line. If you want to replace anything matching the pattern [A-Za-z]* at the end of the line with something, then anchor the pattern like this:

[A-Za-z]*$

This will force it to match at the end of the line and nowhere else. However, since [A-Za-z]*$ also matches nothing (for example, the empty string present at the end of every line), you need to force the matching of something, e.g. by specifying

[A-Za-z][A-Za-z]*$

or

[A-Za-z]\{1,\}$

So, your sed command line will thus be:

$ sed 's/[A-Za-z]\{1,\}$/replace/' file.txt

I did not use the non-standard -E option here because it's not strictly needed. With it, you could have written:

$ sed -E 's/[A-Za-z]+$/replace/' file.txt

It's a matter of taste.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/206922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45038/" ] }
206,973
Do the following paths point to the same disk location?

/home/username/app
/home/username
Change directory to each one in turn and look at the output of pwd -P . With the -P flag, pwd will display the physical current working directory with all symbolic links resolved.
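The same comparison can be scripted without changing directory; a sketch using GNU readlink, which resolves symlinks much like pwd -P:

if [ "$(readlink -f /home/username/app)" = "$(readlink -f /home/username)" ]; then
    echo "same disk location"
fi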
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83565/" ] }
206,995
The formatting character %s makes stat print the file size in bytes:

# stat -c'%A %h %U %G %s %n' /bin/foo
-rw-r--r-- 1 root root 45112 /bin/foo

ls can be configured to print the byte size with a thousands separator, i.e. 45,112 instead of the usual 45112:

# BLOCK_SIZE="'1" ls -lA
-rw-r--r-- 1 root root 45,112 Nov 15 2014

Can I format the output of stat similarly, so that the file size has a thousands separator? The reason I am using stat in the first place is that I need output like ls, but without the time, hence -c'%A %h %U %G %s %n'. Or is there some other way to print ls-like output without the time?
Specify the date format, but leave it empty, e.g.:

ls -lh --time-style="+"

produces:

-rwxrwxr-x 1 christian christian 8.5K a.out
drwxrwxr-x 2 christian christian 4.0K sock
-rw-rw-r-- 1 christian christian  183 t2.c
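For the original stat-based pipeline, the shell's printf can add the separator itself; a sketch (the %'d grouping flag is locale-dependent, so it assumes a locale with grouping such as en_US.UTF-8):

printf "%'d\n" "$(stat -c %s /bin/foo)"   # prints 45,112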
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/206995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
207,012
I have to make a configuration file available to a guest OS running on top of the KVM hypervisor. I have already read about folder-sharing options between host and guest in KVM with qemu and 9p virtio support. I would like to know about any simple procedure for a one-time file transfer from host to guest. Please let me know how to transfer a file while the guest OS is running, as well as a possible way to make the file available to the guest OS by the time it starts running (like packaging the file into the disk image, if possible). The host OS will be Linux.
Just hit upon two different ways:

Transfer files via the network. For example, you can run httpd on the host and use any web browser or wget/curl to download files. Probably the easiest and handiest.

Build an ISO image on the host with the files you want to transfer, then attach it to the guest's CD drive:

genisoimage -o image.iso -r /path/to/dir
virsh attach-disk guest image.iso hdc --driver file --type cdrom --mode readonly

You can use mkisofs instead of genisoimage. You can use a GUI like virt-manager instead of the virsh CLI to attach an ISO image to the guest. You need to create a VM beforehand and supply that VM's ID as guest; you can see existing VMs with virsh list --all.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31061/" ] }
207,039
My input file layout is:

mm/dd/yyyy,hh:mm,other fields

I need to format it as:

yyyy-mm-dd hh:mm:00,other fields

sample input:

01/02/1998,09:30,0.4571,0.4613,0.4529,0.4592,6042175
01/02/1998,09:45,0.4592,0.4613,0.4529,0.4571,9956023
01/02/1998,10:00,0.4571,0.4613,0.455,0.4613,8939555
01/02/1998,10:15,0.4613,0.4697,0.4571,0.4697,12823627
01/02/1998,10:30,0.4676,0.4969,0.4613,0.4906,28145145

sample output:

1998-01-02 09:30:00,0.4571,0.4613,0.4529,0.4592,6042175
etc...

I tried to use:

sed -r 's/\(^[0-9][0-9])\(\/[0-9][0-9]\/)\(\/[0-9][0-9][0-9][0-9],)/\3\1\2/g
sed -e 's/\(..\)\/\(..\)\/\(....\),\(.....\),\(.*\)/\3-\1-\2 \4:00,\5/'

Edited to include the input from the comments below:

sed -e 's#\(..\).\(..\).\(....\),\(.....\),#\3-\1-\2 \4:00,#'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117737/" ] }
207,051
From some foggy memories I thought I would "improve" the default settings when creating a Linux partition: I increased the inode size to 1024 and also turned on -O bigalloc ("This ext4 feature enables clustered block allocation"). Now, though, I can't find any concrete benefits of these settings cited on the net, and I see that with 20% disk usage I'm already using 15% of the inodes. So should I simply reformat the partition, or is there a positive to look on (or to use as justification), e.g. for directories with lots of files?
Larger inodes are useful if you have many files with a large amount of metadata. The smallest inode size has room for classical metadata: permissions, timestamps, etc., as well as the address of a few blocks for regular files, or the target of short symbolic links. Larger inodes can store extended attributes such as access control lists and SELinux contexts . If there is not enough room for the extended attributes in the inode, they have to be stored in a separate block, which makes opening the file or reading its metadata slower. Hence you should use a larger inode size if you're planning on having large amounts of extended attributes such as complex ACLs, or if you're using SELinux. SELinux is the primary motivation for larger inodes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207051", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117741/" ] }
207,053
In a BASH script I need to check whether gcc, g++, cpp, make, libpng devel, zlib devel, git, Java (including devel files), ant and pkg-config are available on Mac OS X, and if not, I need to prompt the user to install them. This is an easy task on Linux, as it has package management tools, but I have no idea how to do this in the OS X shell. E.g. on openSUSE I can use rpm -q and zypper in, on Debian-based distros dpkg-query and apt-get install. But how do I do it on a Mac?
Apple's package management system is often subject to criticism. The utility pkgutil can be used to list and query the package receipts.

List all the packages installed with Apple's installer:

pkgutil --pkgs

Regex for a package id:

pkgutil --pkgs=.\+Xcode.\+

List all the files in a package id:

pkgutil --only-files --files com.apple.pkg.update.devtools.3.2.6.XcodeUpdate

Then again, you could use lsbom and read the bom files in /var/db/receipts.

Users also install other package management systems such as MacPorts, Fink, or Homebrew, or compile their own under whatever prefix. pkgutil will not list packages installed by these methods.

If your target operating system is OS X 10.9 or OS X 10.10, then run:

gcc --version

Either the command will output the gcc version or you will be prompted to install the Xcode command line tools. gcc, g++, cpp, make, and git will be installed along with other tools.

The Java package is offered by Oracle. You could test with java -version, though you'll need to familiarize yourself with Apple's frameworks, plugins, and bundles to search for header files. pkgutil would be a good candidate for this process.

The other packages you mentioned could possibly be compiled within a shell script, or compiled, put into an Apple installer package, and then installed via a shell script. There isn't an easy method.
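For the script's actual goal (detect missing tools and prompt the user), a portable presence check that needs no package manager at all; a minimal sketch:

for cmd in gcc g++ cpp make git ant pkg-config; do
    command -v "$cmd" >/dev/null 2>&1 || echo "please install: $cmd" >&2
done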
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58256/" ] }
207,058
I have the following bash prompt string: root@LAB-VM-host:~# echo "$PS1"${debian_chroot:+($debian_chroot)}\u@\h:\w\$ root@LAB-VM-host:~# hostname LAB-VM-hostroot@LAB-VM-host:~# Now if I change the hostname from LAB-VM-host to VM-host with the hostname command, the prompt string for this bash session does not change: root@LAB-VM-host:~# hostname VM-hostroot@LAB-VM-host:~# Is there a way to update the hostname part of the bash prompt string for the current bash session, or does the change apply only to new bash sessions?
Does Debian really pick up a changed hostname if PS1 is re-exported, as the other answers suggest? If so, you can just refresh it like this: export PS1="$PS1" Don't know about debian, but on OS X Mountain Lion this will not have any effect. Neither will the explicit version suggested in other answers (which is exactly equivalent to the above). Even if this works, the prompt must be reset separately in every running shell. In which case, why not just manually set it to the new hostname? Or just launch a new shell (as a subshell with bash , or replace the running process with exec bash )-- the hostname will be updated. To automatically track hostname changes in all running shells , set your prompt like this in your .bashrc : export PS1='\u@$(hostname):\w\$ ' or in your case: export PS1='${debian_chroot:+($debian_chroot)}\u@$(hostname):\w\$ ' I.e., replace \h in your prompt with $(hostname) , and make sure it's enclosed in single quotes. This will execute hostname before every prompt it prints, but so what. It's not going to bring the computer to its knees.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
207,090
Disclaimer : I now use virt-manager to create and manage my VMs and it is really a life saver. It can be used remotely (from a third machine, typically your workstation) if the host does not have a graphical display. The occurrences of vnc in the installation tutorials I found made me think that the "recommended" procedure required X stuff either on host, guest or both. Absolutely not. My mistake. If you are in the same situation, think twice before trying to install the VM from the command line. I'm trying to install a Debian VM in a Debian host using virt-install and I don't know how to pass it the .iso image. virt-install --connect qemu:///system --virt-type kvm --name prod --ram 6144 --disk /srv/vm/prod.qcow,format=qcow2,size=10 --location=/home/jerome/debian-8.0.0-amd64-netinst.iso --network bridge=br0 --os-type linux --os-variant debianwheezy --extra-args='console=tty0 console=ttyS0,115200n8 serial'Starting install...Retrieving file info... | 160 B 00:00 ... ERROR Could not find an installable distribution at '/home/jerome/debian-8.0.0-amd64-netinst.iso'The location must be the root directory of an install tree.Domain installation does not appear to have been successful.If it was, you can restart your domain by running: virsh --connect qemu:///system start prodotherwise, please restart your installation.root@versailles:/etc# The solutions I have seen seem quite twisted, like using apache to serve the .iso image locally as if it came from a remote place. Linux Mint 14 : install Ubuntu 12.10 Server within KVM via CLI ( no GUI ) [Xen-users] installing a vm with virt-install (It is Xen but it looks like it is the same issue anyway.) I can't believe it is that complicated. Is it? man virt-install says: If you want to use those options with CDROM media, you have a few options: * Run virt-install as root and do --location ISO * Mount the ISO at a local directory, and do --location DIRECTORY * Mount the ISO at a local directory, export that directory over local http, and do --location http://localhost/DIRECTORY Isn't this what I'm doing? Someone says he moved the .iso to /cdrom and it worked, but I didn't understand exactly what he did and I couldn't reproduce it. I cannot use --cdrom instead of --location as in this question because --extra-args only works if specified with --location . virt-install --connect qemu:///system --virt-type kvm --name prod --ram 6144 --disk /srv/vm/prod.qcow,format=qcow2,size=10 --cdrom=/home/jerome/debian-8.0.0-amd64-netinst.iso --network bridge=br0 --os-type linux --os-variant debianwheezy --extra-args='console=tty0 console=ttyS0,115200n8 serial'ERROR --extra-args only work if specified with --location.
See the man page for examples of using --location with CDROM media Edit: Log with --debug virt-install --connect qemu:///system --virt-type kvm --name prod --ram 6144 --disk /srv/vm/prod.qcow,format=qcow2,size=10 --location=/home/jerome/debian-8.0.0-amd64-netinst.iso --network bridge=br0 --os-type linux --os-variant debianwheezy --extra-args='console=tty0 console=ttyS0,115200n8 serial' --debug[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:187) Launched with command line: /usr/share/virt-manager/virt-install --connect qemu:///system --virt-type kvm --name prod --ram 6144 --disk /srv/vm/prod.qcow,format=qcow2,size=10 --location=/home/jerome/debian-8.0.0-amd64-netinst.iso --network bridge=br0 --os-type linux --os-variant debianwheezy --extra-args=console=tty0 console=ttyS0,115200n8 serial --debug[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:195) Requesting libvirt URI qemu:///system[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:199) Received libvirt URI qemu:///system[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (virt-install:193) Requesting virt method 'default', hv type 'kvm'.[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (virt-install:432) Received virt method 'kvm'[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (virt-install:433) Hypervisor name is 'hvm'[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:476) DISPLAY is not set: defaulting to nographics.[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (guest:208) Setting Guest.os_variant to 'debianwheezy'[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (diskbackend:171) Path '/srv/vm' is target for pool 'srv-kvm'. Creating volume 'prod.qcow'.[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (connection:228) Fetching volume XML failed: Storage volume not found: no storage vol with matching path '/media/cdrom0/debian-8.0.0-amd64-netinst.iso'[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (virt-install:551) Guest.has_install_phase: TrueStarting install...[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (distroinstaller:417) Using scratchdir=/var/lib/libvirt/boot[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:204) Preparing mount at /var/lib/libvirt/boot/virtinstmnt.srz86f[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:214) mount cmd: ['/bin/mount', '-o', 'ro,loop', '/home/jerome/debian-8.0.0-amd64-netinst.iso', '/var/lib/libvirt/boot/virtinstmnt.srz86f'][mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:302) Finding distro store for location=/home/jerome/debian-8.0.0-amd64-netinst.iso[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/.treeinfo[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:325) Prioritizing distro store=<class 'virtinst.urlfetcher.DebianDistro'>[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/current/images/MANIFEST[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/daily/MANIFEST[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/Fedora[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:99) Fetching URI: /var/lib/libvirt/boot/virtinstmnt.srz86f/.disk/info[mer., 03 juin 2015 
17:46:12 virt-install 12991] DEBUG (urlfetcher:110) Saved file to /var/lib/libvirt/boot/virtinst-info.lZMVqLRetrieving file info... | 160 B 00:00 ... [mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:1016) Regex didn't match, not a ALT Linux distro[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/current/images/MANIFEST[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/install/netboot/version.info[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/SL[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/directory.yast[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/CentOS[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/VERSION[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/Server[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/Client[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/RedHat[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/images/pxeboot/vmlinuz[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/images/boot.iso[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/boot/boot.iso[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/current/images/netboot/mini.iso[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:183) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.srz86f/install/images/boot.iso[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (urlfetcher:225) Cleaning up mount at /var/lib/libvirt/boot/virtinstmnt.srz86f[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:234) File "/usr/share/virt-manager/virt-install", line 876, in <module> sys.exit(main()) File "/usr/share/virt-manager/virt-install", line 870, in main start_install(guest, continue_inst, options) File "/usr/share/virt-manager/virt-install", line 588, in start_install fail(e, do_exit=False) File "/usr/share/virt-manager/virtinst/cli.py", line 234, in fail logging.debug("".join(traceback.format_stack()))[mer., 03 juin 2015 17:46:12 virt-install 12991] ERROR (cli:235) Could not find an installable distribution at '/home/jerome/debian-8.0.0-amd64-netinst.iso'The location must be the root directory of an install tree.[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:237) Traceback (most recent call last): File "/usr/share/virt-manager/virt-install", line 560, in start_install dom = guest.start_install(meter=meter, noboot=options.noreboot) File "/usr/share/virt-manager/virtinst/guest.py", 
line 384, in start_install self._prepare_install(meter, dry) File "/usr/share/virt-manager/virtinst/guest.py", line 277, in _prepare_install util.make_scratchdir(self.conn, self.type)) File "/usr/share/virt-manager/virtinst/installer.py", line 201, in prepare self._prepare(guest, meter, scratchdir) File "/usr/share/virt-manager/virtinst/distroinstaller.py", line 444, in _prepare self._prepare_kernel_url(guest, fetcher) File "/usr/share/virt-manager/virtinst/distroinstaller.py", line 347, in _prepare_kernel_url store = urlfetcher.getDistroStore(guest, fetcher) File "/usr/share/virt-manager/virtinst/urlfetcher.py", line 346, in getDistroStore fetcher.location))ValueError: Could not find an installable distribution at '/home/jerome/debian-8.0.0-amd64-netinst.iso'The location must be the root directory of an install tree.[mer., 03 juin 2015 17:46:12 virt-install 12991] DEBUG (cli:248) Domain installation does not appear to have been successful.If it was, you can restart your domain by running: virsh --connect qemu:///system start prodotherwise, please restart your installation.Domain installation does not appear to have been successful.If it was, you can restart your domain by running: virsh --connect qemu:///system start prodotherwise, please restart your installation. It works with --location http://ftp.us.debian.org/debian/dists/stable/main/installer-amd64/ but isn't it a bit of a shame to do this when an .iso image is available locally? Loss of traceability : you can't reproduce later being sure you get the exact same source. Multiplicated use of bandwidth from servers/mirrors. Need for internet access. Slower.
virt-install tries to extract kernel and initrd files from the ISO image. With --debug you can see the whole activities of it including loop-mounting, searching for those files, etc. Starting install...[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (Installer:182) scratchdir=/var/lib/libvirt/boot[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:184) Preparing mount at /var/lib/libvirt/boot/virtinstmnt.dwcpql[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (OSDistro:65) Attempting to detect distro:[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/.treeinfo[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/Fedora[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/Server[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/Client[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/RedHat[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/CentOS[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/SL[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/directory.yast[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/current/images/MANIFEST[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/daily/MANIFEST[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/current/images/MANIFEST[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/install/netboot/version.info[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/initrd.gz[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (OSDistro:990) Doesn't look like an Ubuntu Distro.[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/VERSION[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/VERSION[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/boot/platform/i86xpv/kernel/unix[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/platform/i86xpv/kernel/unix[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/STARTUP/XNLOADER.SYS[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find 
/var/lib/libvirt/boot/virtinstmnt.dwcpql/images/pxeboot/vmlinuz[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/images/boot.iso[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/boot/boot.iso[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/current/images/netboot/mini.iso[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:169) local hasFile: Couldn't find /var/lib/libvirt/boot/virtinstmnt.dwcpql/install/images/boot.iso[Wed, 03 Jun 2015 07:56:40 virt-install 29692] DEBUG (ImageFetcher:205) Cleaning up mount at /var/lib/libvirt/boot/virtinstmnt.dwcpql[Wed, 03 Jun 2015 07:56:40 virt-install 29692] ERROR (cli:445) Could not find an installable distribution at '/home/yaegashi/debian-8.0.0-amd64-netinst.iso' I suppose virt-install doesn't support Debian netinst ISO images with --location (but somehow Ubuntu's are supported?). To boot a kernel with --extra-args , virt-install needs to have those kernel and corresponding initrd files. --cdrom simply attaches the ISO to the guest's CD drive, which is insufficient to work with --extra-args . If you want to use --extra-args , I recommend using the "netboot" kernel/initrd files by specifying the Debian installer URL of your nearest mirror, like --location http://ftp.us.debian.org/debian/dists/stable/main/installer-amd64/ as described in the manual.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207090", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116842/" ] }
207,136
Marias-MacBook-Air:~ marias$ ls --helpls: illegal option -- -usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
The ls version in Mac OS X is based on BSD ls , and doesn't support long-format options including --help . See the ls manpage or man ls on your system for details.
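If you want the GNU behavior, including long options like --help , one route (an assumption about your setup, since it requires Homebrew) is to install GNU coreutils, whose tools get a g prefix:

brew install coreutils
gls --help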
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207136", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
207,141
I'm attempting to set up an SSH server on my local machine using OpenSSH. When I try to SSH from a remote host into my local SSH server, the SSH server doesn't respond and the request times out. I'm pretty sure there's a really obvious fix for this that I'm simply overlooking. Here's what happens when I try to SSH in from a remote host: yoshimi@robots:/$ ssh -vv [email protected]_6.7p1 Debian-5, OpenSSL 1.0.1k 8 Jan 2015debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to 99.3.26.94 [99.3.26.94] port 22.debug2: fd 3 setting O_NONBLOCKdebug1: connect to address 99.3.26.94 port 22: Connection timed outssh: connect to host 99.3.26.94 port 22: Connection timed out Where robots is my remote host, and 99.3.26.94 is my local SSH server. SSH Is Running volt@arnold:~$ ps -A | grep sshd 5784 ? 00:00:00 sshd Where arnold is my local SSH server. Port Forwarding Is Set Up On the Router I've got my home router set up to forward ports 80 and 22 to my SSH server. Interestingly, port 80 worked without a hitch -- goes straight to the Apache web directory. Port 22 -- not so much. NMap Says It's Filtered yoshimi@robots:/$ nmap -p 22 99.3.26.94Starting Nmap 6.47 ( http://nmap.org ) at 2015-06-02 14:45 EDTNmap scan report for 99-3-26-94.lightspeed.bcvloh.sbcglobal.net (99.3.26.94)Host is up (0.33s latency).PORT STATE SERVICE22/tcp filtered sshNmap done: 1 IP address (1 host up) scanned in 7.59 seconds Where robots is my remote host, and 99.3.26.94 is my local SSH server. It's Not IPTables (I think) volt@arnold:~$ sudo iptables -LChain INPUT (policy ACCEPT)target prot opt source destination fail2ban-ssh tcp -- anywhere anywhere multiport dports sshACCEPT tcp -- anywhere anywhere tcp dpt:sshACCEPT tcp -- anywhere anywhere tcp dpt:httpChain FORWARD (policy ACCEPT)target prot opt source destination Chain OUTPUT (policy ACCEPT)target prot opt source destination Chain fail2ban-ssh (1 references)target prot opt source destination RETURN all -- anywhere anywhere ...And I don't have any other firewalls in place -- it's a relatively fresh Debian netinst. So, then: What else could it be? It certainly appears to be a firewall-y sort of thing to just ignore traffic, but if it's not the router, it's not iptables, and it's not another firewall on the SSH server, ...what the heck else is there?? EDIT: Connection Request from Internal Network Error yoshimi@robots:/$ ssh [email protected]: connect to host 192.168.1.90 port 22: No route to host
A Very Disappointing Self-Answer Having set this problem aside for a day and come back to it, I was both relieved and perturbed (more perturbed than relieved) to find that everything was, mysteriously, working properly. So, What Was the Issue? No settings were changed or adjusted -- not on the router, not on the SSH server, and not on the SSH client's machine. It's fairly safe to say it was the router not handling the incoming traffic properly, in spite of proper settings. Given that dinky home router software isn't really designed to deal with port forwarding, it took the poor guy a while to implement the necessary changes. But It's Been Like 6 Hours!! Yeah dude, I know. I spent all day trying to figure out what was wrong -- and didn't ever find it because there wasn't anything wrong. Evidently, it can take 6 hours -- possibly more -- for the router settings to take effect. So How Do I Know If This Is My Issue? A nifty tool I came across during this escapade is tcpdump . This lean little guy sniffs traffic for you, offering valuable insight into what's actually going on. Plus, he's got some super filtering features that allow you to narrow down exactly what you want to look at/for. For example, the command: tcpdump -i wlan1 port 22 -n -Q inout Tells tcpdump to look for traffic via the wlan1 interface ( -i = 'interface'), only through port 22, ignore DNS name resolution ( -n = 'no name resolution'), and we want to see both incoming and outgoing traffic ( -Q accepts in , out , or inout ; inout is the default). By running this command on your SSH server while attempting to connect via a remote machine, it quickly becomes clear where precisely the problem lies. There are, essentially, 3 possibilities: If you're seeing incoming traffic from the remote machine, but no outgoing traffic from your local server, the problem lies with the server: there's probably a firewall rule that needs to be changed, etc. If you're seeing both incoming and outgoing , but your remote machine isn't receiving the response, it's most likely the router: it's allowing the incoming traffic, but dropping your outgoing packets. If there's no traffic at all , that's probably a router issue as well: the remote machine's SYN packets are being ignored and dropped by the router before they even reach your server. And once you've discovered where the problem lies, a fix is (usually) trivial.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117791/" ] }
207,142
I am trying to fetch an xml block like this: <machine name="sample1" min="1" max="10" idleTime="300" backend="ABC,XYZ"> <handler className="com.abc.xyz.qwerty.foo.FooBar" /> <details queue="ABC.SAMPLE" suggExpiry="30" minExpiry="4" maxExpiry="500"/> </machine> details queue will be the input parameter. I was successful when machine name (@ the start of the block) was my parameter by using awk '/<machine.*name="sample1"/,/<\/machine>/' Target.xml How am I going to fetch the same xml block when the input parameter is the details queue (@ the middle of the block)?
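One possible awk approach (a sketch, assuming the blocks are delimited by the <machine and </machine> tags exactly as shown): buffer each block and print it only if it contains the wanted attribute.

awk -v pat='queue="ABC.SAMPLE"' '
  /<machine/    { buf = ""; inblk = 1 }   # start a fresh buffer at the opening tag
  inblk         { buf = buf $0 ORS }      # accumulate the lines of the current block
  /<\/machine>/ { inblk = 0               # at the closing tag, test and print
                  if (index(buf, pat)) printf "%s", buf }
' Target.xml

index() does a literal substring match, which avoids having to escape the dot in ABC.SAMPLE for a regex.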
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207142", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117383/" ] }
207,148
I have a script somewhere in my home directory. I need to give another user or a group permissions such that when they execute that specific script, it executes as though I were logged in, with all the permissions that my ID has. I do not want to use sudo or su and go through setting them up as sudoers or entering passwords.
With sudo you can get very granular with your permissions. If you want to give a user permission to only run your script and nothing else you can add this line to your /etc/sudoers: user ALL=(yourusername) NOPASSWD: /path/to/your/script Then, as the other user, you would run: sudo -u yourusername /path/to/your/script
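When adding a line like that, it's safest to edit the file with visudo rather than a plain editor, since a syntax error in /etc/sudoers can lock you out of sudo entirely:

sudo visudo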
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21470/" ] }
207,171
I have this file 1 deiauk David Smith from California 12 582 edvin from Nevada 12 5 8 95 2 48 53 jaco My Name Is Jacob I'm from NY 5 6 845 156 5854 from Miami And I need to get the values after the specific word from . Is it possible to do that in shell? My output should be CaliforniaNevadaNYMiami
Or: awk '{for (I=1;I<NF;I++) if ($I == "from") print $(I+1)}' file
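If GNU grep built with PCRE support is available (an assumption about your system), a shorter alternative is possible; \K discards everything matched so far, leaving only the word after from :

grep -oP 'from \K\S+' file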
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111696/" ] }
207,195
I want to remove files, more specifically symbolic links, under /usr/include that are newer than 2 JUN 22:27. How can I do this?
You might want to use find -newermt . Make sure to review files to be removed first: find /usr/include -type l -newermt "Jun 2 22:27" Use -delete to perform actual removes. find /usr/include -type l -newermt "Jun 2 22:27" -delete
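Note that without a year the timestamp is interpreted in the current year; to be unambiguous you can spell the date out in full, e.g.:

find /usr/include -type l -newermt "2015-06-02 22:27"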
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112677/" ] }
207,210
pwd gives me /data/users/me/some/random/folder Is there an easy way of obtaining ~/some/random/folder from pwd ?
If you're using bash, then the dirs builtin has the desired behavior: dirs +0~/some/random/folder (Note +0 , not -0 .) With zsh : dirs~/some/random/folder To be exact, we first need to clear the directory stack, else dirs would print all contents: dirs -c; dirs Or with zsh 's print builtin: print -rD $PWD or print -P %~ (that one turns prompt expansion on. %~ in $PS1 expands to the current directory with $HOME replaced with ~ but also handles other named directories like the home directory of other users or named directories that you define yourself).
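A pure-shell alternative without dirs (a sketch; note it only substitutes when $PWD really starts with $HOME , so /home/user2 is left alone when HOME is /home/user ):

case $PWD in
  "$HOME")   printf '~\n' ;;                      # exactly at home
  "$HOME"/*) printf '~%s\n' "${PWD#"$HOME"}" ;;   # inside home: strip the prefix
  *)         printf '%s\n' "$PWD" ;;              # elsewhere: print as-is
esac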
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71888/" ] }
207,216
I wanted to have some of the sub-folders of my home directory (like Music, Downloads, Videos) on my hard-disk RAID instead of my SSD. Therefore I deleted those folders in my home directory, recreated them on the RAID and made symlinks pointing from my home folder to the RAID (e.g. /home/user/Music > /mnt/home-big-data/user/Music ). However, the newly created folders don't have the correct metadata properties in Gnome 3 (wrong icon, folders won't open as a Music folder, etc.). What is the best way to remap those features to the folders on the RAID? I tried editing the /home/user/.config/user-dirs.dirs and setting XDG_DOWNLOAD_DIR="$HOME/Downloads" (according to the symlink) but it was reset to XDG_DOWNLOAD_DIR="$HOME/" after reboot. Another thing I tried was using gvfs-set-attribute to reset the standard icon, but this also failed. Which would be the correct way to do that?
Most likely, your user-dirs are reset to $HOME/ each time you reboot because those locations are not available on session startup when xdg-user-dirs-update is automatically run. After editing ~/.config/user-dirs.dirs a possible solution is to prevent xdg-user-dirs-update from running (and resetting your configuration at each session start up) by adding enabled=False to your user-dirs.conf : enabled= boolean When set to False , xdg-user-dirs-update will not change the XDG user dirs configuration. So to disable it only for your user account, add enabled=False to ~/.config/user-dirs.conf (this will override system-wide settings). If you want to disable it for all users add that key/value to /etc/xdg/user-dirs.conf .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7129/" ] }
207,217
I am writing a shell script, using any general UNIX commands. I have to retrieve the line that has the least characters (whitespace included). There can be up to around 20 lines. I know I can use head -$L | tail -1 | wc -m to find the character count of line L. The problem is, the only method I can think of, using that, would be to manually write a mess of if statements, comparing the values. Example data: seven/74for8 eight?five! Would return 4for since that line had the least characters. In my case, if multiple lines have the shortest length, a single one should be returned. It does not matter which one is selected, as long as it is of the minimum length. But I don't see the harm in showing both ways for other users with other situations.
A Perl way. Note that if there are many lines of the same, shortest length, this approach will only print one of them: perl -lne '$m//=$_; $m=$_ if length()<length($m); END{print $m if $.}' file Explanation perl -lne : -n means "read the input file line by line", -l causes trailing newlines to be removed from each input line and a newline to be added to each print call; and -e is the script that will be applied to each line. $m//=$_ : set $m to the current line ( $_ ) unless $m is defined. The //= operator is available since Perl 5.10.0. $m=$_ if length()<length($m) : if the length of the current value of $m is greater than the length of the current line, save the current line ( $_ ) as $m . END{print $m if $.} : once all lines have been processed, print the current value of $m , the shortest line. The if $. ensures that this only happens when the line number ( $. ) is defined, avoiding printing an empty line for blank input. Alternatively, since your file is small enough to fit in memory, you can do: perl -e '@K=sort{length($a) <=> length($b)}<>; print "$K[0]"' file Explanation @K=sort{length($a) <=> length($b)}<> : <> here is an array whose elements are the lines of the file. The sort will sort them according to their length and the sorted lines are saved as array @K . print "$K[0]" : print the first element of array @K : the shortest line. If you want to print all shortest lines, you can use perl -e '@K=sort{length($a) <=> length($b)}<>; print grep {length($_)==length($K[0])}@K; ' file
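An awk equivalent of the first Perl approach (prints a single shortest line) might look like:

awk 'NR == 1 || length($0) < min { min = length($0); line = $0 }
     END { if (NR) print line }' file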
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45038/" ] }
207,239
I am trying to open a .zip file via vim . By default the cursor points to the bottom of the file once it opens. How can I bring the cursor to the beginning of the file as soon as I open a zip file? How can I change the default behavior of the vim to point it to the beginning of the file as soon as I open a *.zip file?
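One avenue to try (strictly a sketch, untested against the zip.vim plugin that renders the archive listing, so treat the event choice as an assumption) is an autocommand in ~/.vimrc that moves the cursor to the first line once the listing buffer has been read:

" jump to the top of the listing; if BufReadPost never fires for the
" plugin-generated buffer, try BufEnter instead
autocmd BufReadPost *.zip normal! gg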
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70728/" ] }
207,276
I would like to know how file types are known if filenames don't have suffixes. For example, a file named myfile could be binary or text to start with, how does the system know if the file is binary or text?
The file utility determines the filetype in 3 ways: First the filesystem tests : Within those tests one of the stat family system calls is invoked on the file. This returns the different unix file types : regular file, directory, link, character device, block device, named pipe or a socket. Depending on that, the magic tests are made. The magic tests are a bit more complex. File types are guessed by a database of patterns called the magic file . Some file types can be determined by reading a bit or number in a particular place within the file (binaries for example). The magic file contains " magic numbers " to test the file for, whether it contains them or not, and which text info should be printed. Those " magic numbers " can be 1-4 byte values, strings, dates or even regular expressions. With further tests additional information can be found. In the case of an executable, additional information would be whether it's dynamically linked or not, stripped or not, or the architecture. Sometimes multiple tests must pass before the file type can be truly identified. But anyway, it doesn't matter how many tests are performed, it's always just a good guess . Here are the first 8 bytes of some common filetypes, which can help us get a feeling for what these magic numbers can look like:

      Hexadecimal               ASCII
PNG   89 50 4E 47|0D 0A 1A 0A   ‰PNG|....
JPG   FF D8 FF E1|1D 16 45 78   ÿØÿá|..Ex
JPG   FF D8 FF E0|00 10 4A 46   ÿØÿà|..JF
ZIP   50 4B 03 04|0A 00 00 00   PK..|....
PDF   25 50 44 46|2D 31 2E 35   %PDF|-1.5

If the file type can't be found via the magic tests, the file seems to be a text file and file looks for the encoding of the contents. The encoding is distinguished by the different ranges and sequences of bytes that constitute printable text in each set. The line breaks are also investigated, depending on their HEX values:

0A ( \n ) classifies a Un*x/Linux/BSD/OSX terminated file
0D 0A ( \r\n ) are files from Microsoft operating systems
0D ( \r ) would be Mac OS until version 9
15 ( \025 ) would be IBM's AIX

Now the language tests start. If it appears to be a text file, the file is searched for particular strings to find out which language it contains (C, Perl, Bash). Some script languages can also be identified via the hashbang ( #!/bin/interpreter ) in the first line of the script. If nothing applies to the file, the file type can't be determined and file just prints "data". So, you see there is no need for a suffix. A suffix could anyway be confusing, if set wrong.
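To watch these tests in action you can just run the utility; on Linux, -i (or --mime ) reports the MIME type instead of the human-readable guess:

file myfile
file -i myfile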
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/207276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
207,294
I want to take down data in /path/to/data/folder/month/date/hour/minute/file and symlink it to /path/to/recent/file and do this automatically every time a file is created. Assuming I will not know ahead of time if /path/to/recent/file exists, how can I go about creating it (if it doesn't exist) or replacing it (if it does exist)? I am sure I can just check if it exists and then do a delete, symlink, but I'm wondering if there is a simple command which will do what I want in one step.
This is the purpose of ln 's -f option: it removes existing destination files, if any, before creating the link. ln -sf /path/to/data/folder/month/date/hour/minute/file /path/to/recent/file will create the symlink /path/to/recent/file pointing to /path/to/data/folder/month/date/hour/minute/file , replacing any existing file or symlink to a file if necessary (and working fine if nothing exists there already). If a directory, or symlink to a directory, already exists with the target name, the symlink will be created inside it (so you'd end up with /path/to/recent/file/file in the example above). The -n option, available in some versions of ln , will take care of symlinks to directories for you, replacing them as necessary: ln -sfn /path/to/data/folder/month/date/hour/minute/file /path/to/recent/file POSIX ln doesn’t specify -n so you can’t rely on it generally. Much of ln ’s behaviour is implementation-defined so you really need to check the specifics of the system you’re using. If you’re using GNU ln , you can use the -t and -T options too, to make its behaviour fully predictable in the presence of directories ( i.e. fail instead of creating the link inside the existing directory with the same name).
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/207294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32234/" ] }
207,317
I have a directory similar to the following: -rw-r--r-- 1 root root 223K Apr 28 14:25 2015.04.28_14.25-rw-r--r-- 1 root root 253K Apr 28 14:55 2015.04.28_14.55-rw-r--r-- 1 root root 276K Apr 28 15:25 2015.04.28_15.25-rw-r--r-- 1 root root 254K Apr 28 15:55 2015.04.28_15.55-rw-r--r-- 1 root root 122K Apr 29 09:08 2015.04.29_09.08-rw-r--r-- 1 root root 127K Apr 29 09:38 2015.04.29_09.38-rw-r--r-- 1 root root 67K Apr 29 11:43 2015.04.29_11.43-rw-r--r-- 1 root root 137K May 1 12:13 2015.04.29_12.13-rw-r--r-- 1 root root 125K May 1 12:43 2015.04.29_12.43-rw-r--r-- 1 root root 165K May 1 13:13 2015.04.29_13.13-rw-r--r-- 1 root root 110K May 1 13:43 2015.04.29_13.43 My question is, how would I find the largest file from each date? For example, largest file from Apr 28, largest from Apr 29, May 1, etc. OS info: Linux Kali 3.18.0-kali3-amd64 #1 SMP Debian 3.18.6-1~kali2 (2015-03-02) x86_64 GNU/Linux
On GNU/anything, ls -l --time-style=+%s | awk '{$6 = int($6/86400); print}' | sort -nk6,6 -nrk5,5 | sort -sunk6,6 Here the awk step turns each file's epoch mtime into a day number, the first sort orders by day and then by size descending, and the final stable unique sort keeps just the first (i.e. largest) entry for each day. That will get you UTC boundaries; add your local time offset to the calc as needed, e.g. int(($6-7*3600)/86400) for -0700 midnight boundaries.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207317", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117923/" ] }
207,326
What does the -ar flag mean here: cp -ar ../foo/bar/. qux/quux/ I'm quite new to command-line languages and trying my best to learn. I'm just not sure about the -ar flag. -r is recursive, right? Then can you just add -a and it becomes -ar ? Does -a mean all ?
Generally, multiple single letter flags can be combined into a single argument. In this case: cp -ar ../foo/bar/. qux/quux/ is equivalent to: cp -a -r ../foo/bar/. qux/quux/ If you look in the manual, it will tell you that -a is "same as -dR --preserve=all". You can look all of those up if you want, but the short version is that the -a flag causes the new files to have the same permissions, owner, timestamp, etc. as the original files. (Normally, they would be owned by the user performing the 'cp' with permissions defined by your shell configuration, and a current timestamp.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207326", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105924/" ] }
207,365
I am using Open SSH (OpenSSH_6.6.1p1, OpenSSL 1.0.1i 6 Aug 2014) in Windows 8.1. X11 Forwarding does not appear to be working. The DISPLAY environment variable does not appear to be set. For example, if I use BitVise or Putty to connect, and run env, I see: [marko@vm:~]$ envXDG_SESSION_ID=6TERM=xtermSHELL=/bin/bashSSH_CLIENT=192.168.1.174 61102 22SSH_TTY=/dev/pts/0USER=markoMAIL=/var/mail/markoPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/gamesPWD=/home/markoLANG=en_CA.UTF-8NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascriptSHLVL=1HOME=/home/markoLANGUAGE=en_CA:enLOGNAME=markoSSH_CONNECTION=192.168.1.174 61102 192.168.1.64 22XDG_RUNTIME_DIR=/run/user/1000DISPLAY=localhost:10.0_=/usr/bin/env If I instead use OpenSSH (ssh -X marko@vm): [marko@vm:~]$ envXDG_SESSION_ID=8TERM=cygwinSHELL=/bin/bashSSH_CLIENT=192.168.1.174 61150 22SSH_TTY=/dev/pts/1USER=markoMAIL=/var/mail/markoPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/gamesPWD=/home/markoLANG=en_CA.UTF-8NODE_PATH=/usr/lib/nodejs:/usr/lib/node_modules:/usr/share/javascriptSHLVL=1HOME=/home/markoLANGUAGE=en_CA:enLOGNAME=markoSSH_CONNECTION=192.168.1.174 61150 192.168.1.64 22XDG_RUNTIME_DIR=/run/user/1000_=/usr/bin/env
Have you set the DISPLAY environment variable on the client? I'm not sure which shell you are using, but with a Bourne shell derivative (like bash), please try: export DISPLAY=127.0.0.1:0ssh -X marko@vm Or if you're using cmd.exe: set DISPLAY=127.0.0.1:0ssh -X marko@vm Or if you're using powershell.exe: $env:DISPLAY = '127.0.0.1:0'ssh -X marko@vm
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50909/" ] }
207,369
I have a bunch of jpeg images. Say 001abcd.jpg , 002abcd.jpg , and so on. I want to capture the filename and add it as text in one corner of the image itself. So the result would be, for example, that the file 003abcd.jpg has "003abcd" imprinted in one corner of that image. (The extension need not be there.) I want a terminal command that can batch-process hundreds of images and add their filenames to the respective images. I am using Linux Mint 17. Something tells me that imagemagick can be useful, but I don't know scripting. It is easy to put a single common text in all images, but I don't know how to put unique filenames as the text in the respective images in one go.
mogrify does batch processing so you could use something like this (change the font, size, color, position etc as per your taste): mogrify -font Liberation-Sans -fill white -undercolor '#00000080' \-pointsize 26 -gravity NorthEast -annotate +10+10 %t *.jpg to add the file name without extension ( %t ) to all jpg s in the current dir, e.g. orca-lm-1.jpg : This will overwrite your files so make sure you have backup copies. If you use a different format (e.g. png ) for the output files then the original files will remain unchanged: mogrify -format 'png' -font Liberation-Sans -fill white -undercolor \'#00000080' -pointsize 26 -gravity NorthEast -annotate +10+10 %t *.jpg
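If you would rather keep the originals untouched without changing the format, a sketch using convert and a separate output directory (the directory name annotated is arbitrary):

mkdir -p annotated
for f in *.jpg; do
  convert "$f" -font Liberation-Sans -fill white -undercolor '#00000080' \
          -pointsize 26 -gravity NorthEast -annotate +10+10 "${f%.jpg}" \
          "annotated/$f"
done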
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117977/" ] }
207,438
I have file1 & file2 , and I want to cat file1 into file2 after a match of the pattern 22 . Can I do it with cat , or do I need to go for awk or sed ? file1 aabbcc file2 11223344 Resultant file2 1122aabbcc3344
With sed : sed -e '/22/r file1' file2
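With GNU sed you can also apply it in place; note that r queues file1 after every line matching 22 , not just the first:

sed -i -e '/22/r file1' file2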
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93151/" ] }
207,442
I am trying to use afl-fuzz with openssl in Ubuntu. A normal usage of afl-fuzz would be: afl-gcc test.c //-- this will produce a.outmkdir testcasesecho "Test case here." > testcases/case1afl-fuzz -i testcases -o findings ./a.out Now for openssl it would be something like: afl-gcc ./configmake //-- not sure of this :)afl-fuzz -i test -o findings <exe_name> where "test" is the folder with testcases for openssl My question is: what is the parameter for "exe_name" for openssl? And please correct me if I'm wrong with the rest of the code. Thank you
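A sketch of the usual AFL pattern for a library like OpenSSL: afl-fuzz needs an instrumented executable, so there is no ready-made "exe_name" inside the OpenSSL tree. You typically write a small harness program (the name harness.c below is hypothetical) that feeds the file given as argv[1] to whichever OpenSSL API you want to exercise, and fuzz that:

# build OpenSSL with afl instrumentation (assumes the build honors $CC)
CC=afl-gcc ./config
make
# compile your own harness against the instrumented libraries (hypothetical file)
afl-gcc harness.c -o harness -lssl -lcrypto
# afl-fuzz substitutes @@ with the path of each generated test case
afl-fuzz -i test -o findings ./harness @@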
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207442", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118030/" ] }
207,452
I'm creating a basic bash script which deploys a simple webapp. My current code is as follows. #!/bin/bashclearecho "********************************************";echo "Hello, I'm going to deploy the QuizProject";echo "********************************************";git pull --all;#Only need to execute if option is presentcomposer install;echo "********************************************";echo "All the jobs done! Cheers";echo "********************************************"; At the moment I'm running this script on the command line as bash deploy.sh But this will execute all the commands in the bash file. I want to make it so that if a specific option is passed, then only the "composer install" step runs: bash -composer deploy.sh
I understand your question that you want to control the function. Maybe it's best done with options. Here's one way: #!/bin/bashdo_all=1do_git=0do_install=0while getopts "gi" optdo case $opt in (g) do_all=0 ; do_git=1 ;; (i) do_all=0 ; do_install=1 ;; (*) printf "Illegal option '-%s'\n" "$opt" && exit 1 ;; esacdoneclearecho "********************************************";echo "Hello, I'm going to deploy the QuizProject";echo "********************************************";(( do_all || do_git )) && git pull --all;(( do_all || do_install )) && composer install;echo "********************************************";echo "All the jobs done! Cheers";echo "********************************************"; If you call that script without options: bash deploy.sh both, git and install, will be called. If you call it with option -i (or resp. -g ) only the install (resp. the git call) will be done: bash deploy.sh -ibash deploy.sh -g You can also specify both options to do both, in one of these ways: bash deploy.sh -gibash deploy.sh -g -i
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100061/" ] }
207,469
I've got a problem with this (shortened) systemd service file: [Unit]Description=control FOO daemonAfter=syslog.target network.target[Service]Type=forkingUser=FOOdGroup=FOOExecStartPre=/bin/mkdir -p /var/run/FOOd/ExecStartPre=/bin/chown -R FOOd:FOO /var/run/FOOd/ExecStart=/usr/local/bin/FOOd -P /var/run/FOOd/FOOd.pidPIDFile=/var/run/FOOd/FOOd.pid[Install]WantedBy=multi-user.target Let FOOd be the user name and FOO the group name, which already exist for my daemon /usr/local/bin/FOOd . I need to create the directory /var/run/FOOd/ before starting the daemon process /usr/local/bin/FOOd via # systemctl start FOOd.service . This fails, because mkdir can't create the directory due to permissions: ...Jun 03 16:18:49 PC0515546 mkdir[2469]: /bin/mkdir: cannot create directory /var/run/FOOd/: permission deniedJun 03 16:18:49 PC0515546 systemd[1]: FOOd.service: control process exited, code=exited status=1... Why does mkdir fail at ExecStartPre and how can I fix it? (And no, I can't use sudo for mkdir...)
You need to add PermissionsStartOnly=true to [Service] . Your user FOOd is of course not authorized to create a directory in /var/run . To cite the man page: Takes a boolean argument. If true, the permission-related execution options, as configured with User= and similar options (see systemd.exec(5) for more information), are only applied to the process started with ExecStart=, and not to the various other ExecStartPre=, ExecStartPost=, ExecReload=, ExecStop=, and ExecStopPost= commands. If false, the setting is applied to all configured commands the same way. Defaults to false.
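On recent systemd versions there is a more direct mechanism than PermissionsStartOnly for this particular pattern: RuntimeDirectory= lets systemd create the directory under /run with the right ownership before the service starts, so both ExecStartPre lines can be dropped entirely. A sketch (assuming /var/run is the usual symlink to /run ):

[Service]
RuntimeDirectory=FOOd
RuntimeDirectoryMode=0750

The directory /run/FOOd is then created owned by the configured User= and Group= , and removed again when the service stops.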
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118045/" ] }
207,487
This script: #!/bin/bashtmppipe=/tmp/temppipemkfifo $tmppipeecho "test" > $tmppipecat $tmppipeexit does not terminate. I assume that the cat command is waiting for an EOF from the pipe; how do I send one?
No, it's echo test > "$tmppipe" # BTW, you've got the quotes in the wrong places that hangs. More precisely, it's the shell opening the pipe for writing before running echo . Pipes are inter-process communication mechanisms, they are to be used between processes running concurrently . Here, the open(WR_ONLY) ( > ) will block until another process does an open in read mode. echo test > "$tmppipe" &cat < "$tmppipe" will work because echo and cat run concurrently. On Linux, you can get away with: exec 3<> "$tmppipe" 4< "$tmppipe"echo test >&3exec 3>&-cat <&4 That works because read+write opens ( <> ) on pipes don't block on Linux, and because the test\n output by echo is small enough to fit in the pipe, so you can do the write and the read sequentially. It wouldn't work for a larger output like: exec 3<> "$tmppipe" 4< "$tmppipe"seq 100000 >&3exec 3>&-cat <&4 Because seq would fill up the pipe (64kiB in current versions of Linux) and block until some other process reads data off that pipe, which will never happen because cat won't run until seq has finished. Note that: echo test 1<> "$tmppipe"cat < "$tmppipe" would not work either because the echo command line would open the pipe, write test and then close the pipe (and then the system would destroy it as there's no file descriptor open to it anymore). So the next cat command line would try to instantiate a new pipe (and block until something opens the fifo file for writing).
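Putting that together, one way to rewrite the original script so it terminates is to run the writer in the background, so the reader and the writer can open the pipe concurrently:

#!/bin/bash
tmppipe=/tmp/temppipe
mkfifo "$tmppipe"
echo "test" > "$tmppipe" &   # writer in the background...
cat "$tmppipe"               # ...so the reader can open the other end
wait                         # reap the background writer
rm -f "$tmppipe"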
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5032/" ] }
207,504
I would like to find files whose name has only 4 characters. Example, there are three files under /tmp : $ ls /tmptxtfilelinux Output should only show file because it only has 4 characters.
Use the ? wildcard for file globbing: ls -d /tmp/???? This will print all files and directories whose filename is 4 characters long. As suggested by @roaima, the -d flag will prevent ls from displaying the content of subdirectories that match the pattern.
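With GNU find the equivalent (add -type f to restrict the match to regular files) is:

find /tmp -maxdepth 1 -name '????'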
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/207504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118063/" ] }
207,566
How can I grep / awk / sed a file looking for some pattern, and print the entire line (including continuation lines if the matched line ends with \ ? File foo.txt contains: somethingwhateverthisXXX line \ has a continuation lineblahblaha \multipleXXX \continuation \line What should I execute to get (not necessarily in one line, not necessarily removing multiple spaces): thisXXX line has a continuation linea multipleXXX continuation line BTW I'm using bash and fedora21, so it does not need to be POSIX-compliant (but I'll appreciate a solution if it is POSIX)
Another approach using perl to remove newlines that are preceded by \ and whitespace: $ perl -pe 's/\\\n/ /' file | grep XXXthisXXX line has a continuation linea multipleXXX continuation line To remove extra spaces, pass it through sed: $ perl -pe 's/\\\n/ /' file | grep XXX | sed 's/ */ /g'thisXXX line has a continuation linea multipleXXX continuation line
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17326/" ] }
207,585
I would like to pass params to a bash script, dd-style.Basically, I want ./script a=1 b=43 to have the same effect as a=1 b=43 ./script I thought I could achieve this with: for arg in "$@"; do eval "$arg";done What's a good way of ensuring that the eval is safe, i.e. that "$arg" matches a static (no code execution), variable assignment? Or is there a better way to do this? (I would like to keep this simple).
You can do this in bash without eval (and without artificial escaping): for arg in "$@"; do if [[ $arg =~ ^[[:alpha:]_][[:alnum:]_]*= ]]; then declare +i +a +A "$arg" fidone Edit: Based on a comment by Stéphane Chazelas, I added flags to the declare to avoid having the variable assigned being already declared as an array or integer variable, which will avoid a number of cases in which declare will evaluate the value part of the key=val argument. (The +a will cause an error if the variable to be set is already declared as an array variable, for example.) All of these vulnerabilities relate to using this syntax to reassign existing (array or integer) variables, which would typically be well-known shell variables. In fact, this is just an instance of a class of injection attacks which will equally affect eval -based solutions: it would really be much better to only allow known argument names than to blindly set whichever variable happened to be present in the command-line. (Consider what happens if the command line sets PATH , for example. Or resets PS1 to include some evaluation which will happen at the next prompt display.) Rather than use bash variables, I'd prefer to use an associative array of named arguments, which is both easier to set, and much safer. Alternatively, it could set actual bash variables, but only if their names are in an associative array of legitimate arguments. As an example of the latter approach: # Could use this array for default values, too.declare -A options=([bs]= [if]= [of]=)for arg in "$@"; do # Make sure that it is an assignment. # -v is not an option for many bash versions if [[ $arg =~ ^[[:alpha:]_][[:alnum:]_]*= && ${options[${arg%%=*}]+ok} == ok ]]; then declare "$arg" # or, to put it into the options array # options[${arg%%=*}]=${arg#*=} fidone
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
207,591
I just installed NodeJS & NPM on Debian Jessie using the recommended approach:

apt-get install curl
curl -sL https://deb.nodesource.com/setup | bash -
apt-get install -y nodejs

However it’s a pretty old version (node v0.10.38 & npm 1.4.28). Any suggestions on the easiest way to install newer versions, e.g., currently node is v0.12.4 and npm is 2.7.4? Is installing from source my only approach?
There is a setup script available for Node.js (see installation instructions ):

# Adapt version number to the version you want
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
sudo apt-get install -y nodejs

A little comment: In my humble opinion, it's a very bad idea to curl | sudo bash . You are running a script you did not check with root privileges. It's always better to download the script, read through it, check for malicious commands, and after that , run it. But that's just my two cents. The installation can be achieved manually in a few steps following the manual installation procedure :

- Remove old PPA (if applicable)
- Add node repo ssh key
- Add node repo to sources.list
- Update package list and install using your favorite apt tool
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/207591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118108/" ] }
207,617
I'm not sure if this gets the past date within the current day or if it only subtracts 30 or 31 days. e.g. If the current date is March 28th , 1 month ago must be February 28th , but what happens when it's March 30th ? Scenario: I want to backup some files each day; the script will save these files with the current date in $(date +%Y%m%d) format, like 20150603_bckp.tar.gz , then when the next month arrives, remove all those files from 1 month ago except the 1st's and the 15th's files, so this is my condition:

past_month=$(date -d "-1 month" +%Y%m%d)
day=$(date +%d)
if [ "$day" != 01 ] && [ "$day" != 15 ]
then
  rm /path/of/files/${past_month}_bckp.tar.gz
  echo "Depuration done"
else
  echo "Keep file"
fi

But I want to know: what will happen when the date is the 30th or 31st, or even in the past-February example? Will it keep those files, or remove the day-1st files? When it's the 31st the depuration will execute, so if the past month only had 30 days, will this remove the day-1st file? I hope that's clear.
- 1 month will subtract one from the month number, and then if the resulting date is not valid ( February 30 , for example), adjust it so that it is valid. So December 31 - 1 month is December 1 , not a day in November, and March 31 - 1 month is March 3 (unless executed in a leap year). Here's a quote from the info page for GNU date (which is the date version which implements this syntax), which includes a good suggestion to make the arithmetic more robust:

    The fuzz in units can cause problems with relative items. For example, 2003-07-31 -1 month might evaluate to 2003-07-01, because 2003-06-31 is an invalid date. To determine the previous month more reliably, you can ask for the month before the 15th of the current month. For example:

    $ date -R
    Thu, 31 Jul 2003 13:02:39 -0700
    $ date --date='-1 month' +'Last month was %B?'
    Last month was July?
    $ date --date="$(date +%Y-%m-15) -1 month" +'Last month was %B!'
    Last month was June!

Another warning, also quoted from the info page: Also, take care when manipulating dates around clock changes such as daylight saving leaps. In a few cases these have added or subtracted as much as 24 hours from the clock, so it is often wise to adopt universal time by setting the TZ environment variable to UTC0 before embarking on calendrical calculations.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68382/" ] }
207,669
I am trying to migrate an init.d script from CentOS 6.6 to Ubuntu 14.04. The CentOS machine's start, status, and stop commands are:

daemon --pidfile=/path/to/pidfile /path/to/daemon/script
status -p /path/to/pidfile /path/to/daemon/script
killproc -p /path/to/pidfile /path/to/daemon/script

The start command works fine in its original form on Ubuntu 14.04 but the other two functions, status and killproc, are not defined on Ubuntu distros. What is the equivalent of these commands on Ubuntu machines?
On my Ubuntu system, killproc is provided by /lib/lsb/init-functions . http://refspecs.linuxbase.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptfunc.html Have you tried putting . /lib/lsb/init-functions near the top of your init script?

$ dpkg -S /lib/lsb/init-functions
lsb-base: /lib/lsb/init-functions
$ dpkg -S /sbin/status
upstart: /sbin/status
$ apt-cache show lsb-base
Package: lsb-base
Priority: required
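To make the migration concrete, here is a minimal sketch (my addition) of what the Ubuntu side could look like; the paths are the placeholders from the question, and start-stop-daemon / status_of_proc are the usual Debian/Ubuntu stand-ins for Red Hat's daemon and status helpers - verify the exact option names against your release:

#!/bin/sh
# Minimal init script sketch for Debian/Ubuntu (illustrative only)
. /lib/lsb/init-functions

case "$1" in
  start)
    start-stop-daemon --start --pidfile /path/to/pidfile \
        --exec /path/to/daemon/script
    ;;
  stop)
    killproc -p /path/to/pidfile /path/to/daemon/script
    ;;
  status)
    status_of_proc -p /path/to/pidfile /path/to/daemon/script script
    ;;
esac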
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
207,723
Some Linux binaries end with a "d", for example sshd, httpd, ppd, etc. Why is this so?
The d at the end of some process names means daemon . A daemon is a process that works in the background, as services do. Background here means that you don't have direct access to it; it isn't waiting for you! If you configure a service to come up after system boot, it will run automatically. A bit more technically: Daemons are usually instantiated as processes. A process is an executing (i.e., running) instance of a program. Processes are managed by the kernel (i.e., the core of the operating system), which assigns each a unique process identification number (PID). There are three basic types of processes in Linux: interactive, batch and daemon. Interactive processes are run interactively by a user at the command line (i.e., all-text mode). Batch processes are submitted from a queue of processes and are not associated with the command line; they are well suited for performing recurring tasks when system usage is otherwise low. Daemons are recognized by the system as any processes whose parent process has a PID of one, which always represents the process init. init is always the first process that is started when a Linux computer is booted up (i.e., started), and it remains on the system until the computer is turned off. init adopts any process whose parent process dies (i.e., terminates) without waiting for the child process's status. Thus, the common method for launching a daemon involves forking (i.e., dividing) once or twice, and making the parent (and grandparent) processes die while the child (or grandchild) process begins performing its normal function. Two good references: http://www.linfo.org/daemon.html http://en.wikipedia.org/wiki/Daemon_(computing)
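As a quick illustration (my addition, not part of the quoted references): since daemons end up re-parented to PID 1, you can list likely daemons by filtering on the parent PID - though on modern systems some non-daemon children of init will show up too:

# Show processes whose parent is init (PID 1) - typically daemons
ps -eo pid,ppid,comm | awk '$2 == 1'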
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118201/" ] }
207,732
Is there such a feature or can it be emulated reasonably easily?I want the same behavior except it should return where set -e would have caused a call to exit .
A subshell might be useful:

func() {
(
  set -e
  echo a
  ehco b
  echo c
)
}
func
func
func

This script produces:

a
script.sh: line 3: ehco: command not found
a
script.sh: line 3: ehco: command not found
a
script.sh: line 3: ehco: command not found

Alternatively you might be interested in this try/catch implementation in bash .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
207,754
My ThinkPad T430 has no visible indicator showing whether Num Lock/Caps Lock is on or off. Is there a way to show an on-screen notification when they are turned on/off?
You can try getting info with xset :

xset q | grep Caps

Result:

00: Caps Lock:   off    01: Num Lock:    on     02: Scroll Lock: off

But if no X you can try kbdinfo :

kbdinfo gkbled

Result:

scrolllock:off numlock:on capslock:off

Edit: If you want to change states with xset you may check the following answer . Or you can change state using xdotool :

xdotool key Caps_Lock

For an onscreen notifier you may check key-mon . You can also try the following script:

#!/bin/bash
# lockkey.sh
sleep .2
case $1 in
  'num')
    mask=2
    key="Num"
    ;;
  'caps')
    mask=1
    key="Caps"
    ;;
esac
value="$(xset q | grep 'LED mask' | awk '{ print $NF }')"
if [ $(( 0x$value & 0x$mask )) == $mask ]
then
  output="$key Lock is on"
else
  output="$key Lock is off"
fi
notify-send "$output"

You can copy the script to /usr/local/bin and bind Caps to run it as:

/usr/local/bin/lockkey.sh caps

and/or Num as:

/usr/local/bin/lockkey.sh num
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117312/" ] }
207,782
When I view the length and width of my terminal emulator with stty size then it is 271 characters long and 71 lines tall. When I log into another server over SSH and execute stty size , then it is also 271 characters long and 71 lines tall. I can even log into some Cisco IOS device and the terminal is still 271 characters long and 71 lines tall:

C1841#show terminal | i Len|Wid
Length: 71 lines, Width: 271 columns
C1841#

Now if I resize my terminal emulator (Gnome Terminal) window on the local machine, both stty size on the remote server and "show terminal" in IOS show a different line length and number of lines. How are terminal length and width forwarded over SSH and telnet?
The telnet protocol, described in RFC 854 , includes a way to send in-band commands, consisting of the IAC character , '\255' , followed by several more bytes. These commands can do things like send an interrupt to the remote, but typically they're used to send options . A detailed look at an exchange that sends the terminal type option can be found in Microsoft Q231866 . The window size option is described in RFC 1073 . The client first sends its willingness to send an NAWS option. If the server replies DO NAWS , the client can then send the NAWS option data, which is comprised of two 16-bit values. Example session, on a 47 row 80 column terminal:

telnet> set options
Will show option processing.
telnet> open localhost
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SENT WILL NAWS
RCVD DO NAWS
SENT IAC SB NAWS 0 80 (80) 0 47 (47)

The ssh protocol is described in RFC 4254 . It consists of a stream of messages. One such message is "pty-req" , which requests a pseudo-terminal, and its parameters include the terminal height and width.

byte      SSH_MSG_CHANNEL_REQUEST
uint32    recipient channel
string    "pty-req"
boolean   want_reply
string    TERM environment variable value (e.g., vt100)
uint32    terminal width, characters (e.g., 80)
uint32    terminal height, rows (e.g., 24)
uint32    terminal width, pixels (e.g., 640)
uint32    terminal height, pixels (e.g., 480)
string    encoded terminal modes

The telnet and ssh clients will catch the SIGWINCH signal, so if you resize a terminal window during a session, they will send an appropriate message to the server with the new size. Ssh sends the Window Dimension Change Message:

byte      SSH_MSG_CHANNEL_REQUEST
uint32    recipient channel
string    "window-change"
boolean   FALSE
uint32    terminal width, columns
uint32    terminal height, rows
uint32    terminal width, pixels
uint32    terminal height, pixels
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
207,835
I want to insert 5 blank lines after every line in my input file. foo.txt :

line 1
line 2
line 3

out.txt :

line 1
(5 blank lines)
line 2
(5 blank lines)
line 3
...

Solaris 5.10, nawk or sed .
That's the job for sed :

sed -e 'G;G;G;G;G' file

With awk :

nawk -vORS='\n\n\n\n\n\n' 1 file

Or a shorter version:

awk 'ORS="\n\n\n\n\n\n"' file

or avoid setting ORS for each input line:

awk 'BEGIN{ORS="\n\n\n\n\n\n"};1' file
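An equivalent spelled-out awk variant (my addition), which may be easier to read than setting ORS - print the record followed by five extra newlines (print itself supplies the sixth):

awk '{ print $0 "\n\n\n\n\n" }' file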
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115560/" ] }
207,907
I'm using Centos 6.5 and when I want to install packages from yum I get this error: GPG key retrieval failed: [Errno 14] Could not open/read file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puias How can I fix this?
This error happens because you have some YUM repository configuration in /etc/yum.repos.d/ that lists a GPG key like this:

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puias

This configuration is telling YUM that the GPG key for the repository exists on disk. The error you get from YUM is YUM letting you know that it couldn't find the GPG key at the path /etc/pki/rpm-gpg/RPM-GPG-KEY-puias . So, by manually writing the GPG key to /etc/pki/rpm-gpg/RPM-GPG-KEY-puias like you did, YUM was then able to find the key at that path. Alternatively, you could have set gpgkey to the URL of the key, like this:

gpgkey=http://springdale.math.ias.edu/data/puias/6/x86_64/os/RPM-GPG-KEY-puias

in your repository configuration. GPG and YUM/RPM can be quite tricky. If you are curious about how more of the internals work, check out this blog post .
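A third option (my addition, sketched from the URL in this answer - confirm it matches your release) is to import the key directly into the RPM keyring; once the key is already imported, yum should not need to fetch the gpgkey file again:

# Import the vendor key into the RPM keyring (URL as in the answer above)
sudo rpm --import http://springdale.math.ias.edu/data/puias/6/x86_64/os/RPM-GPG-KEY-puias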
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/207907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63869/" ] }
207,919
> brew install moreutils
==> Downloading https://homebrew.bintray.com/bottles/moreutils-0.55.yosemite.bottle.tar.gz
######################################################################## 100.0%
==> Pouring moreutils-0.55.yosemite.bottle.tar.gz
/usr/local/Cellar/moreutils/0.55: 67 files, 740K

sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing the output file. This allows constructing pipelines that read from and write to the same file. I don't understand. Please give me some useful examples. What does "soaks up" mean?
Assume that you have a file named input , and you want to remove all lines starting with # in input . You can get all lines that don't start with # using:

grep -v '^#' input

But how do you make the changes to input ? With the standard POSIX toolchest, you need to use a temporary file, something like:

grep -v '^#' input >/tmp/input.tmp
mv /tmp/input.tmp ./input

With shell redirection:

grep -v '^#' input >input

will truncate input before you read from it. With sponge , you can:

grep -v '^#' input | sponge input
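Another tiny usage example (my addition) in the same spirit - deduplicate a file in place, something a plain > redirect would destroy:

# Sort and deduplicate 'input' in place; sponge buffers everything first
sort -u input | sponge input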
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/207919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26612/" ] }
207,935
I have two Linux systems communicating over sockets (Desktop and ARM-based development board). I want to restart (or reset) my client application (running on a development board) when server sends a particular predefined message. I don't want to restart (reboot) Linux, I just want that client application restart itself automatically. I am unable to understand how it should be done.
The normal way to do this is to let your program exit, and use a monitoring system to restart it. The init program offers such a monitoring system. There are many different init programs (SysVinit, BusyBox, Systemd, etc.), with completely different configuration mechanisms (always writing a configuration file, but the location and the syntax of the file differs), so look up the documentation of the one you're using. Configure init to launch your program at boot time or upon explicit request, and to restart it if it dies. There are also fancier monitoring programs but you don't sound like you need them. This approach has many advantages over having the program do the restart by itself: it's standard, so you can restart a bunch of services without having to care how they're made; it works even if the program dies due to a bug. There's a standard mechanism to tell a process to exit: signals . Send your program a TERM signal. If your program needs to perform any cleanup, write a signal handler. That doesn't preclude having a program-specific command to make it shut down if you have an administrative channel to send it commands like this.
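As an illustration only (not from the answer, with placeholder names and paths), here is how a minimal systemd unit that restarts the client whenever it exits could be set up; check systemd.service(5) for the exact semantics of your board's systemd version:

# Hypothetical unit name and binary path
cat >/etc/systemd/system/myclient.service <<'EOF'
[Unit]
Description=Example client restarted on exit

[Service]
ExecStart=/usr/local/bin/myclient
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable myclient.service
systemctl start myclient.service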
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66912/" ] }
207,957
#!/bin/bash
function0()
{
  local t1=$(exit 1)
  echo $t1
}
function0

echo prints an empty value. I expected: 1 . Why doesn't the t1 variable get assigned the exit command's return value - 1 ?
local t1=$(exit 1) tells the shell to: run exit 1 in a subshell; store its output (as in, the text it outputs to standard output) in a variable t1 , local to the function. It's thus normal that t1 ends up being empty. ( $() is known as command substitution .) The exit code is always assigned to $? , so you can do

function0(){
  (exit 1)
  echo "$?"
}

to get the effect you're looking for. You can of course assign $? to another variable:

function0(){
  (exit 1)
  local t1=$?
  echo "$t1"
}
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/207957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
207,974
Using the following command. rsync --archive --delete --partial --progress --recursive --no-links --no-devices --quiet source target Using --no-links and --no-devices already. Getting the error messages such as this. rsync: mknod "/mnt/shared/backup/var/spool/postfix/dev/log" failed: Operation not permitted (1) Makes rsync exit non-zero. This is bad. Breaks my backup script. (I don't want to use ignore this error using || true in case rsync would fail for "legitimate" reasons such as no disk space left.) In this example, it's a socket file. I don't care about this kind of special files. Can I make rsync ignore/skip those?
rsync -a --no-specials --no-devices would tell rsync to skip these files. It will still print an information message, but it will return 0 if no other error occurs. If there's a set of known paths that you don't want to transfer, you could exclude them altogether. Also, do pass the -x option to skip all mounted filesystems (including /dev , which takes care of the biggest offender), and if there are multiple on-disk filesystems, list all the mount points (e.g. rsync -ax / /home /destination ).

rsync -ax --exclude='/var/spool/postfix/dev/*' / /mnt/shared/backup

If none of that is satisfactory, make a list of files you want to skip. Beware that if some of the file names are under control of an adversary, they could cause some files to be omitted from a backup. For example, if they create a directory whose name is a newline and create a named socket called * inside it, then using the output of find -type s as an exclude list would result in /* being excluded. To prevent such problems, keep problematic names out of the exclude list.

{ cd /path/to/source &&
  find . -name '*[\[?*]*' -prune -o \
    \( -type b -o -type c -o -type p -o -type s \) -print |
  sed 's/^\.//'
} | rsync -a --exclude-from=- /path/to/source /destination
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207974", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49297/" ] }
207,992
I want to be able to get the received signal strength indication from my computer's wifi interface, ideally as expressed in dBm . This article explains what I am after.
There are 2 commands you can use that can give you the RSSI value. You can first cat the /proc/net/wireless file and get the results there. This uses the least amount of resources of the 2 methods.

$ /bin/cat /proc/net/wireless
Inter-| sta-|   Quality        |   Discarded packets               | Missed | WE
 face | tus | link level noise |  nwid  crypt   frag  retry   misc | beacon | 22
 wlo1: 0000   70.  -31.  -256        0      0      0      0     25        0

The other alternative is to use iwconfig/iwlist (my wireless interface is wlo1, so replace with the appropriate interface name):

$ /sbin/iwconfig wlo1
wlo1      IEEE 802.11  ESSID:"COD PUBLIC WIRELESS"
          Mode:Managed  Frequency:5.745 GHz  Access Point: 40:E3:D6:63:BC:B0
          Bit Rate=866.7 Mb/s   Tx-Power=22 dBm
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Power Management:on
          Link Quality=67/70  Signal level=-43 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:27   Missed beacon:0

$ iwlist wlo1 scanning
wlo1      Scan completed :
          Cell 01 - Address: 40:E3:D6:63:BC:B0
                    Channel:149
                    Frequency:5.745 GHz
                    Quality=70/70  Signal level=-38 dBm
                    Encryption key:off
                    ESSID:"COD PUBLIC WIRELESS"
                    Bit Rates:12 Mb/s; 24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                    Mode:Master
                    Extra:tsf=00000075dccaa121
                    Extra: Last beacon: 81624ms ago
                    IE: Unknown: 0013434F44205055424C494320574952454C455353
                    IE: Unknown: 010598B048606C
                    IE: Unknown: 030195
                    IE: Unknown: 2D1AEF091BFFFFFFFF00000000000000000000000000000000000000
                    IE: Unknown: 3D1695050400000000000000000000000000000000000000
                    IE: Unknown: 4A0E14000A002C01C800140005001900
                    IE: Unknown: 7F080100080000000040
                    IE: Unknown: BF0CB1798B33AAFF0000AAFF0000
                    IE: Unknown: C005019B000000
                    IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
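If you want just the dBm number for scripting (my addition; the field positions are assumed from the /proc/net/wireless header shown above), awk can pull the level column:

# Skip the two header lines; column 4 is the signal level in dBm
awk 'NR > 2 { printf "%s %d dBm\n", $1, $4 }' /proc/net/wireless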
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/207992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63940/" ] }
208,047
I've been exploring and learning about Linux via VirtualBox for a few months and have decided that I want to make it my main OS and install it on my hard drive. I have a number of files, packages and settings (such as changes to .bashrc) that I'd like to bring over when I install it directly to my HDD, so how can I do this? Also, right now I'm running Ubuntu; if I decide to switch to a similar Debian- or RPM/RHEL-based distro, would it be the same process? What considerations would I have to take into account, if any?
Package management is one of the main differentiators between distributions. Between unrelated distributions, you won't be able to do anything automatic. Different distributions break down software into different sets of packages and use different names. Between machines running the same version of the same distribution, you can achieve a similar installation by reproducing the list of installed packages. On systems using apt , such as Debian and derivatives (Ubuntu, Mint, …), use apt-clone . See How do I replicate installed package selections from one Debian system to another? (Debian Wheezy) for the exact commands. In a nutshell, on the old machine:

sudo apt-get install apt-clone
apt-clone clone foo

Copy foo.apt-clone.tar.gz to the new machine and run

sudo apt-get install apt-clone
sudo apt-clone restore foo.apt-clone.tar.gz

apt-clone may work between related distributions, e.g. Debian and Ubuntu. Use restore-new-distro instead of restore in that case. If that fails, use the manual method with dpkg --get-selections and apt-mark , and fiddle with the package list until apt is satisfied. For your own settings, it's simpler: just copy the dot files from your home directory. As a rule, configure things that aren't related to the hardware in your account, not system-wide; that will make it easy to copy them to another machine.
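For reference, a rough sketch (my addition) of the manual method mentioned above; expect to hand-edit the list when package names differ between releases:

# On the old machine:
dpkg --get-selections > packages.list

# On the new machine:
sudo dpkg --set-selections < packages.list
sudo apt-get dselect-upgrade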
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208047", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118378/" ] }
208,053
I have my Linux/Debian/Sid amd64 (with i7 3770K, 16Gb RAM, 1 SSD + 2 hard disks) PC with xen (notably I have installed the package xen-linux-system-amd64 ), so

sudo xen list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 16016     8     r-----    2634.8

I am understanding that the Dom0 is my Linux 4.0 kernel & system; I have xen-hypervisor-4.5-amd ... I have a disk with two partitions for FreeBSD

sudo fdisk -l /dev/sdd
Disk /dev/sdd: 465.8 GiB, 500106780160 bytes, 976771055 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x624aeae3

Device     Boot     Start       End   Sectors  Size Id Type
/dev/sdd1            2048 754976767 754974720  360G 83 Linux
/dev/sdd2       754976768 773851135  18874368    9G 82 Linux swap / Solaris
/dev/sdd3       773851136 792725503  18874368    9G b8 BSDI swap
/dev/sdd4       792725504 976771054 184045551 87.8G b7 BSDI fs

(BTW, this is not my system disk; /dev/sdd1 is for my /xtra Linux ext4 file system, and /dev/sdd2 is my second swap partition) I would like to run FreeBSD, probably only on the command line since I don't need to run an X11 server under FreeBSD (preferably FreeBSD 10.1 for the x86-64 architecture), under xen using /dev/sdd4 for its root (and only) filesystem and /dev/sdd3 for its swap area. But I am not sure of how to proceed next. Should I download FreeBSD-10.1-RELEASE-amd64-bootonly.iso or what? What are the next steps to run it under xen? The http://wiki.xen.org/wiki/How_to_Install_a_FreeBSD_domU_on_a_Linux_Host page is not helpful enough (and https://www.freebsd.org/doc/handbook/virtualization-guest.html doesn't mention xen). FWIW, I am an old Unixer (my first Unix was SunOS3.2 on a Sun3/160 in the mid 1980s) and my main motivation for trying FreeBSD is to check that my MELT software (a GPLv3 plugin for recent GCC to easily extend and customize GCC in a Lisp-like language) can be used on FreeBSD.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50557/" ] }
208,095
Bash code to print all folders:

for f in ~/*;do
if [ $f == '/home/sk/.' -o $f == '/home/sk/..' ]; then
    true
 else
    echo "$f"
 fi
done

It works on bash. When I ran the code on z shell, it threw an error:

= not found

Then I converted [ into [[ and ] into ]] to avoid this error in z shell and ran it on z shell. It threw the next error:

condition expected: $f

With [[ and ]] , bash also throws an error:

syntax error in conditional expression
syntax error near `-o'

Is there a POSIX-standard way to do string comparison in shell that works across shells?
There are various issues here. First, == is not standard, the POSIX way is = . Same goes for the -o . This one will work on both bash and zsh:

for f in ~/*;do
  if [ "$f" = '/home/sk/.' ] || [ "$f" = '/home/sk/..' ]; then
    true
  else
    echo "$f"
  fi
done

Note that your if is unneeded, dotfiles are ignored by default in both bash and zsh. You can simply write:

for f in ~/*; do echo "$f"; done

Or even

printf "%s\n" ~/*
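Another strictly POSIX option (my addition) is case , which avoids test operators entirely and behaves the same in bash, zsh, and dash:

for f in ~/*; do
  case $f in
    /home/sk/.|/home/sk/..) ;;   # skip these two entries
    *) echo "$f" ;;
  esac
done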
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/208095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
208,105
The output of my program has a .raw file extension. If I try to open this with less I get: No isoinfo availableInstall mkisofs to view ISO images The file isn't an image file, it's just text. Is there a way to tell less that the file should be opened as plain text?
The attempt to use isoinfo comes from lesspipe , which is generally used as a helper for less via the LESSOPEN variable. Running LESSOPEN= less file.raw will open file.raw without interpretation.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98726/" ] }
208,112
I'm observing some weird behavior when using set -e ( errexit ) and set -u ( nounset ) along with ERR and EXIT traps. They seem related, so putting them into one question seems reasonable.

1) set -u does not trigger ERR traps

Code:

#!/bin/bash
trap 'echo "ERR (rc: $?)"' ERR
set -u
echo ${UNSET_VAR}

Expected: ERR trap gets called, RC != 0
Actual: ERR trap is not called, RC == 1
Note: set -e does not change the result

2) Using set -eu the exit code in an EXIT trap is 0 instead of 1

Code:

#!/bin/bash
trap 'echo "EXIT (rc: $?)"' EXIT
set -eu
echo ${UNSET_VAR}

Expected: EXIT trap gets called, RC == 1
Actual: EXIT trap is called, RC == 0
Note: When using set +e , the RC == 1. The EXIT trap returns the proper RC when any other command throws an error.

Edit: There is a SO post on this topic with an interesting comment suggesting that this might be related to the Bash version being used. Testing this snippet with Bash 4.3.11 results in an RC=1, so that's better. Unfortunately upgrading Bash (from 3.2.51) on all hosts is not possible at the moment, so we have to come up with some other solution. Can anyone explain either of these behaviors? Searching these topics was not very successful, which is rather surprising given the number of posts on Bash settings and traps. There is one forum thread , though, but the conclusion is rather unsatisfying.
From man bash :

    set -u
        Treat unset variables and parameters other than the special parameters "@" and "*" as an error when performing parameter expansion. If expansion is attempted on an unset variable or parameter, the shell prints an error message, and, if not -i nteractive, exits with a nonzero status.

POSIX states that, in the event of an expansion error , a non-interactive shell shall exit when the expansion is associated with either a shell special builtin (which is a distinction bash regularly ignores anyway, and so maybe is irrelevant) or any other utility besides. Consequences of Shell Errors :

    An expansion error is one that occurs when the shell expansions defined in Word Expansions are carried out (for example, "${x!y}" , because ! is not a valid operator); an implementation may treat these as syntax errors if it is able to detect them during tokenization, rather than during expansion. [A]n interactive shell shall write a diagnostic message to standard error without exiting.

Also from man bash :

    trap ... ERR
        If a sigspec is ERR , the command arg is executed whenever a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero exit status, subject to the following conditions:
        - The ERR trap is not executed if the failed command is part of the command list immediately following a while or until keyword...
        - ...part of the test in an if statement...
        - ...part of a command executed in a && or || list except the command following the final && or || ...
        - ...any command in a pipeline but the last...
        - ...or if the command's return value is being inverted using ! .
        These are the same conditions obeyed by the errexit -e option.

Note above that the ERR trap is all about the evaluation of some other command's return. But when an expansion error occurs, there is no command run to return anything. In your example, echo never happens - because while the shell evaluates and expands its arguments it encounters an -u nset variable, which has been specified by explicit shell option to cause an immediate exit from the current, scripted shell. And so the EXIT trap, if any, is executed, and the shell exits with a diagnostic message and exit status other than 0 - exactly as it should do. As for the rc: 0 thing, I expect that is a version-specific bug of some kind - probably to do with the two triggers for the EXIT occurring at the same time and the one getting the other's exit code (which should not occur). And anyway, with an up-to-date bash binary as installed by pacman :

bash <<\IN
printf "shell options:\t$-\n"
trap 'echo "EXIT (rc: $?)"' EXIT
set -eu
echo ${UNSET_VAR}
IN

I added the first line so you can see that the shell's conditions are those of a scripted shell - it is not interactive. The output is:

shell options: hB
bash: line 4: UNSET_VAR: unbound variable
EXIT (rc: 1)

Here are some relevant notes from recent changelogs:

- Fixed a bug that caused asynchronous commands to not set $? correctly.
- Fixed a bug that caused error messages generated by expansion errors in for commands to have the wrong line number.
- Fixed a bug that caused SIGINT and SIGQUIT to not be trappable in asynchronous subshell commands.
- Fixed a problem with interrupt handling that caused a second and subsequent SIGINT to be ignored by interactive shells.
- The shell no longer blocks receipt of signals while running trap handlers for those signals, and allows most trap handlers to be run recursively (running trap handlers while a trap handler is executing).
I think it is either the last or the first that is most relevant - or possibly a combination of the two. A trap handler is by its very nature asynchronous because its whole job is to wait for and handle asynchronous signals . And you trigger two simultaneously with -eu and $UNSET_VAR . And so maybe you should just update, but if you like yourself, you'll do it with a different shell altogether.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18549/" ] }
208,140
I am trying to delete all the files with a space in their names. I am using the following command, but it is giving me an error.

Command: ls | egrep '. ' | xargs rm

If I use only the ls | egrep '. ' command, it gives me all the file names with spaces in them. But when I try to pass the output to rm, all the spaces (leading or trailing) get deleted, so my command does not execute properly. Any pointers on how to delete files having at least one space in their name?
You can use standard globbing on the rm command: rm -- *\ * This will delete any file whose name contains a space; the space is escaped so the shell doesn't interpret it as a separator. Adding -- will avoid problems with filenames starting with dashes (they won’t be interpreted as arguments by rm ). If you want to confirm each file before it’s deleted, add the -i option: rm -i -- *\ *
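If you'd rather not rely on globbing, a hedged alternative (my addition) with GNU find - note that -maxdepth is a GNU extension, not POSIX:

# Interactively remove regular files in the current directory
# whose names contain at least one space
find . -maxdepth 1 -type f -name '* *' -exec rm -i {} +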
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/208140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118444/" ] }
208,184
Sometimes I use $PROJECT_HOME/* to delete all files in the project. When the environment variable PROJECT_HOME is not set (because I did su and the new user doesn't have this environment variable set), it starts deleting all files from the root folder. This is apocalyptic. How can I configure bash to throw an error when I use an undefined environment variable in the shell?
In POSIX shell, you can use set -u :

#!/bin/sh
set -u
: "${UNSET_VAR}"

or using Parameter Expansion :

: "${UNSET_VAR?Unset variable}"

In your case, you should use :? instead of ? to also fail on set but empty variables:

rm -rf -- "${PROJECT_HOME:?PROJECT_HOME empty or unset}"/*
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/208184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
208,202
Input:

firstline
secondline
thirdline

...some magic happens here... :)

Output:

insertedtextfirstline
secondline
thirdline

Question : How can I insert the insertedtext at the start of the first line in a file?
With GNU sed : sed -i '1s/^/insertedtext/' file This replaces the beginning of the first line with the inserted text. -i replaces the text in file rather than sending the modified text to the standard output.
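If you don't have GNU sed, a sketch (my addition) of the same effect with plain cat and a temporary file; printf emits no trailing newline, so the text fuses with the first line:

printf 'insertedtext' | cat - file > file.new && mv file.new file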
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112826/" ] }
208,257
I'm trying to add a debug option to a script of mine. Normally I want to hide any output, like warnings etc, so I put >/dev/null 2>&1 behind a lot of commands. Now, when I want to debug my script, I would have to manually remove those, or put every command inside an if ... fi testing for a $DEBUG variable. I thought putting >/dev/null 2>&1 inside a variable ( $REDIR ) and writing command arg1 $REDIR would do the trick. If I want to debug, I would only need to leave $REDIR empty. But a short test on my shell showed me that it won't work that way:

~$ echo "bla" >/dev/null 2>&1
~$ REDIR=>/dev/null 2>&1
~$ echo "bla" $REDIR
bla
~$

Using " or ' around >/dev/null 2>&1 didn't work either, for obvious reasons. So why does my idea not work here? Did I misunderstand something about putting commands etc. into variables and calling them?
For such a purpose I usually define a function like run . This can correctly handle args with spaces and others in most cases.

#!/bin/bash

run() {
    if $DEBUG; then
        v=$(exec 2>&1 && set -x && set -- "$@")
        echo "#${v#*--}" "$@"
    else
        "$@" >/dev/null 2>&1
    fi
}

DEBUG=false
run echo "bla"
DEBUG=true
run echo "bla"
run printf "%s . %s . %s\n" bla "more bla" bla

Output:

$ bash debug.sh
# echo bla
bla
# printf '%s . %s . %s\n' bla 'more bla' bla
bla . more bla . bla
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/208257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88357/" ] }
208,260
While tinkering with a Linux VM I regularly get kernel panics, that push up all the helpful output with stuff I don't understand and that is probably not helpful to me. The panics mostly occur during the initramfs phase. I'm using VirtualBox . The normal Shift + Pg Up does not work (in my case). Is there another way to scroll back up and look at the output of whatever came before?
Serial port

The serial port is an old and reliable communication protocol hardware that the Linux kernel supports and most emulators emulate. You can stream the kernel messages to a host file or console through it:

- VirtualBox: How does one Capture the Entire Kernel Panic on Boot | Stack Overflow
- QEMU:
  - to console: How to switch to qemu monitor console when running with "-curses" | Stack Overflow
  - to file: Write QEMU booting virtual machine output to a file | Super User

Here's a minimal setup to reproduce the problem: https://github.com/cirosantilli/linux-kernel-module-cheat/blob/b366bac0c5410ceef7f2b97f96d93d722c4d9ea6/kernel_module/panic.c

Real hardware: the serial port is not exposed on most modern laptops, which is a shame... but on desktops it looks like this: [photo of a desktop serial port; Source]. And on the Raspberry Pi: [photo of a Raspberry Pi serial connection]. More details at: What is the difference between ttys0, ttyUSB0 and ttyAMA0 in Linux?

Serial alternatives

There are even fancier methods mentioned at: Determining cause of Linux kernel panic | Unix & Linux Stack Exchange

- netdump: sends the trace over the network. Assumes the panic didn't break networking, which is more likely than the serial. The advantages over serial are:
  - works for systems that don't have serial exposed, such as modern laptops
  - serial cables have quite limited maximum wire lengths, which is problematic if you want to have all the boards of your company in a remote room to share resources across developers. There are however serial connectors with an Ethernet server which I would recommend instead if your target exposes serial, e.g. this one: [photo of a serial-to-Ethernet console server]
- kdump: boots a secondary Linux kernel that inspects the panicked kernel. What could possibly go wrong?

Those methods are more flexible, but less reliable. See also: Scrolling up the failed screen with kernel panic | Super User
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88357/" ] }
208,309
Say I have to use quotes to encapsulate subshell output like: DATA="$(cat file.hex | xxd -r)" But I need to nest this kind of stuff like: DATA="$(cat file.hex | xxd -r | tr -d \"$(cat trim.txt)\")" I can't use single quotes because those do not expand variables that are inside of them. Escaping quotes doesn't work because they are just treated as passive text. How do I handle this?
You don't need to escape the quotes inside a subshell, since the current shell doesn't interpret them (it doesn't interpret anything from $( to ) , actually), and the subshell doesn't know about any quotes that are above. Quoting a subshell at variable assignment is unnecessary too, for more info see man bash .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89807/" ] }
208,332
I have an input file which contains both Unix (LF) and Windows (CR/LF) style newlines. (Specifically, it's XML from a Linux system, but it contains some raw HTTP headers, and HTTP prefers CRLF for headers):

<response_page cause="default">
  <response_type>custom</response_type>
  <response_header>HTTP/1.1 200 OK^MCache-Control: no-cache^MPragma: no-cache^MConnection: close</response_header>

I'm working on a gawk script to go through this file to make some simple tweaks to the XML* and the only problem is that it reads both LF and CRLF as valid RS but only outputs LF regardless of what was there... In essence, it strips the CRs. I've tried various things, the most ambitious being regex matching for RS and printing RT:

BEGIN { RS = "\r\n|\n"; go = "no" }
(go ~ /yes/) {
    sub(/false/, "true", $0)
    go = "no"
}
($0 ~ /<signature signature_id="200000017">/) {
    print "Found signature!"
    go = "yes"
}
{ printf $0 RT }

I would greatly appreciate any pointers on getting gawk to reproduce mixed-platform RS terminators.

* In this case, the simple tweak is to change 'false' to 'true' on the line following the line with the correct signature ID. I fully realize that using an XML parser would be the correct way to do this, but for such a lightweight need I am trying to avoid buying into the howl of pain and angst that is XML parsing.

Update: As it turns out, this solution works - when run under Linux. When run under Cygwin gawk, on Windows, the CRLF/LF distinction is apparently muted, and it does not work as expected. I am awarding the answer points to Peter.O, even though he essentially reiterated what I was trying, because he did so in a thorough manner that made me question my assumptions when I realized we were doing the same thing and mine didn't work.
You can use the built-in variable RT :

    RT is set each time a record is read. It contains the input text that matched the text denoted by RS, the record separator. This variable is a gawk extension.

printf '%s\n' LF CRLF$'\r' | gawk 'BEGIN { RS = "\r\n|\n" } { printf($0 RT) }'

Output when piped to sed -n l - which shows CR as \r , and end-of-line as $ - which, to sed , means that the next character is \n (or end-of-input):

LF$
CRLF\r$

However, if you want to toggle the terminator from CRLF to LF or vice-versa, the two actions are:

printf '%s\n' was-LF was-CRLF$'\r' |
  gawk 'BEGIN { RS = "\r\n|\n" }
        RT == "\r\n" { printf($0 "\n") }
        RT == "\n"   { printf($0 "\r\n") }'

Output when piped to sed -n l :

was-LF\r$
was-CRLF$

Note: You will need to use if for the tests when they aren't the first lines of (main section) code:

gawk 'BEGIN { RS = "\r\n|\n" }
      {
        # some processing code here (before the tests)
        if( RT == "\r\n" ) { printf($0 "\n") }
        if( RT == "\n" )   { printf($0 "\r\n") }
      }'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/208332", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66633/" ] }
208,334
I'd like to write the following test in an installer script 1 :

if [ -n "`/etc/grub.d/30_os-prober`" ]; then
  install_dual_boot
else
  install_linux_only
fi

However, it's also possible that 30_os-prober produces no output because it somehow failed to complete. If 30_os-prober fails with a non-zero exit status, it would be safer to assume a dual-boot situation. How can I check that 30_os-prober produced no output successfully ? Basically, I would like to obtain an effect similar to…

if [ -n "`/etc/grub.d/30_os-prober`" ]; then
  # Do stuff for a dual-boot machine
  install_dual_boot
elif ! /etc/grub.d/30_os-prober >/dev/null; then
  # OS probe failed; assume dual boot out of abundance of caution
  install_dual_boot
else
  install_linux_only
fi

… but without running the same command twice.

1 Background information: 30_os-prober comes with GRUB2 , and the script I am writing is part of my custom configuration for FAI .
First of all, while they are functionally equivalent, $(…) is widely considered to be clearer than `…` - see this , this , and this . Secondly, you don’t need to use $? to check whether a command succeeded or failed. My attention was recently drawn to Section 2.9.1, Simple Commands of The Open Group Base Specifications for Shell & Utilities (Issue 7):

    A "simple command" is a sequence of optional variable assignments and redirections, in any sequence, optionally followed by words and redirections, terminated by a control operator. When a given simple command is required to be executed … ⋮ (blah, blah, blah …) If there is a command name, execution shall continue as described in Command Search and Execution . If there is no command name, but the command contained a command substitution, the command shall complete with the exit status of the last command substitution performed. …

For example, the exit status of the command ls -ld "module_$(uname).c" is the exit status from the ls , but the exit status of the command myfile="module_$(uname).c" is the exit status from the uname . So ferada’s answer can be streamlined a bit:

if output=$(/etc/grub.d/30_os-prober) && [ -z "$output" ]
# i.e., if 30_os-prober successfully produced no output
then
  install_linux_only
else
  install_dual_boot
fi

Note that it is good practice to use all-upper-case names only for environment variables (or variables to be visible throughout the script). Variables of limited scope are usually named in lower case (or, if you prefer, camelCase).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/208334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17535/" ] }
208,349
I am trying to create a script that will get one variable from the user and should print a pyramid as below:

*
**
***
****
*****

I used this script but it shows me numbers:

for i in {1..5}
do
a=${a}${i}
echo ${a}
done

The output:

1
12
123
1234
12345

How can I insert the "*" sign instead of numbers?
Simply append the * character to the a variable, instead of the loop counter:

for i in {1..5}
do
    a+='*'
    echo "${a}"
done

Note that a="${a}*" instead of a+='*' works just as well, but I think the += version is neater/clearer. If you want to do this with a while loop instead, you could do something like this:

while (( "${#a}" < 5 )); do
    a+='*'
    echo "$a"
done

${#a} expands to the length of the string in the a variable. Note that both of the above code snippets (as well as the code in the question) assume that the a var is empty or not set at the start of the snippet. If this is not the case, then you'll need to reinitialize it first:

a=

I am assuming you are using the bash shell. Here is the full manual. Here is the section on looping constructs.
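A variant (my addition) that builds each row from its width, with no accumulator variable at all, using printf padding plus tr:

for i in {1..5}; do
    # print $i spaces, then turn them into stars
    printf '%*s\n' "$i" '' | tr ' ' '*'
done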
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/208349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117440/" ] }
208,407
I often hear people refer to the Linux kernel as the Linux kernel image and I can't seem to find an answer on any search engine as to why it's called an image. When I think of an image I can only think of two things: either a copy of a disk or a photo. It sure as hell isn't a photo, so why is it referred to as an image?
The Unix boot process has (had) only limited capabilities of intelligently loading a program (relocating it, loading libraries etc). Therefore the initial program was an exact image, stored on disc, of what needed to be loaded into memory and "called" to get the kernel going. Only much later things like (de-)compression were added and although more powerful bootloaders are now in place, the image name has stuck.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/208407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118378/" ] }
208,412
My understanding is that during ssl negotiation, the client (i.e. curl) sends a list of ciphers to the server, and the server replies with its preferred choice. How do I see the list of ciphers that curl is sending?
There is a website that offers curl cipher request detection as a service:

curl https://www.howsmyssl.com/a/check

However, it does not accept all ciphers - if your curl doesn't offer at least one cipher from the list the site accepts, you will not be able to get a response at all.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5032/" ] }
208,435
I want to check the auth.log file for accepted sshd connections and execute an action if any are found. Here is my code:

tail -f /var/log/auth.log | awk '{ if($0 ~ /sshd/ && $0 ~ /Accepted/) { system("echo FOUND") } }'

For some reason, this produces no output. Can somebody explain why?
I think that your problem is related to the buffering of tail -f :

~$ tail auth.log | awk '{ if($0 ~ /sshd/ && $0 ~ /Accepted/) { system("echo FOUND") } }'
FOUND
FOUND

It works with tail , but fails with tail -f :

~$ tail -f auth.log | awk '{ if($0 ~ /sshd/ && $0 ~ /Accepted/) { system("echo FOUND") } }'
^C

A workaround you could use is a while loop to read each line of tail -f :

~$ tail -f auth.log | while read line
> do
>   echo $line | awk '{ if($0 ~ /sshd/ && $0 ~ /Accepted/) { system("echo FOUND")} }'
> done
FOUND
FOUND

Searching man awk for buffer , I found the -W option (but this is a mawk version…):

    -W interactive sets unbuffered writes to stdout and line buffered reads from stdin. Records from stdin are lines regardless of the value of RS .

also: mawk accepts abbreviations for any of these options, e.g., " -W i " …

~$ tail -f auth.log | awk -Wi '/sshd/ && /Accepted/ {system("echo FOUND")}'
FOUND
FOUND
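A related trick (my addition): if your awk is gawk, calling fflush() after each print defeats the output buffering without the per-line shell loop; with mawk you may still need -W interactive for the input side as shown above:

tail -f /var/log/auth.log |
  awk '/sshd/ && /Accepted/ { print "FOUND"; fflush() }'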
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/208435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67810/" ] }
208,436
My prompt string is printed using this statement:

printf '\033]0;%s@%s:%s\007' user host /home/user

Why does it need an escape character ( \033 ) and a bell character ( \007 )? When I ran the same command manually, it printed nothing. When I removed the escape characters and gave the command as

printf '%s@%s:%s' user host /home/user

it prints

user@host:/home/user

which is easier to understand. So, how do the escape characters \033 and \007 get converted to a shell prompt string?
Only \033 is an escape, and it initiates the escape sequence up to and including the ; : \033]0; . This initiates a string that sets the title in the titlebar of the terminal, and that string is terminated by the \007 special character. See man console_codes :

    It accepts ESC ] (OSC) for the setting of certain resources. In addition to the ECMA-48 string terminator (ST), xterm(1) accepts a BEL to terminate an OSC string. These are a few of the OSC control sequences recognized by xterm(1):

    ESC ] 0 ; txt ST    Set icon name and window title to txt.

That you don't see any changes is probably because your prompt sets the title back to the default title string on returning to the prompt. Try:

PROMPT_COMMAND= ; printf '\033]0;Hello World!\007'
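To see the sequence in action (my addition), you can wrap it in a small helper and watch the terminal's titlebar change:

# Set the terminal/window title to whatever arguments are given
settitle() { printf '\033]0;%s\007' "$*"; }
settitle "Hello World!"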
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/208436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
208,437
The official ssl docs list ciphers in a different format than curl takes. For instance, if I want curl to use the cipher TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, I have to pass it curl --ciphers ecdhe_rsa_3des_sha . I know what some of the mappings are, but not all of them - for instance, what do I have to pass to curl to get it to use cipher TLS_DHE_RSA_WITH_AES_128_GCM_SHA256? Is there anywhere I can find a document showing how the cipher names in the ssl docs map to the cipher names that curl accepts? Edit: I eventually have discovered that my curl is backed by NSS, not OpenSSL, and the problem is specifically because there is no good documentation on using NSS-backed curl, while it requires a different argument than OpenSSL does to use the same cipher. So my question is specific to NSS.
There is no documentation covering all of the conversions between the name of the cipher and the name that curl is expecting as an argument. Luckily, curl is open source, and the mapping is available in the source code . For the benefit of future searchers, I reproduce it more neatly here:

SSL2 cipher suites

<argument>              <name>
rc4                     SSL_EN_RC4_128_WITH_MD5
rc4-md5                 SSL_EN_RC4_128_WITH_MD5
rc4export               SSL_EN_RC4_128_EXPORT40_WITH_MD5
rc2                     SSL_EN_RC2_128_CBC_WITH_MD5
rc2export               SSL_EN_RC2_128_CBC_EXPORT40_WITH_MD5
des                     SSL_EN_DES_64_CBC_WITH_MD5
desede3                 SSL_EN_DES_192_EDE3_CBC_WITH_MD5

SSL3/TLS cipher suites

<argument>              <name>
rsa_rc4_128_md5         SSL_RSA_WITH_RC4_128_MD5
rsa_rc4_128_sha         SSL_RSA_WITH_RC4_128_SHA
rsa_3des_sha            SSL_RSA_WITH_3DES_EDE_CBC_SHA
rsa_des_sha             SSL_RSA_WITH_DES_CBC_SHA
rsa_rc4_40_md5          SSL_RSA_EXPORT_WITH_RC4_40_MD5
rsa_rc2_40_md5          SSL_RSA_EXPORT_WITH_RC2_CBC_40_MD5
rsa_null_md5            SSL_RSA_WITH_NULL_MD5
rsa_null_sha            SSL_RSA_WITH_NULL_SHA
fips_3des_sha           SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA
fips_des_sha            SSL_RSA_FIPS_WITH_DES_CBC_SHA
fortezza                SSL_FORTEZZA_DMS_WITH_FORTEZZA_CBC_SHA
fortezza_rc4_128_sha    SSL_FORTEZZA_DMS_WITH_RC4_128_SHA
fortezza_null           SSL_FORTEZZA_DMS_WITH_NULL_SHA

TLS 1.0: Exportable 56-bit Cipher Suites

<argument>              <name>
rsa_des_56_sha          TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA
rsa_rc4_56_sha          TLS_RSA_EXPORT1024_WITH_RC4_56_SHA

AES ciphers

<argument>                      <name>
dhe_dss_aes_128_cbc_sha         TLS_DHE_DSS_WITH_AES_128_CBC_SHA
dhe_dss_aes_256_cbc_sha         TLS_DHE_DSS_WITH_AES_256_CBC_SHA
dhe_rsa_aes_128_cbc_sha         TLS_DHE_RSA_WITH_AES_128_CBC_SHA
dhe_rsa_aes_256_cbc_sha         TLS_DHE_RSA_WITH_AES_256_CBC_SHA
rsa_aes_128_sha                 TLS_RSA_WITH_AES_128_CBC_SHA
rsa_aes_256_sha                 TLS_RSA_WITH_AES_256_CBC_SHA

ECC ciphers

<argument>                      <name>
ecdh_ecdsa_null_sha             TLS_ECDH_ECDSA_WITH_NULL_SHA
ecdh_ecdsa_rc4_128_sha          TLS_ECDH_ECDSA_WITH_RC4_128_SHA
ecdh_ecdsa_3des_sha             TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA
ecdh_ecdsa_aes_128_sha          TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA
ecdh_ecdsa_aes_256_sha          TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
ecdhe_ecdsa_null_sha            TLS_ECDHE_ECDSA_WITH_NULL_SHA
ecdhe_ecdsa_rc4_128_sha         TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
ecdhe_ecdsa_3des_sha            TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA
ecdhe_ecdsa_aes_128_sha         TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
ecdhe_ecdsa_aes_256_sha         TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
ecdh_rsa_null_sha               TLS_ECDH_RSA_WITH_NULL_SHA
ecdh_rsa_128_sha                TLS_ECDH_RSA_WITH_RC4_128_SHA
ecdh_rsa_3des_sha               TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA
ecdh_rsa_aes_128_sha            TLS_ECDH_RSA_WITH_AES_128_CBC_SHA
ecdh_rsa_aes_256_sha            TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
echde_rsa_null                  TLS_ECDHE_RSA_WITH_NULL_SHA
ecdhe_rsa_rc4_128_sha           TLS_ECDHE_RSA_WITH_RC4_128_SHA
ecdhe_rsa_3des_sha              TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
ecdhe_rsa_aes_128_sha           TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
ecdhe_rsa_aes_256_sha           TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
ecdh_anon_null_sha              TLS_ECDH_anon_WITH_NULL_SHA
ecdh_anon_rc4_128sha            TLS_ECDH_anon_WITH_RC4_128_SHA
ecdh_anon_3des_sha              TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA
ecdh_anon_aes_128_sha           TLS_ECDH_anon_WITH_AES_128_CBC_SHA
ecdh_anon_aes_256_sha           TLS_ECDH_anon_WITH_AES_256_CBC_SHA

new HMAC-SHA256 cipher suites specified in RFC

<argument>                          <name>
rsa_null_sha_256                    TLS_RSA_WITH_NULL_SHA256
rsa_aes_128_cbc_sha_256             TLS_RSA_WITH_AES_128_CBC_SHA256
rsa_aes_256_cbc_sha_256             TLS_RSA_WITH_AES_256_CBC_SHA256
dhe_rsa_aes_128_cbc_sha_256         TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
dhe_rsa_aes_256_cbc_sha_256         TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
ecdhe_ecdsa_aes_128_cbc_sha_256     TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
ecdhe_rsa_aes_128_cbc_sha_256       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

AES GCM cipher suites in RFC 5288 and RFC 5289

<argument>                          <name>
rsa_aes_128_gcm_sha_256             TLS_RSA_WITH_AES_128_GCM_SHA256
dhe_rsa_aes_128_gcm_sha_256         TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
dhe_dss_aes_128_gcm_sha_256         TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
ecdhe_ecdsa_aes_128_gcm_sha_256     TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
ecdh_ecdsa_aes_128_gcm_sha_256      TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
ecdhe_rsa_aes_128_gcm_sha_256       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
ecdh_rsa_aes_128_gcm_sha_256        TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256

So if you want to use the cipher TLS_DHE_RSA_WITH_AES_128_CBC_SHA , the command would be:

curl --ciphers dhe_rsa_aes_128_cbc_sha <url>

In order to specify multiple ciphers, separate the list with commas. So if you want to use the cipher TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 as well, the command would be:

curl --ciphers dhe_rsa_aes_128_cbc_sha,ecdh_rsa_aes_128_gcm_sha_256 <url>

To view a list of the ciphers that curl is using, you will need an external service - like this:

curl --ciphers ecdhe_rsa_aes_256_sha https://www.howsmyssl.com/a/check

Although NB, that service does not accept all ciphers, which means if you restrict the connection to a single cipher which is not accepted there, you will get an error "Cannot communicate securely with peer: no common encryption algorithm" instead of a response.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5032/" ] }
208,482
I want to view the content of a tarred file without extracting it. Scenario: I have a.tar and inside there is a file called ./x/y.txt . I want to view the content of y.txt without actually extracting a.tar .
It's probably a GNU specific option, but you could use the -O or --to-stdout to extract files to standard output $ tar -axf file.tgz foo/bar -O
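Applied to the scenario in the question (my addition, with the paths as given there), you would first list the archive to get the exact stored name, then print that entry to standard output:

# List contents; note the exact stored path of the member
tar -tf a.tar

# Print ./x/y.txt to stdout without extracting to disk
tar -xf a.tar -O ./x/y.txt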
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118663/" ] }
208,495
I was reading about SSH key authentication and setting it up with my 3 computers at home. I have one main computer, call it "A", and two others, call them "B" and "C". Now based on the documentation I've read, I would run ssh-keygen on B and C and put the public keys on computer A assuming I will always SSH into computer A, if I'm on B or C. But, I think the documentation examples I've read assumes only 1 home computer will be used with lets say some other outside computer. In my situation, does it make sense to just run ssh-keygen on one computer and copy the files over to the others? This way I only need to back up one set of keys? And when I log into an outside computer, I only have to set it up with 1 set of keys as well as opposed to setting it up with all three computers. Does this make sense? Any flaws or cautionary notes to consider? Thanks.
You can theoretically do it both ways, but each has its advantages and drawbacks. You can indeed create only 1 key, say it's "yours" (as a person), secure it somewhere and copy it to any computer you use. The advantage is that you can connect to A from wherever you go, as long as you possess your SSH private key. The drawback is that every time you copy your private key from one place to another, whatever the means, you increase the risk of it being read by someone eavesdropping on the connection. Worse, if computer C gets stolen, you have to regenerate a new key on all computers that use this key, and distribute the new one. On the other hand, using 1 key per user@computer has the advantage of more fine-grained control over "what" can connect "where". It's the most common way to do it. If, for example, you were to give computer C to your brother/sister/wife/husband/friend/dog, or to a thief (without your approval), you would just have to remove the key from A's ''authorized_keys'' file. So even if it means "more keys in authorized_keys", I suggest the second approach.
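For the per-machine approach, the setup on each client is just two commands - a sketch (my addition), where the key type and the comment label are your choice:

# On B (and again on C): generate a key and install it on A
ssh-keygen -t ed25519 -C "user@B"
ssh-copy-id user@A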
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/208495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99353/" ] }