source_id | question | response | metadata
---|---|---|---|
187,339 | I am using Mac OS X 10.9.4; the following is my script to copy files from the local machine to a different host:

#!/bin/bash
#!/usr/bin/expect
echo "I will fail if you give junk values!!"
echo " "
echo "Enter file name: "
read filePath
echo " "
echo "Where you want to copy?"
echo "Enter"
echo "1. if Host1"
echo "2. if Host2"
echo "3. if Host3"
read choice
echo " "
if [ $choice -eq "1" ]
then
    spawn scp filePath uname@host1:/usr/tmp
    expect "password"
    send "MyPassword\r"
    interact
elif [ $choice -eq "2" ]
then
    spawn scp filePath uname@host2:/usr/tmp
    expect "password"
    send "MyPassword\r"
    interact
elif [ $choice -eq "3" ]
then
    spawn scp filePath uname@host3:/usr/tmp
    expect "password"
    send "MyPassword\r"
    interact
else
    echo "Wrong input"
fi

When running this script I am getting the following:

./rcopy.sh: line 21: spawn: command not found
couldn't read file "password": no such file or directory
./rcopy.sh: line 23: send: command not found
./rcopy.sh: line 24: interact: command not found | Your script is attempting to combine two interpreters. You have both #!/bin/bash and #!/usr/bin/expect . That won't work. You can only use one of the two. Since bash was first, your script is being run as a bash script. However, within your script, you have expect commands such as spawn and send . Since the script is being read by bash and not by expect , this fails. You could get around this by writing separate expect scripts and calling them from your bash script, or by translating the whole thing to expect . The best way though, and one that avoids the horrible practice of keeping your passwords in plain text in a simple text file, is to set up passwordless ssh instead. That way, the scp won't need a password and you have no need for expect . First, create a public ssh key on your machine: ssh-keygen -t rsa You will be asked for a passphrase, which you will have to enter the first time you run any ssh command after each login. This means that for multiple ssh or scp commands, you will only have to enter it once. Leave the passphrase empty for completely passwordless access. Once you have generated your public key, copy it over to each computer in your network:

while read ip; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub user1@$ip
done < IPlistfile.txt

The IPlistfile.txt should be a file containing a server's name or IP on each line. For example:

host1
host2
host3

Since this is the first time you do this, you will have to manually enter the password for each IP, but once you've done that, you will be able to copy files to any of these machines with a simple: scp file user@host1:/path/to/file Remove the expect from your script. Now that you have passwordless access, you can use your script as:

#!/bin/bash
echo "I will fail if you give junk values!!"
echo " "
echo "Enter file name: "
read filePath
echo " "
echo "Where you want to copy?"
echo "Enter"
echo "1. if Host1"
echo "2. if Host2"
echo "3. if Host3"
read choice
echo " "
if [ $choice -eq "1" ]
then
    scp "$filePath" uname@host1:/usr/tmp
elif [ $choice -eq "2" ]
then
    scp "$filePath" uname@host2:/usr/tmp
elif [ $choice -eq "3" ]
then
    scp "$filePath" uname@host3:/usr/tmp
else
    echo "Wrong input"
fi | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/187339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104834/"
]
} |
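The final script in the answer above can be condensed with a case statement; a minimal sketch, reusing the question's placeholder user, hosts, and destination path (and still assuming passwordless ssh is set up as described):

#!/bin/bash
# "uname" and host1..host3 are the question's placeholders
read -p "Enter file name: " filePath
read -p "Copy to which host? (1/2/3): " choice
case "$choice" in
    1) scp "$filePath" uname@host1:/usr/tmp ;;
    2) scp "$filePath" uname@host2:/usr/tmp ;;
    3) scp "$filePath" uname@host3:/usr/tmp ;;
    *) echo "Wrong input" ;;
esac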
187,340 | I have an host with proxmox with single public ip and some virtual machine installed whit webservers and multiple doimains, the first VM is a proxy with haproxy that forward the request to other VM and in proxmox host i have this iptables script: iptables -Fiptables -P INPUT ACCEPTiptables -P FORWARD ACCEPTiptables -P OUTPUT ACCEPTiptables -A INPUT -p icmp --icmp-type echo-request -j DROPiptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADEiptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22100 -j DNAT --to-destination 192.168.1.100:22iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.100:80iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.100:443iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22101 -j DNAT --to-destination 192.168.1.101:22iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22102 -j DNAT --to-destination 192.168.1.102:22iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22103 -j DNAT --to-destination 192.168.1.103:22iptables-save > /etc/iptables.rules Internal lan is 192.168.1.0, the interface eth0 has public ip, the proxy is 192.168.1.100 and the other machine is 101, 102, 103 etc.. In another VM i have installed a website that works if i connect from external, instead if i launch curl www.mydomain.com from the same VM i have curl: (7) Failed connect to www.mydomain.com:80 ; Connection refused, i think it is a problem of iptables | Your script is attempting to combine two interpreters. You have both #!/bin/bash and #!/usr/bin/expect . That won't work. You can only use one of the two. Since bash was first, your script is being run as a bash script. However, within your script, you have expect commands such as spawn and send . Since the script is being read by bash and not by expect , this fails. You could get around this by writing different expect scripts and calling them from your bash script or by translating the whole thing to expect . The best way though, and one that avoids the horrible practice of having your passwords in plain text in a simple text file, is to set up passwordless ssh instead. That way, the scp won't need a password and you have no need for expect : First, create a public ssh key on your machine: ssh-keygen -t rsa You will be asked for a passphrase which you will be asked to enter the first time you run any ssh command after each login. This means that for multiple ssh or scp commands, you will only have to enter it once. Leave the passphrase empty for completely passwordless access. Once you have generated your public key, copy it over to each computer in your network : while read ip; do ssh-copy-id -i ~/.ssh/id_rsa.pub user1@$ip done < IPlistfile.txt The IPlistfile.txt should be a file containing a server's name or IP on each line. For example: host1host2host3 Since this is the first time you do this, you will have to manually enter the password for each IP but once you've done that, you will be able to copy files to any of these machines with a simple: scp file user@host1:/path/to/file Remove the expect from your script. Now that you have passwordless access, you can use your script as: #!/bin/bashecho "I will fail if you give junk values!!"echo " "echo "Enter file name: "read filePathecho " "echo "Where you want to copy?"echo "Enter"echo "1. if Host1"echo "2. if Host2"echo "3. 
if Host3"read choiceecho " "if [ $choice -eq "1" ]then scp filePath uname@host1:/usr/tmp elif [ $choice -eq "2" ]then scp filePath uname@host2:/usr/tmp elif [ $choice -eq "3" ]then scp filePath uname@host3:/usr/tmp else echo "Wrong input"fi | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/187340",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64116/"
]
} |
187,369 | What is this file anyway? Documentation makes no mention of it. And it's not supposed to be run automatically (version 4.3 , 2 February 2014): Invoked as an interactive login shell, or with --login When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior. When a login shell exits, Bash reads and executes commands from the file ~/.bash_logout, if it exists. Invoked as an interactive non-login shell When an interactive shell that is not a login shell is started, Bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force Bash to read and execute commands from file instead of ~/.bashrc. So, typically, your ~/.bash_profile contains the line if [ -f ~/.bashrc ]; then . ~/.bashrc; fi after (or before) any login-specific initializations. Invoked non-interactively When Bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the filename. As noted above, if a non-interactive shell is invoked with the --login option, Bash attempts to read and execute commands from the login shell startup files. | From Debian's bash README : What is /etc/bash.bashrc ? It doesn't seem to be documented. The Debian version of bash is compiled with a special option( -DSYS_BASHRC ) that makes bash read /etc/bash.bashrc before ~/.bashrc for interactive non-login shells. So, on Debian systems, /etc/bash.bashrc is to ~/.bashrc as /etc/profile is to ~/.bash_profile . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29867/"
]
} |
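On Debian, login shells usually pick up /etc/bash.bashrc as well, because the stock /etc/profile sources it; the relevant stanza looks roughly like this (paraphrased from a typical Debian /etc/profile, so the exact wording may differ on your system):

if [ "${PS1-}" ]; then
    if [ "${BASH-}" ] && [ "$BASH" != "/bin/sh" ]; then
        # bash.bashrc sets the default PS1 and other interactive defaults
        [ -f /etc/bash.bashrc ] && . /etc/bash.bashrc
    fi
fi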
187,378 | I want to print the values from properties file dynamically I am using below code but getting wrong output. testProperty.properties edmcws,edmbws,edmwt Code file=/ThreadDump/testProperty.propertiescount=$(head -1 $file | sed 's/[^,]//g' | wc -c)echo "$count"for((i=1;i<=$count;i++))doabc=$(awk -F "," '(NR==1){print $($i)}' $file)echo "$abc"done Output 3edmcws,edmbws,edmwtedmcws,edmbws,edmwtedmcws,edmbws,edmwt But when I hard-code the value I am getting correct output. Code file=/ThreadDump/testProperty.propertiescount=$(head -1 $file | sed 's/[^,]//g' | wc -c)echo "$count"for((i=1;i<=$count;i++))doabc=$(awk -F "," '(NR==1){print $1}' $file)echo "$abc"done Output 3edmcwsedmcwsedmcws | From Debian's bash README : What is /etc/bash.bashrc ? It doesn't seem to be documented. The Debian version of bash is compiled with a special option( -DSYS_BASHRC ) that makes bash read /etc/bash.bashrc before ~/.bashrc for interactive non-login shells. So, on Debian systems, /etc/bash.bashrc is to ~/.bashrc as /etc/profile is to ~/.bash_profile . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104846/"
]
} |
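For the awk question above, the underlying issue is that $i inside single quotes is never expanded by the shell, so awk sees the literal text $($i). A sketch of the usual fix, passing the loop index into awk with -v (same file path as in the question):

file=/ThreadDump/testProperty.properties
count=$(head -1 "$file" | awk -F, '{print NF}')
for ((i=1; i<=count; i++)); do
    # -v hands the shell variable to awk as the awk variable "col"
    abc=$(awk -F, -v col="$i" 'NR==1 {print $col}' "$file")
    echo "$abc"
done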
187,404 | I need to password protect my PDF file(s), because I am going to send them through email and I want anyone who would view my PDF file(s) to be prompted for a password. How can I add a password to a PDF in Linux Mint 17.1? | You can use the program pdftk to set both the owner and/or user password pdftk input.pdf output output.pdf owner_pw xyz user_pw abc where owner_pw and user_pw are the commands to add the passwords xyz and abc respectively (you can also specify one or the other but the user_pw is necessary in order to prohibit opening). You might also want to ensure that encryption strength is 128 bits by adding (though currently 128 bits is default ): .... encrypt_128bit If you cannot run pdftk as it is no longer in every distro, you can try qpdf . Using qpdf --help gives information on the syntax. Using the same "values" as for pdftk : qpdf --encrypt abc xyz 256 -- input.pdf output.pdf | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/187404",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74809/"
]
} |
187,407 | I want to have Ctrl + Shift as the prefix in tmux (because I'm trying to switch from terminator, and the majority of my shortcuts used Ctrl + Shift ). I tried this in my .tmux.conf :

unbind-key C-b
set-option -g prefix C-S
bind-key C-S send-prefix

It unbinds Ctrl + B , but it doesn't rebind it to Ctrl + Shift (actually, the second line alone has the same behavior). Is there a way to do that, or, since these are two "special" keys, can't they be bound alone? Thanks! | Ctrl and Shift are modifiers. These keys aren't transmitted to applications running in a terminal. Rather, when you press something like Ctrl + Shift + A , this sends a character or a character sequence at the time you press the A key. See How do keyboard input and text output work? for more details. There may be some terminal emulators that can be configured to send a key sequence when you press Ctrl + Shift , but even that isn't a given and might depend on which order you press the two keys in, and you'd lose the ability to make Ctrl + Shift + key shortcuts. If your terminal emulator permits it, you could configure it to send C-b a when you press Ctrl + Shift + A and so on. That would allow you to use single-keychord bindings for some commands. If you want to free the keychord Ctrl + B so that it's sent to the underlying application, pick a different prefix such as C-\ or C-] or C-^ . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104527/"
]
} |
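As a concrete example of the answer's last suggestion, here is a common .tmux.conf stanza that moves the prefix to Ctrl+A instead, one of the single-keychord alternatives:

unbind-key C-b
set-option -g prefix C-a
bind-key C-a send-prefix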
187,415 | I try to launch Firefox over SSH, using ssh -X user@hostname and then firefox -no-remote but it's very very slow. How can I fix this? Is it a connection problem? | The default ssh settings make for a pretty slow connection. Try the following instead: ssh -YC4c arcfour,blowfish-cbc user@hostname firefox -no-remote The options used are:

-Y  Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls.

-C  Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11 and TCP connections). The compression algorithm is the same used by gzip(1), and the “level” can be controlled by the CompressionLevel option for protocol version 1. Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The default value can be set on a host-by-host basis in the configuration files; see the Compression option.

-4  Forces ssh to use IPv4 addresses only.

-c cipher_spec  Selects the cipher specification for encrypting the session. For protocol version 2, cipher_spec is a comma-separated list of ciphers listed in order of preference. See the Ciphers keyword in ssh_config(5) for more information.

The main point here is to use a different encryption cypher, in this case arcfour which is faster than the default, and to compress the data being transferred. NOTE: I am very, very far from an expert on this. The command above is what I use after finding it on a blog post somewhere and I have noticed a huge improvement in speed. I am sure the various commenters below know what they're talking about and that these encryption cyphers might not be the best ones. It is very likely that the only bit of this answer that is truly relevant is using the -C switch to compress the data being transferred. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/187415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90717/"
]
} |
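A caveat worth noting: the arcfour and blowfish-cbc ciphers have since been removed from modern OpenSSH releases, so on current systems the -c option shown above will be rejected; the compression and trusted-forwarding flags are the part that still applies:

ssh -YC user@hostname firefox -no-remote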
187,455 | I am using Scientific Linux (SL). I am trying to compile a project that uses a bunch of C++ (.cpp) files. In the directory user/project/Build , I enter make to compile and link all the .cpp files. I then have to go to user/run/ and type ./run.sh values.txt To debug with GDB, I have to go to user/run and type gdb ../project/Build/bin/Project and to run, I enter run -Project INPUT/inputfile.txt . However, when I try to print the value of a variable using p variablename , I get the message s1 = <value optimized out> . I have done some research online, and it seems I need to compile without optimizations using -O0 to resolve this. But where do I enter that? In the CMakeLists? If so, which CMakeLists? The one in project/Build or project/src/project ? | Chip's answer was helpful; however, since the SET line overwrote CMAKE_CXX_FLAGS_DEBUG , it removed the -g default, which caused my executable to be built without debug info. I needed to make a small additional modification to CMakeLists.txt in the project source directory to get an executable built with debugging info and -O0 optimizations (on cmake version 2.8.12.2). I added the following to CMakeLists.txt to add -O0 and leave -g enabled:

# Add -O0 to remove optimizations when using gcc
IF(CMAKE_COMPILER_IS_GNUCC)
    set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -O0")
    set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -O0")
ENDIF(CMAKE_COMPILER_IS_GNUCC)

This appends -O0 to the flags CMake already uses for debug builds, and is only applied for GCC builds, in case you happen to be using a cross-platform project. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187455",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95343/"
]
} |
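Note that CMAKE_CXX_FLAGS_DEBUG only takes effect for the Debug configuration, so the project has to be configured accordingly, for example:

cmake -DCMAKE_BUILD_TYPE=Debug /path/to/project/src
make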
187,462 | I have a Yubikey NEO, and I'm trying to get it to work on Debian. When I plug it in, I get udev events, but no /dev/hidraw? device. Here's what I know so far: cat /boot/config-$(uname -r) | grep CONFIG_HIDRAW) gives: CONFIG_HIDRAW=y lsusb -v -d 1050:0211 gives: Bus 002 Device 013: ID 1050:0211 Yubico.com Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x1050 Yubico.com idProduct 0x0211 bcdDevice 0.20 iManufacturer 1 Yubico iProduct 2 Yubico WinUSB Gnubby (gnubby1) iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 30mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 0Device Status: 0x0000 (Bus Powered) If I run udevadm monitor as I plug and then unplug the Yubikey, I get: monitor will print the received events for:UDEV - the event which udev sends out after rule processingKERNEL - the kernel ueventKERNEL[7941.975349] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2 (usb)KERNEL[7941.975583] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0 (usb)UDEV [7941.985350] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2 (usb)UDEV [7942.998352] add /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0 (usb)KERNEL[7945.487692] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0 (usb)KERNEL[7945.487791] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2 (usb)UDEV [7945.488139] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0 (usb)UDEV [7945.488620] remove /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2 (usb) I have added udev rules, as recommended here and cat /etc/udev/rules.d/70-u2f.rules gives: ACTION!="add|change", GOTO="u2f_end"#KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1050", TAG+="uaccess"ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"LABEL="u2f_end" (This didn't work when the commented line was uncommented either.) I've tried installing libhidapi-hidraw0 , but that hasn't seemed to work either. I've also tried installing and running the Yubikey NEO manager, but both it and the yubikey personalization tool don't think any Yubikey device is plugged in, which why I think the blocker is the lack of a /dev/hidraw0 device (or something at a similar level). I've reached the limits of both my knowledge of Linux and my ability to Google for solutions, so both further debugging suggestions and (if you know it) solutions would be helpful. | Chip's answer was helpful, however since the SET line overwrote CMAKE_CXX_FLAGS_DEBUG this removed the -g default which caused my executable to be built without debug info. I needed to make a small additional modification to CMakeLists.txt in the project source directory to get an executable built with debugging info and -O0 optimizations (on cmake version 2.8.12.2). 
I added the following to CMakeLists.txt to add -O0 and leave -g enabled: # Add -O0 to remove optimizations when using gccIF(CMAKE_COMPILER_IS_GNUCC) set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -O0") set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} -O0")ENDIF(CMAKE_COMPILER_IS_GNUCC) This adds the -O0 optimization to flags already used for debug by CMake and only is included for GCC builds if you happen to be using a cross platform project. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26000/"
]
} |
187,537 | Is there any way to use grep to search for entries matching multiple patterns in any order using a single condition? As shown in How to run grep with multiple AND patterns?, for multiple patterns I can use grep -e 'foo.*bar' -e 'bar.*foo' but I have to write 2 conditions here, 6 conditions for 3 patterns and so on... I want to write a single condition if possible. For finding patterns in any order you can suggest using: grep -e 'foo' | grep -e 'bar' # at least I do not retype patterns here and this will work, but I would like to see colored output, and in this case only bar will be highlighted. I would like to write the condition as easily as awk '/foo/ && /bar/' if that is possible for grep (awk does not highlight results and I doubt it can be done easily). agrep can probably do what I want, but I wonder if my default grep (2.10-1) on Ubuntu 12.04 can do this. | If your version of grep supports PCRE (GNU grep does this with the -P or --perl-regexp option), you can use lookaheads to match multiple words in any order: grep -P '(?=.*?word1)(?=.*?word2)(?=.*?word3)^.*$' This won't highlight the words, though. Lookaheads are zero-length assertions, they're not part of the matching sequence. I think your piping solution should work for that. By default, grep only colors the output when it's going to a terminal, so only the last command in the pipeline does highlighting, but you can override this with --color=always . grep --color=always foo | grep --color=always bar | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85258/"
]
} |
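If this is needed often, the pipeline from the answer above can be wrapped in a small shell function; a sketch (the function name is made up, and a pattern that overlaps the colour codes injected by an earlier match may be missed):

andgrep() {
    # AND-match any number of patterns, keeping grep's highlighting
    local out pattern
    out=$(cat)
    for pattern in "$@"; do
        out=$(printf '%s\n' "$out" | grep --color=always -e "$pattern") || return 1
    done
    printf '%s\n' "$out"
}
# usage: history | andgrep foo bar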
187,583 | I've seen history | grep blah and history |grep blah ; and history|grep blah also works, though no one ever seems to use it. Is there any significance in the spaces (e.g. piping to/from different commands requires different use of spaces), or is it always arbitrary? | bash defines several metacharacters . From man bash : metacharacter A character that, when unquoted, separates words. One of the following: | & ; ( ) < > space tab Because metacharacters separate words, it does not matter whether they are surrounded by spaces. The pipe symbol, | , is a metacharacter and hence, as you noticed, it does not need spaces around it. Note that [ , ] , { , } , and = are not metacharacters. Their meaning, by contrast, depends strongly on whether they are surrounded by blanks.

Examples of when spaces are and are not needed

As you noticed, it does not matter whether | is surrounded by spaces. Let us consider some examples that commonly confuse bash users. Consider:

$ (date)
Sun Mar 1 12:47:07 PST 2015

The parens above force the date command to be run in a subshell. Because ( and ) are metacharacters, no spaces are needed. By contrast:

$ {date}
bash: {date}: command not found

Since { and } are not metacharacters, the shell treats {date} as one word. Instead of looking for the date command, it looks for a command named {date} . Because it doesn't find one, an error results. Another common problem is the test command. The following works successfully:

$ [ abc ] && echo Yes
Yes

Remove the spaces and an error occurs:

$ [abc] && echo Yes
bash: [abc]: command not found

Because [ and ] are not metacharacters, the shell treats [abc] as a single word and the result, just like in the date example, is an error. Assignment statements are also sensitive to spaces. The following assignment is successful:

$ v=date
$ echo $v
date

Add a space and the assignment fails:

$ v= date
Sun Mar 1 12:55:05 PST 2015

In the above, the shell temporarily sets v to empty and then executes the date command. Adding a space before = also causes a failure, but for a different reason:

$ v =date
bash: v: command not found

Here, the shell attempts to execute the command v with the argument =date . The error is because it found no command named v . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/187583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
187,651 | I'm reading a shell tutorial today from http://www.tutorialspoint.com/unix/unix-quoting-mechanisms.htm in which it mentions: If a single quote appears within a string to be output, you should not put the whole string within single quotes; instead you would precede it with a backslash (\) as follows: echo 'It\'s Shell Programming' I tried this on my CentOS server; it doesn't work, and a > prompt appears hinting me to type more. I was wondering, since two single quotes transform every special character into a normal character, including the escape symbol \ but excluding the single quote ' itself, how should I represent a single quote ' in a single-quoted phrase? | The tutorial is wrong. POSIX says: A single-quote cannot occur within single-quotes. Here are some alternatives:

echo $'It\'s Shell Programming'    # ksh, bash, and zsh only, does not expand variables
echo "It's Shell Programming"      # all shells, expands variables
echo 'It'\''s Shell Programming'   # all shells, single quote is outside the quotes
echo 'It'"'"'s Shell Programming'  # all shells, single quote is inside double quotes

Further reading: Quotes - Greg's Wiki | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/187651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
187,663 | What is difference between pbrun and sudo command? I have seen people executing pbrun sudo su - what this means? I know su -, it will try to switch to root user. What speciality pbrun gives to sudo when it is used along with sudo command? | PowerBroker is a full-featured solution with a rich suite of securityoptions. Because of these features, PowerBroker can be much morecomplicated to set up if all the security options are used. In a strictcomparison with sudo however; using only the features available to sudo , the installation and maintenance is no more complicated. Sudo can be an effective solution for organizations where the primaryneed is to restrict access for cooperative users and to avoiderrors. If there is a need for a system that will provide an audit trailand solidly enforce security policy such as is required by HIPAA , SOX ,etc. to prove regulatory compliance, PowerBroker has the built-intools to handle that task. PowerBroker is issued on a per node license which can become costlyfor a large scale corporation. The cost is one reason some companiesmay not uniformly deploy PowerBroker and may instead use itas a point solution for mission critical or sensitive data systems. Symark Power Broker is a commercial package. Sudo is an open source tool. Courtesy | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102396/"
]
} |
187,666 | I read Advanced Programming in the UNIX Environment by Stevens, 8 th chapter.I read and understand all the six of exec functions. One thing I notice is, in all the exec functions: first argument is the file name / path name (depends on the exec function). second argument is argv[0] that we get in main() , which is the file name itself. So here we do have to pass the file name twice in the function. Is there any reason for it (like we cannot get the file name from the path name from the first argument)? | So here we do have to pass the file name twice in the function. They are not quite the same thing as you notice by observing that one of them is used as the argv[0] value. This doesn't have to be the same as the basename of the executable; many/most things ignore it and you can put whatever you want in there. The first one is the actual path to the executable, for which there is an obvious necessity. The second one is passed to the process ostensibly as the name used to invoke it, but, e.g.: execl("/bin/ls", "banana", "-l", NULL); Will work fine, presuming /bin/ls is the correct path. Some applications do, however, make use of argv[0] . Usually these have one or more symlinks in $PATH ; this is common with compression utilities (sometimes they use shell wrappers instead). If you have xz installed, stat $(which xzcat) shows it's a link to xz , and man xzcat is the same as man xz which explains "xzcat is equivalent to xz --decompress --stdout". The way xz can tell how it was invoked is by checking argv[0] , making these equivalent: execl("/bin/xz", "xzcat", "somefile.xz", NULL);execl("/bin/xz", "xz", "--decompress", "--stdout", "somefile.xz", NULL); | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86216/"
]
} |
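You can observe this argv[0] dispatch from the shell, too: bash's exec builtin has an -a option that sets argv[0] explicitly, so the following is (per the answer above) equivalent to xz --decompress --stdout; the file name is a placeholder:

# run xz, but present argv[0] as "xzcat"
bash -c 'exec -a xzcat xz somefile.xz'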
187,679 | I was wondering if there is a (simple) possibility to redo/reverse a command that was executed in the bash shell? (to make it undone) Is there something similar to the ctrl + z combination to redo any action (for example in word or LibreOffice)? | You should understand that bash is just an execution environment. It executes commands that you call - it's not the business of the shell to even know what the command does, you can call any executable you want. In most cases, it's not even clear what an undo would do - for instance, can you "unplay" a movie? Can you "unsend" an e-mail? What would "undo running firefox" even mean, for instance? You may close it, but bookmarks, downloads and history won't be the same. If you run a command, it is executed, whatever it does. It's up to you to know what you are doing. Note that this doesn't mean individual commands don't have "undo"... they can - you can even write a wrapper function that does something to protect you from foolish mistakes. For instance, mv is easily reversible by just moving the file back where it came from, unless you have overwritten something. That's why -i switch exists, to ask you before overwriting. Technically, inverse of cp is rm , unless something was overwritten (again, -i asks you about it). rm is more permanent, to try to get the files back, you have to actually do some lower-level hacking (there are tools for that). If you considered the filesystem as a black-box, it technically wouldn't be possible at all (only the details of logical and physical layout of data allows you to do some damage control). rm means rm , if you want "trash" functionality, that's actually just mv into some pre-arranged directory (and possibly a scheduled service to maintain or empty it) - nothing special about it. But you can use -i to prompt you before deletion. You may use a function or an alias to always include -i in these commands. Note that most applications are protecting you from data loss in different ways. Most (~all) text editors create backup files with ~ at the end in case you want to bring the old version back. On some distros, ls is aliased by default so that it hides them ( -B ), but they are there. A lot of protection is given by managing permissions properly: don't be root unless you need to be, make files read-only if you don't want them to change. Sometimes it's useful to have a "sandbox" environment - you run things on a copy, see if it's alright, and then merge the changes (or abandon the changes). chroot or lxc can prevent your scripts to escape from a directory and do damage. When you try to execute things in bulk - for instance, if you have a complex find command, while loop, a long pipeline, or anything like that, it's a good idea to first just echo the commands that will get executed. Then, if the commands look reasonable, remove echo and run it for real. And of course, if you really aren't sure about what you are doing, make a copy first. I sometimes just create a tarball of the current directory. Speaking of tarballs - tarbombs and zipbombs are quite common unfortunately (when people make an archive without a proper subdirectory, and unpacking scatters the files around, making a huge mess). I got used to just making a subdirectory myself before unpacking (I could list the contents, but I'm lazy). I'm thinking about making a script that will create a subdirectory only if the contents were archived without a subdirectory. 
But when it does happen, ls -lrt helps to find the most recent files to put where they belong. I just gave this as an example - a program can have many side effects which the shell has no way of knowing about (How could it? It's a different program being called!) The only sure way of avoiding mistakes is to be careful (think twice, run once). Possibly the most dangerous commands are the ones that deal with the filesystem: mkfs, fdisk/gdisk and so on. They can utterly destroy the filesystem (although with proper forensic software, at least partial reverse-engineering is possible). Always double-check the device you are formatting and partitioning is correct, before running the command. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/187679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100276/"
]
} |
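To make the "echo the commands first" advice above concrete, a typical dry run looks like this; once the printed commands look right, the echo is removed and the loop re-run:

for f in *.log; do echo mv -i "$f" archive/; done   # prints what would happen
for f in *.log; do mv -i "$f" archive/; done        # does it, still asking before overwriting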
187,695 | I'm asking this question while using xfce4-terminal, but I'm interested in a general solution: is there a way to stop a terminal emulator announcing mouse support in consoles? I need mouse-select and copy-paste much more frequent that I need mouse support in vim or wherever. | You can hold the Shift key to use the normal mouse selection while xterm mouse-tracking is enabled. That works in all terminal emulators that I know ( xterm , vte (like xfce-terminal ) or rxvt -based ones). In vim specifically, mouse is normally not enabled by default in terminals. So there's probably a set mouse=a somewhere in you ~/.vimrc or your OS-supplied system vimrc. You can always add: set mouse= to your ~/.vimrc to disable it. Or: if !has("gui_running") set mouse=endif to avoid disabling it for the GUI versions of vim . Mouse support is (sort of) advertised in the terminfo database with the kmous capability. Now, not all applications rely on that to decide whether to enable mouse tracking or not. You could redefine the entry for your terminal (in a local terminfo database) to remove that capability: infocmp -1x | grep -v kmous= | TERMINFO=~/.terminfo tic -x -export TERMINFO=~/.terminfo For applications using ncurses , it's enough to set the XM user-defined capability (not documented in terminfo(5) but mentioned in curs_caps(5) and curs_mouse(3) ) to the empty string. That doesn't prevent the application from handling mouse events if they're sent by the terminal, but that prevents the application from sending the sequence that enters the mouse tracking mode. So you can combine both with: infocmp -1x | sed '/kmous=/d;/XM=/d;$s/$/XM=,/' | TERMINFO=~/.terminfo tic -x -export TERMINFO=~/.terminfo | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/187695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49132/"
]
} |
187,725 | I have files that were generated by a program that did not put newlines at the end of records. I want to put newlines between the records, and I can do so with a simple sed script: sed -e 's/}{/}\n{/g' The problem is that the input files are multiple gigabytes in size, and therefore the input lines to sed are multiple GBs in length. sed tries to hold a line in memory, which doesn't work in this case. I tried the --unbuffered option, but that just seemed to make it slower and did not allow it to finish correctly. | You can use another tool that lets you set the input record separator. For example Perl perl -pe 'BEGIN{ $/="}{" } s/}{/}\n{/g' file The special variable $/ is the input record separator. Setting it to }{ defines lines as ending in }{ . That way you can achieve what you want without reading the entire thing into memory. mawk or gawk awk -v RS="}{" -vORS= 'NR > 1 {print "}\n{"}; {print}' file This is the same idea. RS="}{" sets the record separator to }{ and then you print } , a newline, { (except for the first record) and the current record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27025/"
]
} |
187,742 | I am using a script to regularly download my gmail messages that compresses the raw .eml into .gz files. The script creates a folder for each day, and then compresses every message into its own file. I would like a way to search through this archive for a "string." Grep alone doesn't appear to do it. I also tried SearchMonkey. | If you want to grep recursively in all .eml.gz files in the current directory, you can use: find . -name \*.eml.gz -print0 | xargs -0 zgrep "STRING" You have to escape the first * so that the shell does not interpret it. -print0 tells find to print a null character after each file it finds; xargs -0 reads from standard input and runs the command after it for each file; zgrep works like grep , but uncompresses the file first. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/187742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62794/"
]
} |
187,756 | I was trying out the tar command to make an archive with the .tar extension. Please have a look at the following series of commands I tried:

$ ls
abc.jpg hello.jpg xjf.jpg
$ tar -cfv test.tar *.jpg
tar: test.tar: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
$ ls
abc.jpg hello.jpg v xjf.jpg
$ rm v
$ ls
abc.jpg hello.jpg xjf.jpg
$ tar -cvf test.tar *.jpg
abc.jpg
hello.jpg
xjf.jpg
$ ls
abc.jpg hello.jpg test.tar xjf.jpg

Why does it give a different response with a different sequence of options, i.e. -cfv vs -cvf ? From what I have learnt in bash scripting, option sequence does not matter. | As @jcbermu said, for most programs and in most cases, the order of command line flags is not important. However, some flags expect a value. Specifically, tar's -f flag is: -f, --file ARCHIVE use archive file or device ARCHIVE So, tar expects -f to have a value and that value will be the name of the tarfile it creates. For example, to add all .jpg files to an archive called foo.tar , you would run tar -cf foo.tar *.jpg What you were running was tar -cfv test.tar *.jpg tar understands that as "create ( -c ) an archive called v ( -fv ), containing files test.tar and any ending in .jpg ". When you run tar -cvf test.tar *.jpg on the other hand, it takes test.tar as the name of the archive and *.jpg as the list of files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/187756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52733/"
]
} |
187,889 | How do I replace only the last occurrence of "-" in a string with a space using sed ? For example:

echo $MASTER_DISK_RELEASE
swp-RedHat-Linux-OS-5.5.0.0-03

but I want to get the following output (replacing the last hyphen "-" with a space):

swp-RedHat-Linux-OS-5.5.0.0 03 | You can do it with a single sed : sed 's/\(.*\)-/\1 /' or, using extended regular expressions: sed -r 's/(.*)-/\1 /' The point is that sed is very greedy, so it matches as many characters before - as possible, including other - characters.

$ echo 'swp-RedHat-Linux-OS-5.5.0.0-03' | sed 's/\(.*\)-/\1 /'
swp-RedHat-Linux-OS-5.5.0.0 03 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/187889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
187,931 | I often come across the situation when developing, where I am running a binary file, say a.out in the background as it does some lengthy job. While it's doing that, I make changes to the C code which produced a.out and compile a.out again. So far, I haven't had any problems with this. The process which is running a.out continues as normal, never crashes, and always runs the old code from which it originally started. However, say a.out was a huge file, maybe comparable to the size of the RAM. What would happen in this case? And say it linked to a shared object file, libblas.so , what if I modified libblas.so during runtime? What would happen? My main question is - does the OS guarantee that when I run a.out , then the original code will always run normally, as per the original binary, regardless of the size of the binary or .so files it links to, even when those .o and .so files are modfied during runtime? I know there are these questions that address similar issues: https://stackoverflow.com/questions/8506865/when-a-binary-file-runs-does-it-copy-its-entire-binary-data-into-memory-at-once What happens if you edit a script during execution? How is it possible to do a live update while a program is running? Which have helped me understand a bit more about this but I don't think that they are asking exactly what I want, which is a general rule for the consequences of modifying a binary during execution | While the Stack Overflow question seemed to be enough at first, I understand, from your comments, why you may still have a doubt about this. To me, this is exactly the kind of critical situation involved when the two UNIX subsystems (processes and files) communicate. As you may know, UNIX systems are usually divided into two subsystems: the file subsystem, and the process subsystem. Now, unless it is instructed otherwise through a system call, the kernel should not have these two subsystems interact with one another. There is however one exception: the loading of an executable file into a process' text regions . Of course, one may argue that this operation is also triggered by a system call ( execve ), but this is usually known to be the one case where the process subsystem makes an implicit request to the file subsystem. Because the process subsystem naturally has no way of handling files (otherwise there would be no point in dividing the whole thing in two), it has to use whatever the file subsystem provides to access files. This also means that the process subsystem is submitted to whatever measure the file subsystem takes regarding file edition/deletion. On this point, I would recommend reading Gilles' answer to this U&L question . The rest of my answer is based on this more general one from Gilles. The first thing that should be noted is that internally, files are only accessible through inodes . If the kernel is given a path, its first step will be to translate it into a inode to be used for all other operations. When a process loads an executable into memory, it does it through its inode, which has been provided by the file subsystem after translation of a path. Inodes may be associated to several paths (links), and programs may only delete links. In order to delete a file and its inode, userland must remove all existing links to that inode, and ensure that it is completely unused. When these conditions are met, the kernel will automatically delete the file from disk. 
If you have a look at the replacing executables part of Gilles' answer, you'll see that depending on how you edit/delete the file, the kernel will react/adapt differently, always through a mechanism implemented within the file subsystem. If you try strategy one ( open/truncate to zero/write or open/write/truncate to new size ), you'll see that the kernel won't bother handling your request. You'll get an error 26: Text file busy ( ETXTBSY ). No consequences whatsoever. If you try strategy two, the first step is to delete your executable. However, since it is being used by a process, the file subsystem will kick in and prevent the file (and its inode) from being truly deleted from disk. From this point, the only way to access the old file's content is to do it through its inode, which is what the process subsystem does whenever it needs to load new data into text sections (internally, there is no point in using paths, except when translating them into inodes). Even though you've unlinked the file (removed all its paths), the process can still use it as if you'd done nothing. Creating a new file with the old path doesn't change anything: the new file will be given a completely new inode, which the running process has no knowledge of. Strategies 2 and 3 are safe for executables as well: although running executables (and dynamically loaded libraries) aren't open files in the sense of having a file descriptor, they behave in a very similar way. As long as some program is running the code, the file remains on disk even without a directory entry. Strategy three is quite similar since the mv operation is an atomic one. This will probably require the use of the rename system call, and since processes can't be interrupted while in kernel mode, nothing can interfere with this operation until it completes (successfully or not). Again, there is no alteration of the old file's inode: a new one is created, and already-running processes will have no knowledge of it, even if it's been associated with one of the old inode's links. With strategy 3, the step of moving the new file to the existing name removes the directory entry leading to the old content and creates a directory entry leading to the new content. This is done in one atomic operation, so this strategy has a major advantage: if a process opens the file at any time, it will either see the old content or the new content — there's no risk of getting mixed content or of the file not existing. Recompiling a file: when using gcc (and the behaviour is probably similar for many other compilers), you are using strategy 2. You can see that by running a strace of your compiler's processes:

stat("a.out", {st_mode=S_IFREG|0750, st_size=8511, ...}) = 0
unlink("a.out") = 0
open("a.out", O_RDWR|O_CREAT|O_TRUNC, 0666) = 3
chmod("a.out", 0750) = 0

The compiler detects that the file already exists through the stat and lstat system calls. The file is unlinked . Here, while it is no longer accessible through the name a.out , its inode and contents remain on disk, for as long as they are being used by already-running processes. A new file is created and made executable under the name a.out . This is a brand new inode, and brand new contents, which already-running processes don't care about. Now, when it comes to shared libraries, the same behaviour will apply. As long as a library object is used by a process, it will not be deleted from disk, no matter how you change its links.
Whenever something has to be loaded into memory, the kernel will do it through the file's inode, and will therefore ignore the changes you made to its links (such as associating them with new files). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/187931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97175/"
]
} |
187,960 | I essentially want to run a script on machine A which will SSH into machine B, execute commands on B, and return the output to machine A. So I am creating a csh script to do this, hopefully, if it's possible. For instance, this solution from this answer works from the command line:

ssh user@host << 'ENDSSH'
ls -l
ENDSSH

I can verify it is running the ls on the remote and not locally. However I want to be able to do this within the backticks, inside my csh script running locally. Would that be possible? I know it is possible to SSH in and run a script that is on the remote, but that is not my case. I have also heard it is possible to send a second script stored locally over the SSH (which I also have not figured out), but I like the idea of having all of the commands inside the original script, hence the HEREDOC. I know it is a pretty particular question, but if there is a way, it would be very cool to know how. Things that don't work:

1)

`ssh user@host<<'ENDSSH'`
`echo "ls -l"`
`echo "ENDSSH"`

This gives the error:

stty: standard input: Invalid argument.
stty: standard input: Invalid argument.
Warning: Command not found

And then it runs ls -l locally, and then it tried to run ENDSSH locally and of course fails with command not found.

2)

`ssh user@host<<'ENDSSH'`
echo "ls -l"
echo "ENDSSH"

Same problem as above, except it only displays ls -l and ENDSSH as text. Makes sense... since the HEREDOC portion failed.

3) (3-5 aren't exactly what I asked about, but still trying just to get even the basic case working.)

`ssh user@host ls-l`

This returns total: Command not found. Yeah, I have no clue where that's coming from.

4)

`cat ./test.csh | ssh user@host`

Also returns the same as attempt #1.

5)

`cat ./test.csh | ssh -t user@host`

Same as #1 and #4. I am beginning to exhaust other solutions from stackoverflow/Ubuntu/serverfault... etc. Most people ask about how to run their script on a remote from the command line, not how to run commands on a remote from a locally running script. Edit: Well I think I have part of the problem figured out. With case 3 from above for example, what happens is ls -l runs locally, and the first part of the output is then sent over SSH as a command. I noticed this because I tried ssh user@host $CMD , with $CMD set to "whoami" . The output was username: Command not found , and then I realized the total is from the ls output. Still not sure about how to solve this, though | If you wrap the entire command that you want executed inside backticks it's likely you'll get the correct solution. There are issues regarding the evaluation of variables and other interpolated elements, but I'll put those to one side for a moment. However, I'm not entirely sure why you want anything in backticks. Your Question says you want to run a local script remotely. Your first example does this.

ssh -q user@host << ENDSSH
hostname
ls -l
id
ENDSSH

Perhaps you want to get the output of the remote execution into a variable? It doesn't say that anywhere in the Question, but you'd do it like this ( bash ):

result=$(ssh user@host <<'ENDSSH'
hostname
id
ENDSSH
)
...
echo "$result"

Or, since you mentioned csh , like this:

set result = `ssh -q user@host` <<ENDSSH
hostname
id
ENDSSH
...
echo "$result"

Note that for the csh variant all output lines are concatenated together. I see that you have reported an error, stty: standard input: Invalid argument. This is probably occurring because you have an stty command in your login script on the remote server. This should be wrapped in a test for an interactive shell.
In bash you would do something like this:

if test -n "$PS1"
then
    # Interactive setup here
    stty ...
fi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/187960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67157/"
]
} |
187,966 | I am trying to calculate the time elapsed since the log file was last updated. I guess the following commands will be used:

lastUpdate=$(date -r myLogFile.log)
now=$(date)

How can I subtract them and get the result as a number of seconds elapsed? | lastUpdate="$(stat -c %Y myLogFile.log)"
now="$(date +%s)"
let diff="${now}-${lastUpdate}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/187966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82553/"
]
} |
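The same computation fits on one line with arithmetic expansion, avoiding let (GNU stat assumed, as in the answer above):

diff=$(( $(date +%s) - $(stat -c %Y myLogFile.log) ))
echo "$diff seconds since last update"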
188,033 | Is it possible to rename the current working directory from within a shell (Bash in my particular case)? If I attempt to do this the straightforward way, I end up with an error: nathan@nathan-desktop:/tmp/test$ mv . test2mv: cannot move ‘.’ to ‘test2’: Device or resource busy Is there another way to do this without changing the current directory? I realize that I can easily accomplish this by changing to the parent directory, but I'm curious if this is necessary. After all, if I rename the directory from another shell, I can still create files in the original shell afterwards. | Yes, but you have to refer to the directory by name, not by using the . notation. You can use a relative path, it just has to end with something other than . or .. : /tmp/test$ mv ../test ../test2/tmp/test$ pwd/tmp/test/tmp/test$ pwd -P/tmp/test2 You can use an absolute path: /tmp/test$ cd -P ./tmp/test2$ mv "$PWD" "${PWD%/*}/test3"/tmp/test2$ Similarly, rmdir . won't ever work, but rmdir "$PWD" does. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1049/"
]
} |
188,042 | I am currently trying to understand the difference between init.d and cron @reboot for running a script at startup/booting of the system. The use of @reboot (this method was mentioned in this forum by hs.chandra ) is some what simpler, by simply going into crontab -e and creating a @reboot /some_directory/to_your/script/your_script.txt and then your_script.txt shall be executed every time the system is rebooted. An in depth explanation of @reboot is here Alternatively by embedding /etc/init.d/your_script.txt into the second line of your script ie: #!/bin/bash# /etc/init.d/your_script.txt You can run chmod +x /etc/init.d/your_script.txt and that should also result for your_script.txt to run every time the system is booted. What are the key differences between the two? Which is more robust? Is there a better one out of the two? Is this the correct way of embedding a script to run during booting? I will be incorporating a bash .sh file to run during startup. | init.d , also known as SysV script, is meant to start and stop services during system initialization and shutdown. ( /etc/init.d/ scripts are also run on systemd enabled systems for compatibility). The script is executed during the boot and shutdown (by default). The script should be an init.d script, not just a script . It should support start and stop and more (see Debian policy ) The script can be executed during the system boot (you can define when). crontab (and therefore @reboot ). cron will execute any regular command or script, nothing special here. any user can add a @reboot script (not just root) on a Debian system with systemd: cron's @reboot is executed during multi-user.target . on a Debian system with SysV (not systemd), crontab(5) mention: Please note that startup, as far as @reboot is concerned, is the time when the cron(8) daemon startup. In particular, it may be before some system daemons, or other facilities, were startup. This is due to the boot order sequence of the machine. it's easy to schedule the same script at boot and periodically. /etc/rc.local is often considered to be ugly or deprecated (at least by redhat ), still it had some nice features: rc.local will execute any regular command or script, nothing special here. on a Debian system with SysV (not systemd): rc.local was (almost) the last service to start. but on a Debian system with systemd: rc.local is executed after network.target by default (not network-online.target !) Regarding systemd's network.target and network-online.target , read Running Services After the Network is up . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
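For completeness, on a systemd machine the native equivalent of both approaches above is a small unit file; a minimal sketch (the unit and script names are hypothetical):

# /etc/systemd/system/your-script.service
[Unit]
Description=Run script once at boot
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/your_script.sh

[Install]
WantedBy=multi-user.target

# enable with: systemctl enable your-script.service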
188,078 | I have some text files, and I'd like to be able to move an arbitrary line in any of the files up or down one line (lines at the beginning or end of the file would stay where they are). I have some working code but it seems kludgy and I'm not convinced I have all the edge cases covered, so I'm wondering if there's some tool or paradigm that does this better (e.g. easier to understand the code (for other readers or me in 6 months), easier to debug, and easier to maintain; "more efficient" isn't very important). move_up() { # fetch line with head -<line number> | tail -1 # insert that one line higher # delete the old line sed -i -e "$((line_number-1))i$(head -$line_number $file | tail -1)" -e "${line_number}d" "$file"}move_down() { file_length=$(wc -l < "$file") if [[ "$line_number" -ge $((file_length - 1)) ]]; then # sed can't insert past the end of the file, so append the line # then delete the old line echo $(head -$line_number "$file" | tail -1) >> "$file" sed -i "${line_number}d" "$file" else # get the line, and insert it after the next line, and delete the original sed -i -e "$((line_number+2))i$(head -$line_number $file | tail -1)" -e "${line_number}d" "$file" fi} I can do error checking of inputs inside or outside these functions, but bonus points if bad input (like non-integers, non-existent files, or line numbers greater than the length of the file) are handled sanely. I want it to run in a Bash script on modern Debian/Ubuntu systems. I don't always have root access but can expect "standard" tools to be installed (think a shared web server), and may be able to request installation of other tools if I can justify the request (though fewer external dependencies is always better). Example: $ cat b1234$ file=b line_number=3 move_up$ cat b1324$ file=b line_number=3 move_down$ cat b1342$ | Similar to Archemar 's suggestion, you could script this with ed : printf %s\\n ${linenr}m${addr} w q | ed -s infile i.e. linenr # is the line numberm # command that moves the lineaddr=$(( linenr + 1 )) # if you move the line downaddr=$(( linenr - 2 )) # if you move the line upw # write changes to fileq # quit editor e.g. to move line no. 21 one line up: printf %s\\n 21m19 w q | ed -s infile to move line no. 21 one line down: printf %s\\n 21m22 w q | ed -s infile But since you only need to move a certain line up or down by one line, you could also say that you practically want to swap two consecutive lines. Meet sed : sed -i -n 'addr{h;n;G};p' infile i.e. addr=${linenr} # if you move the line downaddr=$(( linenr - 1 )) # if you move the line uph # replace content of the hold buffer with a copy of the pattern spacen # read a new line replacing the current line in the pattern space G # append the content of the hold buffer to the pattern spacep # print the entire pattern space e.g. to move line no. 21 one line up: sed -i -n '20{h;n;G};p' infile to move line no. 21 one line down: sed -i -n '21{h;n;G};p' infile I used gnu sed syntax above. If portability is a concern: sed -n 'addr{hnG}p' infile Other than that, the usual checks: file exists and is writable; file_length > 2 ; line_no. > 1 ; line_no. < file_length ; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
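The sed variant from the answer wrapped as a function with the input checks mentioned at the end; it assumes GNU sed for -i and bash for the [[ ]] tests (the function name is arbitrary):
# move_line FILE LINENO up|down : swap the given line with its neighbour
move_line() {
    local file=$1 n=$2 dir=$3 len
    [[ -w $file ]]       || { echo "not a writable file: $file" >&2; return 1; }
    [[ $n =~ ^[0-9]+$ ]] || { echo "not a line number: $n" >&2; return 1; }
    len=$(wc -l < "$file")
    case $dir in
        up)   (( n >= 2 && n <= len ))     || return 0  # first line stays put
              sed -i -n "$(( n - 1 )){h;n;G};p" "$file" ;;
        down) (( n >= 1 && n <= len - 1 )) || return 0  # last line stays put
              sed -i -n "${n}{h;n;G};p" "$file" ;;
        *)    echo "direction must be up or down" >&2; return 1 ;;
    esac
}
# e.g. move_line b 3 up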
188,146 | I want to run a Windows executable (a console application) from a shell script. This script is part of a bigger project that automatically builds packages, so I need to avoid user interaction. Actually I use something like this: export WINEARCH=win64export WINEPREFIX=/somewhere/$WINEARCHwine pgen.exe It basically works but with with the following problems: If WINEPREFIX is empty (and this is the case in the first run), a dialog appears showing to wait for configuration. I'd like to avoid depending on an X server running. If not previously installed, wine shows a window asking to install Gecko and requiring user intervention. WINEPREFIX needs to be owned by the user calling wine, so I don't know how to provide a system-wide wrapper to run the above application. Apart the latter issue (that I can live without), the other ones are blocking. At the end I just want to run a console application. | If you run wine with an empty $DISPLAY , it will skip displaying any dialogs and run without user interaction: DISPLAY= wine pgen.exe To avoid the prefix ownership issue, I generally point WINEPREFIX at a temporary directory so the prefix is re-created every time (but that's slow). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38039/"
]
} |
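Combining the empty-DISPLAY trick with explicit prefix creation keeps even the very first run non-interactive. wineboot ships with wine and initializes a prefix without launching anything; the mshtml override is a commonly used way to suppress the Gecko install prompt (treat it as an assumption to verify against your wine version):
export WINEARCH=win64
export WINEPREFIX=/somewhere/$WINEARCH
DISPLAY= wineboot --init               # first run only: build the prefix quietly
export WINEDLLOVERRIDES="mshtml=d"     # optional: disable mshtml, no Gecko prompt
DISPLAY= wine pgen.exe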
188,164 | I want to shuffle a few keys around with XKB. (Why? On a laptop where some keys are inconveniently located.) I currently use xmodmap: keycode 110 = Priorkeycode 115 = Deletekeycode 112 = Nextkeycode 117 = Insertkeycode 119 = Endkeycode 118 = Home Instead I want to use XKB and assign different symbolic names for certain physical keys, rather than assign different keysyms to certain keycodes. ( This is why.) I want keycode 110 to send PGUP instead of HOME , keycode 115 to send DELE instead of END , etc. The rest of the configuration must not be affected (so PGUP is to keep sending the keysym Prior , etc., and all other keys remain as they are). How can I change the assignment of these specific keycodes? I'll load a file with xkbcomp somefile.xkb $DISPLAY , what do I need to put in somefile.xkb ? | Create a file containing your keycode changes, and save it as (for example) ~/.xkb/keycodes/local . (The keycodes directory is hard-coded; the base directory can be something else, and the filename too.) This will contain in your case xkb_keycodes { <PGUP> = 110; <PGDN> = 112; <DELE> = 115; <INS> = 117; <HOME> = 118; <END> = 119;}; To load this, run setxkbmap -print | sed -e '/xkb_keycodes/s/"[[:space:]]/+local&/' | xkbcomp -I${HOME}/.xkb - $DISPLAY This outputs your current settings, adds +local to the xkb_keycodes include statement, and loads it into the XKB compiler, adding ~/.xkb to the include path. (If you named your file something other than ~/.xkb/keycodes/local , you'll obviously need to change +local and -I${HOME}/.xkb} .) That way all the other settings are preserved. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
188,170 | Is it possible to generate major page faults in the Linux kernel at will? Can a program be written such that it is guaranteed to cause a major page fault on its execution. | Yes. A major fault is a page fault that has to be satisfied from disk rather than from memory that is already resident, so any program that touches file-backed pages which are not currently in the page cache is guaranteed to take major faults. One common recipe (a sketch, not the only way): create or pick a file, evict it from the page cache, either system-wide as root with echo 3 > /proc/sys/vm/drop_caches or per file from within the program with posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED) , then mmap the file and touch its pages; each first access to an evicted page is a major fault. Note that plain read(2) also hits the disk, but it is memory access to non-resident file-backed (or swapped-out) pages that gets accounted as a major fault. You can watch the counters with /usr/bin/time -v yourprog (the "Major (requiring I/O) page faults" line) or ps -o min_flt,maj_flt -p PID . Swapped-out anonymous memory triggers major faults the same way, but forcing pages into swap is far less deterministic than evicting file-backed pages, so the mmap route is the one to use if you need a guarantee. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105332/"
]
} |
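A minimal shell demonstration of the eviction idea from the answer above; it needs root for drop_caches and GNU time installed as /usr/bin/time. It exploits the fact that program text is itself a file-backed mapping, so merely executing a binary after evicting the cache forces major faults:
sync
echo 3 > /proc/sys/vm/drop_caches            # as root: drop clean page-cache pages
/usr/bin/time -v ls /tmp 2>&1 >/dev/null | grep -i 'major'
# run it again without dropping caches: the count falls to (near) zero,
# because the ls binary and libc are now resident in the page cache
/usr/bin/time -v ls /tmp 2>&1 >/dev/null | grep -i 'major'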
188,175 | Is there a way to only execute a command after another is done without a temp file?I have one longer running command and another command that formats the output and sends it to a HTTP server using curl.If i just execute commandA | commandB , commandB will start curl , connect to the server and start sending data. Because commandA takes so long, the HTTP server will timeout.I can do what I want with commandA > /tmp/file && commandB </tmp/file && rm -f /tmp/file Out of curiosity I want to know if there is a way to do it without the temp file.I tried mbuffer -m 20M -q -P 100 but the curl process is still started right at the beginning. Mbuffer waits just until commandA is done with actually sending the data.(The data itself is just a few hundred kb at max) | This is similar to a couple of the other answers. If you have the “moreutils” package, you should have the sponge command. Try commandA | sponge | { IFS= read -r x; { printf "%s\n" "$x"; cat; } | commandB; } The sponge command is basically a pass-through filter (like cat )except that it does not start writing the output until it has read the entire input. I.e., it “soaks up” the data, and then releases it when you squeeze it (like a sponge). So, to a certain extent, this is “cheating” –if there’s a non-trivial amount of data, sponge almost certainly uses a temporary file. But it’s invisible to you; you don’t have to worry about housekeeping thingslike choosing a unique filename and cleaning up afterwards. The { IFS= read -r x; { printf "%s\n" "$x"; cat; } | commandB; } reads the first line of output from sponge . Remember, this doesn’t appear until commandA has finished. Then it fires up commandB , writes the first line to the pipe,and invokes cat to read the rest of the output and write it to the pipe. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74958/"
]
} |
188,182 | I want to have a script that takes the current working directory to a variable. The section that needs the directory is like this dir = pwd . It just prints pwd how do I get the current working directory into a variable? | There's no need to do that, it's already in a variable: $ echo "$PWD"/home/terdon The PWD variable is defined by POSIX and will work on all POSIX-compliant shells: PWD Set by the shell and by the cd utility. In the shell the valueshall be initialized from the environment as follows. If a value forPWD is passed to the shell in the environment when it is executed, thevalue is an absolute pathname of the current working directory that isno longer than {PATH_MAX} bytes including the terminating null byte,and the value does not contain any components that are dot or dot-dot,then the shell shall set PWD to the value from the environment.Otherwise, if a value for PWD is passed to the shell in theenvironment when it is executed, the value is an absolute pathname ofthe current working directory, and the value does not contain anycomponents that are dot or dot-dot, then it is unspecified whether theshell sets PWD to the value from the environment or sets PWD to thepathname that would be output by pwd -P. Otherwise, the sh utilitysets PWD to the pathname that would be output by pwd -P. In caseswhere PWD is set to the value from the environment, the value cancontain components that refer to files of type symbolic link. In caseswhere PWD is set to the pathname that would be output by pwd -P, ifthere is insufficient permission on the current working directory, oron any parent of that directory, to determine what that pathname wouldbe, the value of PWD is unspecified. Assignments to this variable maybe ignored. If an application sets or unsets the value of PWD, thebehaviors of the cd and pwd utilities are unspecified. For the more general answer, the way to save the output of a command in a variable is to enclose the command in $() or ` ` (backticks): var=$(command) or var=`command` Of the two, the $() is preferred since it is easier to build complex commands like: command0 "$(command1 "$(command2 "$(command3)")")" Whose backtick equivalent would look like: command0 "`command1 \"\`command2 \\\"\\\`command3\\\`\\\"\`\"`" | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/188182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
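Applied to the fragment from the question, the corrected script line looks like this; note that shell assignments take no spaces around = :
#!/bin/bash
dir="$PWD"          # current working directory, no subshell needed
# equivalent, at the cost of a fork:
dir="$(pwd)"
echo "working in: $dir"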
188,197 | I'm using a virtual machine running Windows for development purposes inside a Ubuntu host (I also use the Ubuntu part for my regular activities, but not both at the same time). As I need to compile on Windows regularly, I want to increase the performance of the VM as much as I can. Therefore I want to use a "minimal" version of my desktop environment: if possible, I want only my VM running, in fullscreen. Is it possible to use such a minimal system? If yes, what is it, or how can I achieve this setup myself? An environment chooser on my login screen would be great, but optional. | Yes: you do not need a desktop environment at all to run a single X client. One common approach (the specifics vary by display manager and VM software, so treat this as a sketch): create a custom X session whose startup command launches nothing but your VM. Display managers such as LightDM and GDM build their session chooser from the .desktop files in /usr/share/xsessions , so adding an entry there gives you exactly the environment chooser on the login screen that you asked for. The session command can be the VM frontend itself, for example VirtualBox in fullscreen mode, optionally preceded by a bare window manager such as openbox so that fullscreen and keyboard grabs behave properly. When the VM process exits, the session ends and you are back at the login screen, where you can pick your full desktop again. This avoids the compositor, panels and background services of a full desktop, which is where most of the idle overhead comes from; the VM's raw CPU and disk performance is governed mostly by the hypervisor, so also check that VT-x/AMD-V is enabled and that the VM has enough cores and RAM. | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/188197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105348/"
]
} |
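A sketch of such a session, assuming VirtualBox with a VM named "WinDev" and openbox installed; names, paths and the exact frontend flags are placeholders to adapt:
# /usr/share/xsessions/windev.desktop
[Desktop Entry]
Name=WinDev VM only
Comment=Bare session running a single fullscreen VM
Exec=/usr/local/bin/windev-session
Type=Application

# /usr/local/bin/windev-session  (make it executable)
#!/bin/sh
openbox &                                        # minimal WM for fullscreen handling
exec VirtualBox --startvm "WinDev" --fullscreen  # blocks until the VM window closes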
188,205 | I have a scenario like VAR = `Some command brings the file path name and assigns it to VAR` For example VAR can have value like /root/user/samp.txt I want to grep command like grep HI $VAR This doesn't work; it gives an error saying cannot open /root/user/samp.txt and the same error when I tried cat $VAR. How to handle this? I have tried echo $VAR | grep HI and grep HI "$VAR" Using korn shell | Two things are likely wrong. First, the assignment syntax: in ksh, as in any Bourne-family shell, there must be no spaces around = , and command substitution is best written as VAR=$(some command) ; with VAR = `...` the shell tries to execute a command named VAR instead of assigning. Second, "cannot open" for a path that looks correct almost always means the captured string contains invisible characters, typically a trailing carriage return, newline fragment or space in the command's output. Inspect it with echo "$VAR" | od -c and look for \r or trailing blanks; if a carriage return is the culprit, strip it at capture time, e.g. VAR=$(some command | tr -d '\r') . Once the variable holds a clean path, grep HI "$VAR" (keep the quotes, in case the path ever contains spaces) works as expected. Also note that echo $VAR | grep HI searches the path string itself, not the file's contents, so it is not an alternative to grep HI "$VAR" . | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/188205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105356/"
]
} |
188,244 | From the man page for tcpdump 4.1.1 (yes I know its old) -i Listen on interface. If unspecified, tcpdump searches the system interface list for the lowest numbered, configured up interface (excluding loopback). Ties are broken by choosing > the earliest match. On Linux systems with 2.2 or later kernels, an interface argument of ``any'' can be used to capture packets from all interfaces. Note that captures on the ``any'' device will not be done in promiscuous mode. Can anyone shed light on what exactly is meant by the last statement. I'm working with an IDS server that has many interfaces and when I use tcpdump -i any, it clearly shows traffic not sourced/destined for the IDS server. However there is another service that already puts all the interfaces into promiscuous mode. Do they maybe just mean that if you use -i any that tcpdump won't put the interfaces into PROMISC mode? | Do they maybe just mean that if you use -i any that tcpdump won't put the interfaces into PROMISC mode? Yes, that's what I meant by that. The "any" device doesn't work by opening all devices independently and capturing on them, it works by opening a "packet socket" and, instead of binding it to a particular device (which is how you capture on that device on Linux), leaving it unbound so it listens to all sockets. The call to set promiscuous mode would fail on an unbound socket (I just tested it on a fairly recent kernel), so libpcap will not turn promiscuous mode on for the "any" device. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2691/"
]
} |
188,257 | I'm trying to come up with an solution to this problem, I need to incrementally count and then print the counts of the unique values in column 1 of a tab delimited text file. Here is an example: Apple_1 1 300Apple_2 1 500Apple_2 500 1500Apple_2 1500 2450Apple_3 1 1250Apple_3 1250 2000 And the desired output is: Apple_1 1 300 1Apple_2 1 500 1Apple_2 500 1500 2Apple_2 1500 2450 3Apple_3 1 1250 1Apple_3 1250 2000 2 I know that I can print the line number in awk with just print NR, but I don't know how to reset it for each unique value of column 1. Thanks for any help you can offer, I appreciate it. | The standard trick for this kind of problem in Awk is to use an associative counter array: awk '{ print $0 "\t" ++count[$1] }' This counts the number of times the first word in each line has been seen. It's not quite what you're asking for, since Apple_1 1 300Apple_2 1 500Apple_1 500 1500 would produce Apple_1 1 300 1Apple_2 1 500 1Apple_1 500 1500 2 (the count for Apple_1 isn't reset when we see Apple_2 ), but if the input is sorted you'll be OK. Otherwise you'd need to track a counter and last-seen key: awk '{ if (word == $1) { counter++ } else { counter = 1; word = $1 }; print $0 "\t" counter }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95635/"
]
} |
188,264 | Original file claudioantonioclaudiomichele I want to change only the first occurrence of "claudio" with "claudia"so that I would get the following: claudiaantonioclaudiomichele I have tried the following: sed -e '1,/claudio/s/claudio/claudia/' nomi The above command performs global substitution (it replaces all occurrences of 'claudio') . Why? | If you are using GNU sed , try: sed -e '0,/claudio/ s/claudio/claudia/' nomi sed does not start checking for the regex that ends a range until after the line that starts that range. From man sed (POSIX manpage, emphasis mine): An editing command with two addresses shall select the inclusive range from the first pattern space that matches the first address through the next pattern space that matches the second. The 0 address is not standard though, that's a GNU sed extension not supported by any other sed implementation. Using awk Ranges in awk work more as you were expecting: $ awk 'NR==1,/claudio/{sub(/claudio/, "claudia")} 1' nomiclaudiaantonioclaudiomichele Explanation: NR==1,/claudio/ This is a range that starts with line 1 and ends with the first occurrence of claudio . sub(/claudio/, "claudia") While we are in the range, this substitute command is executed. 1 This awk's cryptic shorthand for print the line. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
188,285 | In my terminal shell, I ssh'ed into a remote server, and I cd to the directory I want. Now in this directory, there is a file called table that I want to copy to my local machine /home/me/Desktop . How can I do this? I tried scp table /home/me/Desktop but it gave an error about no such file or directory. Does anyone know how to do this? | The syntax for scp is: If you are on the computer from which you want to send file to a remote computer: scp /file/to/send username@remote:/where/to/put Here the remote can be a FQDN or an IP address. On the other hand if you are on the computer wanting to receive file from a remote computer: scp username@remote:/file/to/send /where/to/put scp can also send files between two remote hosts: scp username@remote_1:/file/to/send username@remote_2:/where/to/put So the basic syntax is: scp username@source:/location/to/file username@destination:/where/to/put You can read man scp to get more ideas on this. | {
"score": 10,
"source": [
"https://unix.stackexchange.com/questions/188285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55077/"
]
} |
188,303 | I prefer (Perl/Python Compatible Regular Expressions) regular expressions. man grep : ...., but only work if pcre is available in the system Is this supported on the most common linux distributions? I don't care for freebsd, solarix, busybox, ... | PCRE is installed on pretty much all server and desktop Linux systems, but you can't necessarily expect it on lightweight systems or embedded systems (phones, routers, TVs, and other IoT ), as they often have very trimmed versions of the standard userland (basically, anything with a busybox base is all but guaranteed to lack PCRE). Debian has a Popularity Contest feature that measures installation metrics on various packages. grep (25th most common, 176k installs) depends ( not optionally) on libpcre3 (94th most common, 174k installs). I cannot explain the discrepancy, but I also wouldn't worry about it. It should be safe to assume that modern desktop and servers running full Linux distributions will have versions of grep compiled with PCRE support. Still, if you want PCRE with a better assurance of portability, don't rely on grep -P or pcregrep (9363th at 1k installs) or ack (21728th at 180 installs), use perl (88th at 175k installs) directly: perl -ne 'print if /regexp/' Note, there are servers that intentionally lack perl, python, and php for "security purposes," namely that many rogue scripts (e.g. rootkits ) depend on these and therefore cannot run. This is very rare (and kind of silly since there are plenty of potent rogue POSIX shell scripts). Note 2: Perl is slow (as is python). LibPCRE is much faster, but the simpler your regexps, the better performance you'll get. If possible, use grep alone (BRE, basic regexps) or else try grep -E (ERE, extended regexps) rather than diving deeper into PCRE land. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22068/"
]
} |
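If the code may land on systems where grep -P was compiled out, one way to keep PCRE semantics is a tiny wrapper that probes for -P and falls back to perl, as the answer suggests; the function name is made up:
pcre_grep() {
    local pat=$1; shift
    if echo x | grep -P x >/dev/null 2>&1; then
        grep -P "$pat" "$@"
    else
        perl -ne 'BEGIN { $p = shift } print if /$p/' "$pat" "$@"
    fi
}
# e.g. pcre_grep '\bfoo(?=bar)' file.txt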
188,327 | There's a similar question that deals with the 'wrapping' scenario, where you want to replace for example cd with a command that calls the builtin cd . However, in light of shellshock et al and knowing that bash imports functions from the environment, I've done a few tests and I can't find a way to safely call the builtin cd from my script. Consider this cd() { echo "muahaha"; }export -f cd Any scripts called in this environment using cd will break (consider the effects of something like cd dir && rm -rf . ). There are commands to check the type of a command (conveniently called type ) and commands for executing the builtin version rather than a function ( builtin and command ). But, lo and behold, these can be overridden using functions as well builtin() { "$@"; }command() { "$@"; }type() { echo "$1 is a shell builtin"; } Will yield the following: $ type cdcd is a shell builtin$ cd xmuahaha$ builtin cd xmuahaha$ command cd xmuahaha Is there any way to safely force bash to use the builtin command, or at least detect that a command isn't a builtin, without clearing the entire environment? I realize if someone controls your environment you're probably screwed anyway, but at least for aliases you've got the option to not call the alias by inserting a \ before it. | Olivier D is almost correct, but you must set POSIXLY_CORRECT=1 before running unset . POSIX has a notion of Special Built-ins , and bash supports this . unset is one such builtin. Search for SPECIAL_BUILTIN in builtins/*.c in the bash source for a list, it includes set , unset , export , eval and source . $ unset() { echo muahaha-unset; }$ unset unsetmuahaha-unset$ POSIXLY_CORRECT=1$ unset unset The rogue unset has now been removed from the environment, if you unset command , type , builtin then you should be able to proceed, but unset POSIXLY_CORRECT if you are relying on non-POSIX behaviour or advanced bash features. This does not address aliases though, so you must use \unset to be sure it works in interactive shell (or always, in case expand_aliases is in effect). For the paranoid, this should fix everything, I think: POSIXLY_CORRECT=1\unset -f help read unset\unset POSIXLY_CORRECTre='^([a-z:.\[]+):' # =~ is troublesome to escapewhile \read cmd; do [[ "$cmd" =~ $re ]] && \unset -f ${BASH_REMATCH[1]}; done < <( \help -s "*" ) ( while , do , done and [[ are reserved words and don't need precautions.)Note we are using unset -f to be sure to unset functions, although variables and functions share the same namespace it's possible for both to exist simultaneously (thanks to Etan Reisner) in which case unset-ing twice would also do the trick. You can mark a function readonly, bash does not prevent you unsetting a readonly function up to and including bash-4.2, bash-4.3 does prevent you but it still honours the special builtins when POSIXLY_CORRECT is set. A readonly POSIXLY_CORRECT is not a real problem, this is not a boolean or flag its presence enables POSIX mode, so if it exists as a readonly you can rely on POSIX features, even if the value is empty or 0. You'll simply need to unset problematic functions a different way than above, perhaps with some cut-and-paste: \help -s "*" | while IFS=": " read cmd junk; do echo \\unset -f $cmd; done (and ignore any errors) or engage in some other scriptobatics . Other notes: function is a reserved word, it can be aliased but not overridden with a function. 
(Aliasing function is mildly troublesome because \function is not acceptable as a way of bypassing it) [[ , ]] are reserved words, they can be aliased (which will be ignored) but not overridden with a function (though functions can be so named) (( is not a valid name for a function, nor an alias | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66346/"
]
} |
188,352 | -bash-3.2$ pstree 27108Script.sh---java---15*[{java}] What does this 15* mean here, also any meanings defined for [] & {} in the context of command? | Its right there in the man page: pstree shows running processes as a tree. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at pro- cesses owned by that user are shown. pstree visually merges identical branches by putting them in square brackets and prefixing them with the repetition count, e.g. init-+-getty |-getty |-getty ‘-getty becomes init---4*[getty] Child threads of a process are found under the parent process and are shown with the process name in curly braces, e.g. icecast2---13*[{icecast2}] In your case, the process 27108 was started by the script Script.sh . The Script.sh created a java process which spawned another 15 java threads. A ps -eLf | grep java | wc -l should returned you a count around the number 15. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74910/"
]
} |
188,367 | I wanted to know the names of all the devices on the network. I've already tried without great success many commands found on the web, but nothing worked like I wanted to. Basically, when I enter my router settings, I can get the devices' names that are connected to my net. I can get it also on some applications so I guess it can be done in some way. I want a list of names of all devices connected to my Wi-Fi network via the commandline. Thanks pi@raspberrypi ~ $ nmap -sP 192.168.4.0/24Starting Nmap 6.00 ( http://nmap.org ) at 2015-03-05 13:55 UTCNmap scan report for 192.168.4.1Host is up (0.0055s latency).Nmap scan report for 192.168.4.2Host is up (0.42s latency).Nmap scan report for 192.168.4.4Host is up (0.045s latency).Nmap scan report for 192.168.4.5Host is up (0.47s latency).Nmap scan report for 192.168.4.6Host is up (0.0032s latency).Nmap scan report for 192.168.4.7Host is up (0.79s latency).Nmap scan report for 192.168.4.8Host is up (0.0024s latency).Nmap scan report for 192.168.4.9Host is up (0.038s latency).Nmap scan report for 192.168.4.10Host is up (0.034s latency).Nmap scan report for 192.168.4.11Host is up (0.029s latency).Nmap scan report for 192.168.4.22Host is up (0.12s latency).Nmap scan report for 192.168.4.27Host is up (0.031s latency).Nmap scan report for 192.168.4.28Host is up (0.012s latency).Nmap scan report for 192.168.4.100Host is up (0.0038s latency).Nmap done: 256 IP addresses (14 hosts up) scanned in 49.30 seconds | I tend to use fing-cli from fing development toolkit for this. It is a scanner that scans the subnet you are on and it tries to extract hostnames and display them alongside ip and MAC. You should install it manually from the site, or with your package manager ( brew install fing-cli ) and run with administrator privileges to make it work. Ex: sudo fing 14:19:05 > Discovery profile: Default discovery profile14:19:05 > Discovery class: data-link (data-link layer)14:19:05 > Discovery on: 192.168.1.0/2414:19:05 > Discovery round starting.14:19:05 > Host is up: 192.168.1.151 HW Address: XX:XX:XX:XX:XX:XX Hostname: My-laptop-hostname14:19:05 > Host is up: 192.168.1.1 HW Address: YY:YY:YY:YY:YY:YY Hostname: router.asus.com14:19:06 > Discovery progress 25%14:19:07 > Discovery progress 50%14:19:08 > Discovery progress 75%14:19:05 > Host is up: 192.168.1.10 HW Address: AA:BB:CC:DD:EE:FF (ASUSTek COMPUTER)14:19:05 > Host is up: 192.168.1.11 HW Address: GG:HH:II:JJ:KK:LL14:19:06 > Host is up: 192.168.1.99 HW Address: MM:NN:OO:PP:QQ:RR (Apple) Hostname: iPhoneOfSomeone As you can see not all devices give out their hostname; for example some peripherals like printers do not always provide hostnames, but most devices do.It even tries to guess the manufacturer by analysing the id-part of the MAC It runs on the Raspberry Pi, i installed it on mine a while ago and it works as expected. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61495/"
]
} |
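If installing fing is not possible, you can at least attach resolver-known names to the nmap sweep from the question; hosts without reverse DNS (or mDNS/hosts entries) will show ? :
nmap -sP 192.168.4.0/24 -oG - | awk '/Up$/ { print $2 }' |
while read -r ip; do
    name=$(getent hosts "$ip" | awk '{ print $2 }')
    printf '%s\t%s\n' "$ip" "${name:-?}"
done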
188,383 | Need to have the process of command typically an update to be shown using yad and at the same time log all the output to a given log file set up.This is what I have apt-get update | yad --width=400 --height=300 \--title="Updating debian package list ..." --progress \--pulsate --text="Updating debian package list ..." \--auto-kill --auto-close \--percentage=10 The above command creates a pulsating indicator for the process and closes on finish but I need to have it log all the output it gives I have tried apt-get update >>${logfile} 2>&1 | yad --width=400 --height=300 \--title="Updating debian package list ..." --progress \--pulsate --text="Updating debian package list ..." \--auto-kill --auto-close \--percentage=10 but that gives me an error and hangs from there on no dialog and no logging just freezing. Here is the error GLib-CRITICAL **: g_source_remove: assertion `tag > 0' failed Help appreciated | The error is because you are redirecting all output to $logfile so there is no output for yad to process. The tool you're looking for is tee : NAME tee - read from standard input and write to standard output and filesSYNOPSIS tee [OPTION]... [FILE]...DESCRIPTION Copy standard input to each FILE, and also to standard output. So, you could do: apt-get update 2>&1 | tee -a ${logfile} | yad --width=400 --height=300 \ --title="Updating debian package list ..." --progress \ --pulsate --text="Updating debian package list ..." \ --auto-kill --auto-close \ --percentage=10 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96711/"
]
} |
188,393 | So far I know,the /dev/urandom file is one of the special files, it's purpose is to generate random characters.When I execute cat /dev/urandom a stream of strange characters, some even Chinese ideograms are displayed continuously.However, if I pipe this stream of strange characters into tr with the option -dc it makes a random stream of 0 and 1 or whatever characters are put into quotation marks in tr -dc "setofcharacters" . I tried to read the manual for tr, but under -d and -c I get explanations that I do not understand or could make sense of, like -c, -C, --complement use the complement of SET1 -d, --delete delete characters in SET1, do not translate could someone please be so kind and deliver a step by step explanation of the logic behind cat /dev/urandom | tr -dc "01" | I think that the biggest problem is to understand what “complement” means in the description of the -c option. It refers to complement as in set theory, read about it on Wikipedia : In set theory, a complement of a set A refers to things not in (that is, things outside of) A. Complement of set 01 means all characters except 0 and 1. Thus, the -d option will remove all characters that are neither 0 nor 1. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
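Two concrete uses of the pipeline; head is what stops the otherwise endless stream, and the set in quotes is simply the alphabet that survives -dc :
tr -dc '01' < /dev/urandom | head -c 32; echo          # 32 random bits as 0/1
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo   # 16-char random token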
188,402 | I have a program which outputs dates in the following form: Thu Mar 5 09:15:27 2015 This is very close to the output of date in Linux, but this format does not include time zone. Assuming I can capture the string Thu Mar 5 09:15:27 2015 as date1 and the string Thu Mar 5 09:30:58 2015 as date2 in bash , how can I get the number of seconds between those two dates (without having to write my own bash/python/etc. script to do the calculation)? | Not sure what you mean by "without calculation". The following does the calculation... date1="Thu Mar 5 09:15:27 2015"date2="Thu Mar 5 09:30:58 2015"printf "%s\n" $(( $(date -d "$date2" "+%s") - $(date -d "$date1" "+%s") )) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32234/"
]
} |
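The same computation wrapped as a small function; like the answer it relies on GNU date -d , which is standard on Linux:
# seconds_between "Thu Mar 5 09:15:27 2015" "Thu Mar 5 09:30:58 2015"  -> 931
seconds_between() {
    local t1 t2
    t1=$(date -d "$1" +%s) || return 1   # fails loudly on unparsable dates
    t2=$(date -d "$2" +%s) || return 1
    echo $(( t2 - t1 ))
}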
188,501 | How do I temporarily disable one or more users' cron jobs? In particular, I do not want to suspend the user's cron rights - merely not fire any of their jobs. I am on SLES 11 SP2 and SP3 systems | touch /var/spool/cron/crontabs/$username; chmod 0 /var/spool/cron/crontabs/$username should do the trick. Restore with chmod 600 and touch (you need to change the file's mtime to make cron (attempt to) reload it). On at least Debian and probably with Vixie cron in general, chmod 400 /var/spool/cron/crontabs/$username also does the trick, because that implementation insists on permissions being exactly 600. However this only lasts until the user runs the crontab command. If you want a robust way, I don't think there's anything better than temporarily moving their crontab out of the way or changing the permissions, and temporarily adding them to /etc/cron.deny . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91672/"
]
} |
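The recipe wrapped into a pair of root-run helpers; the spool path below is the common Vixie layout, and on SUSE systems it is /var/spool/cron/tabs instead, so adjust to taste:
SPOOL=/var/spool/cron/crontabs   # SUSE: /var/spool/cron/tabs

cron_disable() {                 # usage: cron_disable username
    chmod 0 "$SPOOL/$1" && touch "$SPOOL/$1"
}
cron_enable() {
    chmod 600 "$SPOOL/$1" && touch "$SPOOL/$1"
}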
188,516 | From my home pc using putty, I ssh'ed into a remote server, and I ran a python program that takes hours to complete, and as it runs it prints stuff. Now after a while, my internet disconnected, and I had to close and re-open putty and ssh back in. If I type 'top' I can see the python program running in the background with its PID number. Is there a command I can use to basically re-open that process and see it printing its stuff again? Thanks | Not in general, no. The program's stdout was attached to the pseudo-terminal of your old SSH session; when that session died, anything printed afterwards went nowhere, and there is no buffer you can replay. (It is mildly surprising the process survived at all, since it would normally receive SIGHUP.) If the process is still running, you may be able to steal it into your new terminal with the reptyr tool ( reptyr PID ), provided it is installed and ptrace is permitted on your system (check /proc/sys/kernel/yama/ptrace_scope ); that reattaches the process to your terminal but does not recover output already lost. The robust fix is for next time: start long jobs inside a terminal multiplexer such as screen or tmux and reattach after a disconnect with screen -r or tmux attach , or run them with nohup yourprog > logfile 2>&1 & and follow the output later with tail -f logfile . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55077/"
]
} |
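Both halves of the answer in command form, assuming reptyr and screen are installed (package names vary by distribution); the PID is a made-up example:
# rescue: pull the running process into the current terminal
reptyr 12345
# if reptyr complains about ptrace, root can relax it temporarily:
#   echo 0 > /proc/sys/kernel/yama/ptrace_scope

# prevention: run the next long job under screen
screen -S longjob                # start a named session
python myscript.py               # run inside it; detach with Ctrl-a d
screen -r longjob                # reattach after any disconnect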
188,536 | I have a script that will pipe its output to |tee scriptnameYYMMDD.txt . After each cycle of the for loop in which the output is generated, I'll be reversing the file contents with tac scriptnameYYYYMMDD.txt > /var/www/html/logs/scriptname.txt so that the log output is visible in a browser window with the newest lines at the top. I'll have several scripts doing this in parallel. I'm trying to minimize the disk activity, so output from |tee scriptnameYYYYMMDD.txt to a RAMdisk would be best. mktemp creates a file in the /tmp folder, but that doesn't appear to be off-disk. | You can mount a tmpfs partititon and write the file there: mount -t tmpfs -o size=500m tmpfs /mountpoint This partition now is limited to 500 MB. If your temporary file grows larger than 500 MB an error will occur: no space left on device . But, it doesn't matter when you specify a larger amount of space than your systems RAM has. tmpfs uses swap space too, so you cannot force a system crash, as opposed to ramfs . You can now write your file into /mountpoint : command | tee /mountpoint/scriptnameYYYYMMDD.txt | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38537/"
]
} |
188,553 | When in nautilus or caja I click on the icon of an encrypted disk and enter my password, the underlying block device gets mapped to /dev/mapper/luks-$UUID and it gets mounted at /media/$USER/$DISK , no root password required.Is there a way to invoke this process from the command line, without GUI,including obviating sudo and having the mountpoint able to get unmounted again from GUI. | I don't know of a single-command way to do this. The GUI programs are doing a fair bit of interrogation of the disk to take the "right" approach and you'll need to do some of that work yourself. You don't need sudo, though, and I think the resulting sequence of events is relatively painless. The Short Answer Use udisksctl from the udisks2 package: udisksctl unlock -b /path/to/disk/partitionudisksctl mount -b /path/to/unlocked/device Your user account will need to be appropriately authorized in order for the above to work. On Debian and Ubuntu, that means adding your account to the plugdev group. When you're done with the disk: udisksctl unmount -b /path/to/unlocked/deviceudisksctl lock -b /path/to/disk/partitionudisksctl power-off -b /path/to/disk/or/partition How to Set Things Up Here's how you can set things up (via the command line) to make the process of using the disk as painless as possible. I'll assume you want to use the entirety of the USB drive as a single filesystem. Other configurations will require modifications to the instructions. Caveat on variations: I haven't found a way to use LVM in the encrypted container that will allow an unprivileged account to disconnect everything. (I don't see a way to deactivate a volume group via udisksctl .) For purposes of illustration, we'll say that the disk is /dev/sda . You'll need a name for the filesystem to make it easier to reference later. I'll use " example ". Partition the Disk Run sudo parted /dev/sda and run the following commands: mklabel gptmkpart example-part 1MiB -1squit The mkpart command will probably prompt you to adjust the parameters slightly. You should be okay accepting its recommended numbers. The partition will now be available via /dev/disk/by-partlabel/example-part . Create and Mount the LUKS Partition sudo cryptsetup luksFormat /dev/disk/by-partlabel/example-part Go through the prompts. sudo cryptsetup luksOpen /dev/disk/by-partlabel/example-part example-unlocked The encrypted device is now available at /dev/mapper/example-unlocked . This is not going to be a permanent thing; it's just for the setup process. Create Your Filesystem Let's assume that the filesystem you're using is XFS. Pretty much any other traditional filesystem will work the same way. The important thing is to add a label that you can reference later: sudo mkfs -t xfs -L example /dev/mapper/example-unlocked The filesystem's block device can now be accessed via /dev/disk/by-label/example . Set Filesystem Permissions By default, the filesystem will be only accessible by root. In most cases, you probably want the files to be accessible by your user account. Assuming your account name is " user ": udisksctl mount -b /dev/disk/by-label/examplesudo chown user:user /media/user/example Close Everything Down udisksctl unmount -b /dev/disks/by-label/examplesudo cryptsetup luksClose example-unlocked Use Your Filesystem This is what you'll do regularly. 
After plugging in the USB drive, udisksctl unlock -b /dev/disks/by-partlabel/example-partudisksctl mount -b /dev/disks/by-label/example If your user account is " user ", the filesystem will now be mounted at /media/user/example . To unmount the filesystem: udisksctl unmount -b /dev/disks/by-label/exampleudisksctl lock -b /dev/disks/by-partlabel/example-partudisksctl power-off -b /dev/disks/by-partlabel/example-part Now you can disconnect the USB drive. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
188,555 | I have a dedicated server with Ubuntu 14.10 installed. The server should connect to an OpenVPN server and the traffic should only go through the VPN, except for SSH traffic. My idea is to implement this with iptables , but I am not a specialist. What exactly should be handled with iptables ? Supposing below are the basic conditions: Allow only traffic through VPN. When my server loses connection to the VPN, there should be no traffic leak. Allow SSH without VPN. I want to connect to my server with SSH and its normal IP from the server provider (but only SSH traffic). Since I am not the only user of the VPN, I want to hide my server in the VPN from other computers. I started to create my iptables rules but it always blocks all my connections: # flush old rulesiptables -F# accept SSH traffic with non vpn connectioniptables -A INPUT -d X.X.X.X -p tcp --dport 22 -j ACCEPT iptables -A OUTPUT -s X.X.X.X -p tcp --sport 22 -j ACCEPT # block everything except my rulesiptables -P INPUT DROPiptables -P FORWARD DROPiptables -P OUTPUT DROP# allow loopbackiptables -A INPUT -i lo -j ACCEPTiptables -A OUTPUT -o lo -j ACCEPT# allow vpniptables -A INPUT -j ACCEPT -p udp -s Y.Y.Y.Y --sport 1194iptables -A OUTPUT -j ACCEPT -p udp -d Y.Y.Y.Y --dport 1194 X.X.X.X is the server IP from the provider. Y.Y.Y.Y is the IP of the VPN server. The rules always kick me out of my current SSH connection and I can't create new SSH connection, although it should accept traffic through the port 22. | The usual way to build a "killswitch" like this is not to enumerate addresses and ports in both directions by hand, but to (1) accept anything that belongs to an already-accepted connection via the state (conntrack) match, (2) accept everything that leaves through the VPN's tun interface, and (3) explicitly allow only the two things that must bypass the tunnel: inbound SSH on the real interface, and the OpenVPN handshake itself (UDP 1194 to the VPN server). With a default DROP policy on OUTPUT, traffic that is not routed into tun0 simply dies whenever the VPN is down, which gives you the no-leak property; and hiding from other VPN clients is just a matter of not adding any INPUT accept on tun0. Why your exact rule set locks you out is hard to say without the packet counters from iptables -vnL ; common culprits are the SSH connection actually arriving on an address other than X.X.X.X, DNS being blocked so the OpenVPN server name never resolves, or related packets (ICMP, fragments) being dropped because only the two port-matching rules exist. The stateful layout sidesteps all three; a concrete rule sketch follows after this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105586/"
]
} |
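A minimal sketch of that stateful layout, assuming the VPN comes up as tun0 and the provider interface is eth0 (both names are assumptions; check with ip link ). Apply it from a console or with a scheduled rollback, since a typo in a default-DROP rule set locks you out:
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# loopback
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# anything belonging to a connection we already accepted, both directions
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# inbound SSH on the provider interface; replies are covered by the state rule
iptables -A INPUT  -i eth0 -p tcp --dport 22 -j ACCEPT
# the OpenVPN handshake must bypass the tunnel
iptables -A OUTPUT -o eth0 -p udp -d Y.Y.Y.Y --dport 1194 -j ACCEPT
# if the client config names the server by hostname, DNS must also bypass it:
# iptables -A OUTPUT -o eth0 -p udp --dport 53 -j ACCEPT
# everything else may only leave through the VPN
iptables -A OUTPUT -o tun0 -j ACCEPT
# deliberately no "-A INPUT -i tun0" accept: other VPN clients cannot reach us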
188,556 | Where can I find the syslog config file under SLES 12? rsyslog and syslog-service are installed according to YaST2 and rcsyslog status outputs: ServerName:~ # rcsyslog statusUsage: /sbin/rcsyslog {start|stop|status|try-restart|restart|force-reload|reload}rsyslog.service - System Logging Service Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled) Active: active (running) since Wed 2015-03-04 16:05:46 CET; 1 day 17h ago Main PID: 787 (rsyslogd) CGroup: /system.slice/rsyslog.service └─787 /usr/sbin/rsyslogd -n | Your status output shows it is rsyslog that is running, so the configuration lives in /etc/rsyslog.conf , which typically pulls in drop-in files from /etc/rsyslog.d/*.conf via an include directive (look for $IncludeConfig in the main file). SUSE releases also keep some logging settings in /etc/sysconfig/syslog . After editing, you can validate the syntax with rsyslogd -N1 and apply the change with systemctl restart rsyslog ; the rcsyslog wrapper you already used is just a compatibility alias for that on SLES 12. Note that on a systemd system rsyslog receives its input from journald, so if messages seem to be missing it may be a journald forwarding question rather than an rsyslog one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94195/"
]
} |
188,557 | This is the first time this has happened to me. I don't use Linux on a regular basis, except for some project work. I had logged in today, finished my work and restarted it so I could go into Windows (I have both Windows and Linux in my computer). But when I rebooted it gave me this error: error: attempt to read or write outside of partition. grub rescue> I have not tweaked with the software. As I said I don't use it so frequently. Please help me resolve this problem. | That message generally means GRUB's recorded location of its own files no longer matches the disk, typically after a partition has been moved, resized or rewritten, so it ends up trying to read blocks past the end of the partition it believes it is on. The reliable fix is to reinstall the boot loader from a live system: boot any Linux live USB/CD, mount your Linux root partition (and /boot if it is separate), chroot into it, and run grub-install against the whole disk, followed by regenerating the menu ( update-grub on Debian/Ubuntu-style systems, grub2-mkconfig -o /boot/grub2/grub.cfg on others). A sketch of the exact commands follows after this record; the device names there are examples, so check yours with lsblk or fdisk -l first. On Ubuntu-family systems the Boot-Repair live tool automates the same steps if you prefer a GUI. Your Windows installation is not affected by this; once GRUB is reinstalled its menu should again offer both systems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105587/"
]
} |
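The chroot-and-reinstall recipe referenced above, assuming the Linux root is /dev/sda5 and GRUB belongs in the MBR of /dev/sda; both device names are examples, so verify yours with lsblk or fdisk -l before running anything:
# from a live session, as root
mount /dev/sda5 /mnt                  # your Linux root partition
# mount /dev/sda1 /mnt/boot           # only if /boot is a separate partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
grub-install /dev/sda                 # install to the disk, not a partition
update-grub                           # or: grub2-mkconfig -o /boot/grub2/grub.cfg
exit
umount -R /mnt                        # then reboot without the live medium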
188,575 | I'm trying to use du to investigate disk usage in a directory like so: du -hd1 | sort -rh This gives me a list that starts as follows 61G .7.9G ./A5.1G ./B2.7G ./.C1.6G ./.D1.2G ./.E1.2G ./F850M ./.G724M ./H666M ./I281M ./J249M ./.K150M ./.L The rest of the list sums up to less than 1GB and there are no large files directly contained in that directory: ls -Slhtotal 1.8M... What is the source of the discrepancy between the total 61GB and the sum of less than 25GB of the directory sums? | The calls above miss large hidden files. Here is the result with du -a du -ahd1 . | sort -rh | head61G .38G ./.xsession-errors7.9G ./A5.1G ./B... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105598/"
]
} |
188,584 | On my PC I have to following routing table: Destination Gateway Genmask Flags MSS Window irtt Iface0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan0192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0 I don't understand how it is analyzed, I mean from top-down or bottom-up? If it is analyzed from top-down then everything will always be sent to the router in my home even though the IP destination was 192.168.1.15; but what I knew (wrongly?) was that if a PC is inside my same local network then once I recovered the MAC destination through a broadcast message then my PC could send directly the message to the destination. | The routing table is used in order of most specific to least specific. However on linux it's a bit more complicated than you might expect. Firstly there is more than one routing table, and when which routing table is used is dependent on a number of rules. To get the full picture: $ ip rule show0: from all lookup local 32766: from all lookup main 32767: from all lookup default$ ip route show table localbroadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 192.168.0.0 dev eth0 proto kernel scope link src 192.168.1.27 local 192.168.1.27 dev eth0 proto kernel scope host src 192.168.1.27 broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.27 $ ip route show table maindefault via 192.168.1.254 dev eth0 192.168.0.0/23 dev eth0 proto kernel scope link src 192.168.1.27 $ ip route show table default$ The local table is the special routing table containing high priority control routes for local and broadcast addresses. The main table is the normal routing table containing all non-policy routes. This is also the table you get to see if you simply execute ip route show (or ip ro for short). I recommend not using the old route command anymore, as it only shows the main table and its output format is somewhat archaic. The table default is empty and reserved for post-processing if previous default rules did not select the packet. You can add your own tables and add rules to use those in specific cases. One example is if you have two internet connections, but one host or subnet must always be routed via one particular internet connection. The Policy Routing with Linux book explains all this in exquisite detail. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66485/"
]
} |
188,597 | The following function is called as the first line in every other function in order to handle optional debugging, context sensitive help, etc. Because of this, calling a function that in turn calls another function can (usually will) result in a circular reference. How can the circular reference be avoided without losing functionality? function fnInit () {
    ###
    ### on return from fnInit...
    ###   0 implies "safe to continue"
    ###   1 implies "do NOT continue"
    ###
    #
    local _fn=
    local _msg=
    #
    ### handle optional debugging, context sensitive help, etc.
    #
    [[ "$INSPECT" ]] && { TIMELAPSE= ...; }    ### fnInit --inspect
    #
    [[ "$1" == --help ]] && { ... ; return 1; }    ### fnInit --help
    #
    :
    #
    :
    return 0
}
| The routing table is used in order of most specific to least specific. However on linux it's a bit more complicated than you might expect. Firstly there is more than one routing table, and when which routing table is used is dependent on a number of rules. To get the full picture: $ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.0.0 dev eth0 proto kernel scope link src 192.168.1.27
local 192.168.1.27 dev eth0 proto kernel scope host src 192.168.1.27
broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.27
$ ip route show table main
default via 192.168.1.254 dev eth0
192.168.0.0/23 dev eth0 proto kernel scope link src 192.168.1.27
$ ip route show table default
$
The local table is the special routing table containing high priority control routes for local and broadcast addresses. The main table is the normal routing table containing all non-policy routes. This is also the table you get to see if you simply execute ip route show (or ip ro for short). I recommend not using the old route command anymore, as it only shows the main table and its output format is somewhat archaic. The table default is empty and reserved for post-processing if previous default rules did not select the packet. You can add your own tables and add rules to use those in specific cases. One example is if you have two internet connections, but one host or subnet must always be routed via one particular internet connection. The Policy Routing with Linux book explains all this in exquisite detail. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27437/"
]
} |
188,644 | Is there a way to monitor and get a listing of URLs accessed over HTTP by a process running on my local machine? Failing that, what would be closer to the above information? My use case is that I have an effectively opaque utility that I suspect is downloading some information over the web and I would like to see what exactly it is fetching over the network. | You can use tcpflow to do this. From the website: tcpflow is a program that captures data transmitted as part of TCP connections (flows), and stores the data in a way that is convenient for protocol analysis or debugging. A program like 'tcpdump' shows a summary of packets seen on the wire, but usually doesn't store the data that's actually being transmitted. In contrast, tcpflow reconstructs the actual data streams and stores each flow in a separate file for later analysis. It will dump logs into the current working directory in the format [ip].[port]-[ip].[port]. # mkdir http_logs
# cd http_logs
# tcpflow dst port 80
This example logs all TCP packets going to port 80, and saves them in the current directory for easy debugging. You can also filter further using pcap filters . You might also find the -a switch useful, which enables post processing. For example, it puts the HTTP headers and bodies in separate files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24044/"
]
} |
188,658 | I found the following command to repeat a character in Linux: printf 'H%.0s' {1..5000} > H.txt I want, for example, H to repeat 5000 times. What does %.0s mean here? | That command depends on the shell generating 5000 arguments, and passing them to printf which then ignores them. While it may seem pretty quick - and is relative to some things - the shell must still generate all of those strings as args (and delimit them) and so on. Besides the fact that the generated Hs can't be printed until the shell first iterates to 5000, that command also costs in memory all that it takes to store and delimit the numeric string arguments to printf plus the Hs. Just as simply you can do: printf %05000s|tr \  H ...which generates a string of 5000 spaces - which, at least, are usually only a single byte per and cost nothing to delimit because they are not delimited. A few tests indicate that even for as few as 5000 bytes the cost of the fork and the pipe required for tr is worth it even in this case, and it almost always is when the numbers get higher. I ran... time bash -c 'printf H%.0s {1..5000}' >/dev/null ...and... time bash -c 'printf %05000s|tr \  H' >/dev/null Each about 5 times a piece (nothing scientific here - only anecdotal) and the brace expansion version averaged a little over .02 seconds in total processing time, but the tr version came in at around .012 seconds total on average - and the tr version beat it every time. I can't say I'm surprised - {brace expansion} is a useful interactive shell shorthand feature, but is usually a rather wasteful thing to do where any kind of scripting is concerned. The common form: for i in {[num]..[num]}; do ... ...when you think about it, is really two for loops - the first is internal and implied in that the shell must loop in some way to generate those iterators before saving them all and iterating them again for your for loop. Such things are usually better done like: iterator=$start
until [ "$((iterator+=interval))" -gt "$end" ]; do ...
...because you store only a very few values and overwrite them as you go as well as doing the iteration while you generate the iterables. Anyway, like the space padding mentioned before, you can also use printf to zeropad an arbitrary number of digits, of course, like: printf %05000d I do both without arguments because for every argument specified in printf 's format string when an argument is not found the null string is used - which is interpreted as a zero for a digit argument or an empty string for a string. This is the other (and - in my opinion - more efficient) side of the coin when compared with the command in the question - while it is possible to get nothing from something as you do when you printf %.0 length strings for each argument, so also is it possible to get something from nothing. Quicker still for large amounts of generated bytes you can use dd like: printf \\0| dd bs=64k conv=sync ...and w/ regular files dd 's seek=[num] argument can be used to greater advantage. You can get 64k newlines rather than nulls if you add ,unblock cbs=1 to the above and from there could inject arbitrary strings per line with paste and /dev/null - but in that case, if it is available to you, you might as well use: yes 'output string forever' Here are some more dd examples anyway: dd bs=5000 seek=1 if=/dev/null of=./H.txt ...which creates (or truncates) a \0NUL filled file in the current directory named H.txt of size 5000 bytes. dd seeks straight to the offset and NUL-fills all behind it.
<&1 dd bs=5000 conv=sync,noerror count=1 | tr \\0 H >./H.txt ...which creates a file of same name and size but filled w/ H chars. It takes advantage of dd 's spec'd behavior of writing out at least one full null-block in case of a read error when noerror and sync conversions are specified (and - without count= - would likely go on longer than you could want) , and intentionally redirects a writeonly file descriptor at dd 's stdin. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83681/"
]
} |
188,659 | I have gone through plenty of documents and spent a lot of time trying to configure it, but so far unsuccessfully. I have a keyboard with 3rd level keys (probably ISO_LEVEL3_shift ?). So when I press Caps Lock and at the same time A , I get an a-acute (á). Is it possible to map the level 3 shift key to Space instead of Caps Lock ? I imagine it as: when it is pressed together with another key, it acts as level 3 shift, otherwise it is just space. I am not against experimenting, but show me, please, at least a direction (if it is possible). | That command depends on the shell generating 5000 arguments, and passing them to printf which then ignores them. While it may seem pretty quick - and is relative to some things - the shell must still generate all of those strings as args (and delimit them) and so on. Besides the fact that the generated Hs can't be printed until the shell first iterates to 5000, that command also costs in memory all that it takes to store and delimit the numeric string arguments to printf plus the Hs. Just as simply you can do: printf %05000s|tr \  H ...which generates a string of 5000 spaces - which, at least, are usually only a single byte per and cost nothing to delimit because they are not delimited. A few tests indicate that even for as few as 5000 bytes the cost of the fork and the pipe required for tr is worth it even in this case, and it almost always is when the numbers get higher. I ran... time bash -c 'printf H%.0s {1..5000}' >/dev/null ...and... time bash -c 'printf %05000s|tr \  H' >/dev/null Each about 5 times a piece (nothing scientific here - only anecdotal) and the brace expansion version averaged a little over .02 seconds in total processing time, but the tr version came in at around .012 seconds total on average - and the tr version beat it every time. I can't say I'm surprised - {brace expansion} is a useful interactive shell shorthand feature, but is usually a rather wasteful thing to do where any kind of scripting is concerned. The common form: for i in {[num]..[num]}; do ... ...when you think about it, is really two for loops - the first is internal and implied in that the shell must loop in some way to generate those iterators before saving them all and iterating them again for your for loop. Such things are usually better done like: iterator=$start
until [ "$((iterator+=interval))" -gt "$end" ]; do ...
...because you store only a very few values and overwrite them as you go as well as doing the iteration while you generate the iterables. Anyway, like the space padding mentioned before, you can also use printf to zeropad an arbitrary number of digits, of course, like: printf %05000d I do both without arguments because for every argument specified in printf 's format string when an argument is not found the null string is used - which is interpreted as a zero for a digit argument or an empty string for a string. This is the other (and - in my opinion - more efficient) side of the coin when compared with the command in the question - while it is possible to get nothing from something as you do when you printf %.0 length strings for each argument, so also is it possible to get something from nothing. Quicker still for large amounts of generated bytes you can use dd like: printf \\0| dd bs=64k conv=sync ...and w/ regular files dd 's seek=[num] argument can be used to greater advantage.
You can get 64k newlines rather than nulls if you add ,unblock cbs=1 to the above and from there could inject arbitrary strings per line with paste and /dev/null - but in that case, if it is available to you, you might as well use: yes 'output string forever' Here are some more dd examples anyway: dd bs=5000 seek=1 if=/dev/null of=./H.txt ...which creates (or truncates) a \0NUL filled file in the current directory named H.txt of size 5000 bytes. dd seeks straight to the offset and NUL-fills all behind it. <&1 dd bs=5000 conv=sync,noerror count=1 | tr \\0 H >./H.txt ...which creates a file of same name and size but filled w/ H chars. It takes advantage of dd 's spec'd behavior of writing out at least one full null-block in case of a read error when noerror and sync conversions are specified (and - without count= - would likely go on longer than you could want) , and intentionally redirects a writeonly file descriptor at dd 's stdin. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/188659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91007/"
]
} |
188,667 | I cd to a directory that has a large number of files, 40,948 to be exact. When I issue ls it takes a long time and before the command finally prints the results to the screen it tells me "ls: cannot access : No such file or directory" for a few files. It's not the same file names every time. When I do ls -l some files have no posix permissions just "???????" preceding the file name. I tried to chown -R the directory because I wanted to change the group and I get a mixture of "chown: cannot access" or "chown: changing ownership of" + ": No such file or directory" on even more files than when I use ls . This highlighted for me that I don't even know if a directory in Linux has a table of contents, I thought that maybe the toc was corrupted. But the fact that the results are always different suggests otherwise. Could this be a "nofile" issue in /etc/security/limits.conf ? | It sounds to me like it is in heavy use by other processes. Gathering the information you see is not necessarily atomic. It might get a list of filenames, then go and look up the information on the file (either for a long listing or to color the output). If the file is deleted between those two actions, then you'll get output similar to what you describe. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89086/"
]
} |
188,674 | I have a set of Linux folders and I need to get the permissions of the folders numerically. For example, in the below directory I need to know what the permission value of the folder is... whether 755, 644 or 622 etc... drwxrwsr-x 2 dev puser 4096 Jul 7 2014 fonts | To get the octal permission notation: stat -c "%a" file
644
See the manpage of stat; -c specifies the format and %a prints the permissions in octal. Or for multiple files and folders: stat -c "%a %n" *
755 dir
644 file1
600 file2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65920/"
]
} |
188,715 | I'm learning file operation calls under Linux. The read() and write() and many other functions use cache to increase performance, and I know fsync() can transfer data from cache to disk device. However, is there any commands or system calls that can determine whether the data is cached or written to disk? | Read data is (directly) read from the cache only if it is already there. That implies that cached data was previously accessed by a process and kept in cache. There is no system call or any method for a process to know if some piece of data to be read is already in cache or not. On the other hand, a process can select if it wants written data to be immediately stored on the disk or only after a variable delay which is the general case. This is done by using the O_SYNC flag when opening the file. There is also the O_DIRECT flag which when supported force all I/Os to bypass the read and write cache and go directly to the disk. Finally, the hard-disk itself is free to implement its own cache so even after a synchronous write call has returned, there is no guarantee data is already on the disk platters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105696/"
]
} |
188,721 | I have a linux machine and a windows machine; the linux machine has a samba share with a .exe file on it. I can read and write files from the windows machine to the samba share, but I cannot execute the .exe file. How can I set up samba to allow me to execute it? | This behavior is because of a security policy of modern Samba. Fix it by adding this line to your /etc/samba/smb.conf under the [global] section: [global]
acl allow execute always = True
Source: Samba's Wiki . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105705/"
]
} |
188,733 | How can I make the following command substitution work? $ time
real    0m0.000s
user    0m0.000s
sys     0m0.000s
$ oldtime="$(time)"
bash: command substitution: line 23: syntax error near unexpected token `)'
bash: command substitution: line 23: `time)"'
I guess it doesn't work because the output of the command has multiple lines, because one-line output works: $ oldtime="$(echo hello)"
$ echo $oldtime
hello | This behavior is because of a security policy of modern Samba. Fix it by adding this line to your /etc/samba/smb.conf under the [global] section: [global]
acl allow execute always = True
Source: Samba's Wiki . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
188,737 | In rsync , --compress or -z will compress file data during the transfer. If I understand correctly, it compresses files before transfer and then decompresses them after transfer. Does the time saved during transfer due to compression outweigh the time spent on compression and decompression? Does the answer to the question depend on whether I back up to an external HDD via usb (2.0 or 3.0), or to a server by SSH over the Internet? | It's a general question. Does compression and decompression at endpoints improve the effective bandwidth of a link? The effective (perceived) bandwidth of a link doing compression and decompression at endpoints is a function of: how fast you can compress (your CPU speed) your network's actual bandwidth The function is described with this 3D graph, which you might want to consult for your particular situation: The graph originates with the Compression Tools Compared 2005 article by http://www.linuxjournal.com/ . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/188737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
188,741 | How can I install extra versions of python on Debian (jessie). On Ubuntu I can use the "deadsnakes" PPA which will give me any python version I want, with the version name in the command (e.g. python33 for python 3.3). This allows me to install them all beside each other. I can use virtualenvs to install specific python packages for specific versions without messing with the system packages. I maintain some python libraries, and they need to work on many versions of python. If I have the python binary installed, then tox will take care of using virtualenvs for each python version. So what's the debian equivalent of Ubuntu's deadsnakes PPA? UPDATE I want to install python: 2.6, 2.7, 3.3, 3.4 and 3.5. | Using the PPA You can use the PPA on Debian. Pick an Ubuntu version that's from slightly before your Debian version, and it should have all the necessary libraries. For wheezy, the oneiric PPA seems ok (but it lacks more recent Python versions). For jessie, the trusty PPA should work. To add a PPA on Debian, download and add the PPA signing key with: gpg --keyserver keyserver.ubuntu.com --recv-keys F23C5A6CF475977595C89F51BA6932366A755776
gpg --export F23C5A6CF475977595C89F51BA6932366A755776 | sudo tee /usr/share/keyrings/ppa-deadsnakes.gpg > /dev/null
Then create a file /etc/apt/sources.list.d/ppa-deadsnakes.list containing: deb [signed-by=/usr/share/keyrings/ppa-deadsnakes.gpg] https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu/ trusty main
deb-src [signed-by=/usr/share/keyrings/ppa-deadsnakes.gpg] https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu/ trusty main
Finally run apt-get update and install the desired packages. If you can't get the PPA working for some reason (maybe you can't find a version that works with the libraries you have), you can download the source and recompile them for your distribution. Using a chrooted system What I usually do to test compatibility with other versions is to run older or newer distributions in a chrooted system. For example, you could install various versions of Ubuntu with the Python versions you're interested in, or you could install trusty in a chroot and install the PPA there. For more information, see my schroot guide . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4691/"
]
} |
188,744 | The implementation of traceroute(tracert) differs on Windows and Unix. I wanted to compare both with Wireshark. I am on Windows 7 now and I wanted to get a Unix traceroute implementation quickly. My first idea was to get it using MSYS or Cygwin. I installed Cygwin with "inetutils*" packages checked, but there is no traceroute command and corresponding executable in /usr/bin/ . I also tried searching for "traceroute" with the Cygwin package search and found this substring in the list of "zsh" files. I installed zsh and tried traceroute and tcptraceroute with no results. Which package should I check for installation of traceroute, and is there a traceroute for Cygwin at all? | There is no traceroute in the Cygwin packages, because tracert is always available on Windows. See https://cygwin.com/ml/cygwin/2005-12/msg00443.html for a thread briefly discussing this. You can try compiling a Unix-style traceroute from source using Cygwin. If you want to compare Windows-style tracert to Unix-style traceroute though, I'd recommend running traceroute on Unix or Linux, because the network stacks are different; so running a Unix-style traceroute on Windows won't give you quite the same network traces as Unix-style traceroute on Unix. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85258/"
]
} |
188,829 | I need to convert a bunch of .wv files to .flac but I can't seem to find a program to do it. Does anybody know how I can do this? P.S.: I was wondering why Audacity does not support importing the .wv format if it is open source and lossless. Does anybody know? Update: Somewhere I read about converting .ape to .flac using ffmpeg , so I decided to try replacing the .ape with .wv and at first it seems to work but then I get this at the end: [wv @ 0x8e7c200] Invalid block header.te= 836.1kbits/s
audiofile.wv: Invalid data found when processing input
So my question is: what is wrong here? By the way, the command used was ffmpeg -i audiofile.wv audiofile.flac . Thanks for the help. | The ffmpeg error you're getting makes me think you might just have a corrupted file. You could try sox audiofile.wv audiofile.flac . Alternatively, you could use the wavpack tools: wvunpack audiofile.wv -o - | flac - -o audiofile.flac Note that this will not copy over any metadata; you'll need to do that separately. If even the wavpack tools can't successfully read the file, then your file is probably just corrupt. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105775/"
]
} |
188,836 | Why does Unix allow files with a period at the end of the name? Is there any use for this? For example: filename. I am asking because I have a simple function that echoes the extension of a file. ext() {
  echo ${1##*.}
}
But knowing it will print nothing if the filename ends in a . , I wondered whether it would be more reliable to write: ext() {
  extension=${1##*.}
  if [ -z "$extension" ]; then
    echo "$1"
  else
    echo "$extension"
  fi
}
Clearly this depends on what you are trying to accomplish, but if a . at the end of the file name were not allowed, I wouldn't have wondered anything in the first place. | Unix filenames are just sequences of bytes , and can contain any byte except / and NUL in any position. There is no built-in concept of an "extension" as there is in Windows and its filesystems, and so no reason not to allow filenames to end (or start) with any character that can appear in them generally — a . is no more special than an x . Why does Unix allow files with a period at the end of the name? "A sequence of bytes" is a simple and non-exclusionary definition of a name when there's no motivating reason to count something out, which there wasn't. Making and applying a rule to exclude something specifically is more work. Is there a use for it? If you want to make a file with that name, sure. Is there a use for a filename ending with x ? I can't say I would generally make a filename with a . at the end, but both . and x are explicitly part of the portable filename character set that is required to be universally supported, and neither is special in any way, so if I had a use for it (maybe for a mechanically-generated encoding) then I could, and I could rely on it working. As well, the special filenames . (dot) and .. (dot-dot), which refer to the current and parent directories, are mandated by POSIX, and both end with a . . Any code dealing with filenames in general needs to address those anyway. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90327/"
]
} |
188,903 | Reportedly, Debian devs need to squash 54 bugs more. These are termed as 'release critical bugs'. My question is, if this bug-squashing takes this much time, then how come Ubuntu releases each version in such a short time? I mean, how do they squash the bugs in this time period? And if they really do, then why doesn't Debian get the debugged code from Ubuntu? Shouldn't these "release critical bugs" be debugged by now? Since Ubuntu uses Debian's testing/unstable as base, and then make their release; and obviously Ubuntu doesn't release a buggy version. It just doesn't make sense to me. | The release process between Debian and Ubuntu is very different. Ubuntu releases are based on a time schedule (set release date), while Debian uses a "when it's ready" model. Here are some key points that make a difference in release speed: Most packages Ubuntu pulls in from Debian are not officially supported (universe repository) Ubuntu supports 2 architectures while Debian supports 13 (some release critical bugs are specific to an architecture) Ubuntu does not have a direct concept of a "release critical" bug, although it does have a "critical" bug severity Only every 4th Ubuntu release (LTS) is recommended for production use. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/188903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
188,930 | As far as I know, I can use the tee command to split the standard output onto the screen and further files: command -option1 -option2 argument | tee file1 file2 file3 Is it possible to redirect the output to commands instead of files using tee, so that I could theoretically create a chain of commands? | You could use named pipes ( http://linux.die.net/man/1/mkfifo ) on the command line of tee and have the commands reading on the named pipes. mkfifo /tmp/data0 /tmp/data1 /tmp/data2
cmd0 < /tmp/data0 & cmd1 < /tmp/data1 & cmd2 < /tmp/data2 &
command -option1 -option2 argument | tee /tmp/data0 /tmp/data1 /tmp/data2
When command finishes, tee will close the named pipes, which will signal an EOF (read of 0 bytes) on each of the /tmp/dataN which would normally terminate the cmdN processes. Real example: $ mkfifo /tmp/data0 /tmp/data1 /tmp/data2
$ wc -l < /tmp/data0 & wc -w < /tmp/data1 & wc -c < /tmp/data2 &
$ tee /tmp/data0 /tmp/data1 /tmp/data2 < /etc/passwd >/dev/null
$ 61
197
437
Because of the background processes, the shell returned a prompt before the program output. All three instances of wc terminated normally. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
188,943 | I'm new to embedded and am reading 'Embedded Linux Primer' at the moment. I tried to build an xscale arm kernel: make ARCH=arm CROSS_COMPILE=xscale_be- ixp4xx_defconfig
## configuration written to .config
followed by the make: ~/linux-stable$ make ARCH=arm CROSS_COMPILE=xscale_be- zImage
make: xscale_be-gcc: Command not found
  CHK     include/config/kernel.release
  CHK     include/generated/uapi/linux/version.h
  CHK     include/generated/utsrelease.h
make[1]: `include/generated/mach-types.h' is up to date.
  CC      kernel/bounds.s
/bin/sh: 1: xscale_be-gcc: not found
make[1]: *** [kernel/bounds.s] Error 127
make: *** [prepare0] Error 2
I had downloaded and extracted gcc-arm-none-eabi-4_9-2014q4 from https://launchpad.net/gcc-arm-embedded and set the path PATH=/opt/gcc-arm-none-eabi-4_9-2014q4/bin/ Do I need another compiler for the xscale architecture? Any ideas where I can find xscale_be-gcc? | You could use named pipes ( http://linux.die.net/man/1/mkfifo ) on the command line of tee and have the commands reading on the named pipes. mkfifo /tmp/data0 /tmp/data1 /tmp/data2
cmd0 < /tmp/data0 & cmd1 < /tmp/data1 & cmd2 < /tmp/data2 &
command -option1 -option2 argument | tee /tmp/data0 /tmp/data1 /tmp/data2
When command finishes, tee will close the named pipes, which will signal an EOF (read of 0 bytes) on each of the /tmp/dataN which would normally terminate the cmdN processes. Real example: $ mkfifo /tmp/data0 /tmp/data1 /tmp/data2
$ wc -l < /tmp/data0 & wc -w < /tmp/data1 & wc -c < /tmp/data2 &
$ tee /tmp/data0 /tmp/data1 /tmp/data2 < /etc/passwd >/dev/null
$ 61
197
437
Because of the background processes, the shell returned a prompt before the program output. All three instances of wc terminated normally. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/188943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105845/"
]
} |
189,023 | If I do gshuf -e $(seq 1 10) in bash it will print the numbers 1 thru 10 in random order. But if I do: a=$(shuf -e $(seq 1 10))
for i in "${a[@]}"
do
  echo $i
  echo "next"
done
It prints all ten numbers followed by "next". How do I loop over the output from shuf (or gshuf in os x)? The variable, a , seems to be a string, so I could split it. But that seems sort of sloppy. How do I get shuf to output an array? | You are using a scalar assignment. Either use an array a=( $(shuf -e $(seq 1 10)) )
for i in "${a[@]}"
do
  echo $i
  echo "next"
done
or let the shell split the scalar a=$(shuf -e $(seq 1 10))
for i in ${a}
do
  echo $i
  echo "next"
done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18366/"
]
} |
189,030 | In online tutorials it is often suggested to use the following command to copy a CDROM to an iso image: $ dd if=/dev/dvd of=foobar.iso bs=2048 Why must the byte size be specified? I notice that in fact 2048 is the standard byte size for CDROM images but it seems that dd without specifying bs= or count= works as well. Under what circumstances would it be problematic to not specify bs= or count= when copying from a device of finite size? | When is dd suitable for copying data? (or, when are read() and write() partial) points out an important caveat when using count : dd can copy partial blocks, so when given count it will stop after the given number of blocks, even if some of the blocks were incomplete. You may therefore end up with fewer than bs * count bytes copied, unless you specify iflag=fullblock . The default block size for dd is 512 bytes. count is a limit; as your question hints it isn't required when copying a device of finite size, and is really intended to copy only part of a device. I think there are two aspects to consider here: performance and data recovery. As far as performance is concerned, you ideally want the block size to be at least equal to, and a multiple of, the underlying physical block size (hence 2048 bytes when reading a CD-ROM). In fact nowadays you may as well specify larger block sizes to give the underlying caching systems a chance to buffer things for you. But increasing the block size means dd has to use that much more memory, and it could be counter-productive if you're copying over a network because of packet fragmentation. As far as data recovery is concerned, you may retrieve more data from a failing hard disk if you use smaller block sizes; this is what programs such as dd-rescue do automatically: they read large blocks initially, but if a block fails they re-read it with smaller block sizes. dd won't do this, it will just fail the whole block. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189030",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
189,050 | I wanted to download all graphic files from our organisation's graphics repository web page. They are Illustrator ( .ai) format and Corel Draw ( .cdr) format. They are directly hyperlinked (i.e. <a href="http://server/path-to-file.ai">...</a> . | wget includes features to support this directly: wget -r -A "*.ai,*.cdr" 'address-of-page-with-hyperlinks' -r enables recursive mode so it will download more than the given URL, and -A limits the files it will download and keep in the end. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44992/"
]
} |
189,069 | I have a folder called home/homeLife . I have files called home1 , home2 and home3 stored in /home . I want to move all files that start with home* to home/homeLife/ . I typed mv home* /home/homeLife
cannot move homeLife into subdirectory of itself
My question: How can I exclude directories? | With zsh , use glob qualifiers: mv home*(.) dst moves only regular files. While mv home*(^/) dst moves files of any type except directories. mv home*(^-/) dst would also exclude symlinks to directories. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105920/"
]
} |
189,073 | I know that contents of double quotes are expanded, whereas the contents of single quotes are not, such that echo '$1' gives $1 whereas echo "$1" gives <the argument - blank in this particular example> the same as echo $1 However, the question Bash: How to create an alias in .bashrc for awk with parameters led me to wonder why double quotes and not single quotes were used when declaring the alias, thus: alias cutf="_cutf" instead of alias cutf='_cutf' I would have employed single quotes as in the latter example. Most examples on the web use single too, unless there is some expansion required. However, in this case, no expansion, to my eyes, is apparent. Is it that, in this case, they are interchangeable, or are the double quotes necessary because a function definition is employed? | This is an answer to the question posed by the title (which brought me here), rather than to the OP's particular question. Single quotes are evaluated dynamically: alias QS='echo $PWD' Double quotes are evaluated at time of creation and, thereafter, never change: alias QD="echo $PWD" The same behaviour is found in bash, zsh and, I guess, other shells. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189073",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97255/"
]
} |
189,104 | Is there a way to back up and restore file ownership and permissions (the things that can be changed with chown and chmod )? You can do this in Windows using icacls . What about access control lists? | You can do this with the commands from the acl package (which should be available on all mainstream distributions, but might not be part of the base installation). They back up and restore ACL when ACL are present, but they also work for basic permissions even on systems that don't support ACL. To back up permissions in the current directory and its subdirectories recursively: getfacl -R . >permissions.facl To restore permissions: setfacl --restore=permissions.facl | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/189104",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3570/"
]
} |
189,211 | I am investigating how to have iproute2 commands replace the old ifconfig and ifup ifdown commands, and I found out something interesting. My NIC setup is: [16:07:41 root@vm network-scripts ]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
ONBOOT=no
BOOTPROTO=dhcp
To bring up and down an interface, the old way will be: ifup eth2 ifdown eth2
[16:25:10 root@vm network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
[16:25:14 root@vm network-scripts ]# ifup eth2
Determining IP information for eth2... done.
[16:25:22 root@vm network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global eth2
    inet6 fe80::a00:27ff:feb8:13b4/64 scope link
       valid_lft forever preferred_lft forever
[16:25:26 root@vm-cention network-scripts ]# ifdown eth2
[16:27:51 root@vm-cention network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
To use an iproute2 command to do this, normally we use ip link set eth2 up , but apparently iproute2 can only bring up the link layer of the NIC, not the IP address: [16:36:25 root@vm network-scripts ]# ip link set eth2 up
[16:37:16 root@vm network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:feb8:13b4/64 scope link
       valid_lft forever preferred_lft forever
[16:37:20 root@vm network-scripts ]# ping yahoo.com
ping: unknown host yahoo.com
But the traditional ifup can do that: [16:37:39 root@vm network-scripts ]# ifup eth2
Determining IP information for eth2... done.
[16:39:59 root@vm network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global eth2
    inet6 fe80::a00:27ff:feb8:13b4/64 scope link
       valid_lft forever preferred_lft forever
[16:40:04 root@vm network-scripts ]# ping yahoo.com
PING yahoo.com (98.139.183.24) 56(84) bytes of data.
64 bytes from ir2.fp.vip.bf1.yahoo.com (98.139.183.24): icmp_seq=1 ttl=43 time=243 ms
64 bytes from ir2.fp.vip.bf1.yahoo.com (98.139.183.24): icmp_seq=2 ttl=43 time=341 ms
I think this is due to ifup bringing up the link layer and the IPv4 address together. So my question is: how do we use iproute2 to enable the IPv4 address as well? Side note: interestingly, when iproute2 brings down the link layer, it doesn't disable the IPv4 address: [16:42:50 root@vm network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global eth2
    inet6 fe80::a00:27ff:feb8:13b4/64 scope link
       valid_lft forever preferred_lft forever
[16:42:58 root@vm network-scripts ]# ip link set eth2 down
[16:43:04 root@vm network-scripts ]# ip a show eth2
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 08:00:27:b8:13:b4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.4/24 brd 192.168.1.255 scope global eth2
[16:43:09 root@vm network-scripts ]# ping yahoo.com
ping: unknown host yahoo.com
| ip and ifup serve different purposes, and are complementary.
ip should not be used to replace ifup . Actually, ifup operates at a higher level. ifconfig (traditional, portable) and ip (Linux-only, but much better interface) are two commands that serve the same purpose. They are for setting interface configuration directly. ip does indeed fully replace ifconfig (and route and some of netstat ) because of its much nicer interface and much wider capabilities, except that ifconfig remains for compatibility. Neither ip nor ifconfig contain or manage persistent configuration. They apply the request they get on the command line, and that's it. ifup and ifdown are for bringing interfaces up and down according to the system coniguration. On some systems this configuration is kept in /etc/network/interfaces , on others it's in /etc/sysconfig/something . Their job is to read the whole configuration, including IP addresses, routes, DNS servers, custom scripts, etc..., and apply it to the system. They do this (at least conceptually) by calling ip or ifconfig . You can manually execute all of the ip commands that ifup would have used to bring up an interface, but beware that the ifup / ifdown persistent status information will be out of sync with reality. ifup will continue thinking the interface is down even after you bring it up with ip . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73101/"
]
} |
189,251 | How can I read a dash file from the terminal other than delimiting it with ./ For example, to read a - file we can read it by cat ./-file_name Q: Is there an alternative way to achieve the same thing? | Use double -- to mark the end of options: cat -- -<FILENAME> Other programs such as touch , rm or git checkout also follow this convention: $ touch -- -file
$ ll
total 0
-rw-r--r-- 1 ja ja 0 Mar 10 13:13 -file
$ echo hi! >> -file
$ cat -- -file
hi!
$ rm -- -file
$ echo $?
0
WARNING: It's good practice to always use -- after rm in scripts. An attacker could place a -rf file in a directory and rm * would take it as run parameters. See this: $ touch A
$ touch B
$ mkdir dir
$ touch dir/C
$ touch -- -rf
$ rm *
$ ll
total 0
-rw-r--r-- 1 ja ja 0 Mar 10 13:21 -rf
Oops, this is not what we meant, we didn't want to remove directories. We should have used -- : $ touch A
$ touch B
$ mkdir dir
$ touch dir/C
$ touch -- -rf
$ rm -- *
rm: cannot remove `dir': Is a directory
$ ll
total 4.0K
drwxr-xr-x 2 ja ja 4.0K Mar 10 13:22 dir | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189251",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104071/"
]
} |
189,309 | I'm quite new in Linux. I have had a Windows and two drives: C and D . Windows was on C . Then I installed Ubuntu instead of Windows, and I expected my files on D will stay. But I can't find it. As far as I understand, for this purpose I should mount some sda* , but with sudo lsblk I didn't find any suitable device. I tried to mount all of them with no results. Sorry, I can't provide you the output of sudo lsblk , because it all happened on my friend's laptop, she is far now. So, is there any chance that the files still exist somewhere? | As @Celeo shared, I think your chances are small. What you should have done before installation are two things :- a. Make a backup of the content on the D drive before doing that. b. Make the D partition smaller and then make E which is free, blank and has nothing. Then when you install choose E to install Ubuntu or whichever GNU/Linux distribution you want to do Then you will be able to see your MS-Windows partitions once you have installed the ntfs-3g driver $ aptitude show ntfs-3g
Package: ntfs-3g
State: installed
Automatically installed: yes
Version: 1:2014.2.15AR.3-1
Priority: optional
Section: otherosfs
Maintainer: Laszlo Boszormenyi (GCS) <[email protected]>
Architecture: amd64
Uncompressed Size: 1,542 k
Depends: libc6 (>= 2.17), libgcrypt20 (>= 1.6.1), libgnutls-deb0-28 (>= 3.3.0), libgpg-error0 (>= 1.14)
PreDepends: multiarch-support, fuse
Provides: libntfs-3g853
Description: read/write NTFS driver for FUSE
 NTFS-3G uses FUSE (Filesystem in Userspace) to provide support for the NTFS filesystem used by Microsoft Windows.
Homepage: http://www.tuxera.com/community/ntfs-3g-advanced/
There may be some forensic tools which might help you but as shared by @Celeo it all depends on how you did the installation. At my end, an NTFS partition :- $ mount | grep Data
/dev/sda5 on /media/shirish/Data type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
Actually even fdisk should give you some output, this is from a dual-boot machine :- $ sudo fdisk -l
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: xxxxxxxxx
Device     Boot      Start        End    Sectors   Size Id Type
/dev/sda1              63  102398309  102398247  48.8G  7 HPFS/NTFS/exFAT
/dev/sda2       102398371 1953523711 1851125341 882.7G  f W95 Ext'd (LBA)
/dev/sda5       102398373  204796619  102398247  48.8G  7 HPFS/NTFS/exFAT
/dev/sda6  *    204797952  595421183  390623232 186.3G 83 Linux
/dev/sda7       595423232  790732799  195309568  93.1G 83 Linux
/dev/sda8       790734848 1943076863 1152342016 549.5G 83 Linux
/dev/sda9      1943078912 1953523711   10444800     5G 82 Linux swap / Solaris
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.
Partition 6 does not start on physical sector boundary.
Even this should give you some output; see that /dev/sda1 and /dev/sda5 are both NTFS partitions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189309",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106090/"
]
} |
189,312 | I am trying to install a new package ( glibc-static ), and I get the following error ---> Package nss-softokn-freebl.i686 0:3.14.3-18.el6_6 will be installed
--> Finished Dependency Resolution
Error: Package: glibc-2.12-1.149.el6.i686 (CentOS-OS)
           Requires: glibc-common = 2.12-1.149.el6
           Installed: glibc-common-2.12-1.149.el6_6.5.x86_64 (@updates)
               glibc-common = 2.12-1.149.el6_6.5
           Available: glibc-common-2.12-1.149.el6.x86_64 (CentOS-OS)
               glibc-common = 2.12-1.149.el6
Error: Package: glibc-devel-2.12-1.149.el6.i686 (CentOS-OS)
           Requires: glibc-headers = 2.12-1.149.el6
           Installed: glibc-headers-2.12-1.149.el6_6.5.x86_64 (@updates)
               glibc-headers = 2.12-1.149.el6_6.5
           Available: glibc-headers-2.12-1.149.el6.x86_64 (CentOS-OS)
               glibc-headers = 2.12-1.149.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Notice that glibc-common-2.12-1.149.el6_6.5.x86_64 and glibc-common-2.12-1.149.el6.x86_64 don't match. But glibc-common-2.12-1.149.el6_6.5.x86_64 is listed as being available. I would think that yum would happily install it. Unfortunately, yum didn't install it. So I tried to do it myself. ~ $> sudo yum install glibc-common-2.12-1.149.el6
Loaded plugins: fastestmirror, presto
Setting up Install Process
Loading mirror speeds from cached hostfile
 * epel: mirror.us.leaseweb.net
Package matching glibc-common-2.12-1.149.el6.x86_64 already installed. Checking for update.
Nothing to do
That didn't work. It thinks it is already installed. So I tried to reinstall it. ~ $> sudo yum reinstall glibc-common-2.12-1.149.el6
Loaded plugins: fastestmirror, presto
Setting up Reinstall Process
Loading mirror speeds from cached hostfile
No Match for argument: glibc-common-2.12-1.149.el6
Package(s) glibc-common-2.12-1.149.el6 available, but not installed.
Nothing to do
How can I resolve this? | Messing with the RPM database didn't yield any particularly good results. I ended up noticing that some of the glibc packages were i686 and others were x86_64 . For instance: Package: glibc-2.12-1.149.el6.i686 (CentOS-OS)
           Requires: glibc-common = 2.12-1.149.el6
           Installed: glibc-common-2.12-1.149.el6_6.5.x86_64
I didn't like that, and I expect that Yum didn't like that either. Running yum downgrade glibc glibc-headers glibc-common glibc-devel brought all of the packages to the same architecture ( x86_64 ). Then, yum install glibc-static worked like a charm. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105961/"
]
} |
189,364 | I would like to create a shell script that will check to make sure all files in a directory that appear superficially to be image files (e.g. have typical image file extensions like .jpg, .bmp etc.) are actually image files. We recently had an issue where a hacker was able to generate a file in a directory and mask it as a .jpg file. I would like to create a shell script to check all files in the directory to make sure they are real jpg, gif or png files. | I think you want to be very careful about using file in a circumstance where you give it completely untrusted input. For instance, RHEL 5 file will identify this: GIF87a
<?php
echo "Hello from PHP!\n";
?>
As "GIF image data, version 87a, 15370 x 28735". The PHP interpreter has no trouble executing that input. That lack of trouble is the basis for " local file inclusion " (LFI) problems. Second, file (and even strings ) actually parse input files to tell you what you want to know. These parsers are complicated and have problems . I'm going to suggest the identify command out of the ImageMagick suite. It isn't fooled by my simple example above, and it only parses image files correctly, so it should be less prone to security flaws than file . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78937/"
]
} |
189,412 | When using LUKS full disk encryption, how would you go about protecting against evil maids ? The evil maid attack is when someone gets physical access to your computer while you're away and compromises the unencrypted /boot partition to capture your FDE password the next time you start your computer One of the solutions is to leave your /boot partition on an USB stick that's always with you (the maid can't get to it), but which filesystem should I use on it, and how do I configure my system to gracefully handle removal of the USB stick (and thus the /boot partition itself) ? I'm using CentOS, but generic, distro-agnostic answers are of course welcome. Thanks. | Finally figured it out. This still feels really hacky and dirty because the system is never aware that /boot may not be mounted and you'll have to manually mount it before doing anything that might write to it (think system updates, etc), but other than that it works perfectly. prepare your flash drive with a single partition with the boot flag set on it. You may run shred -n 1 -v /dev/sdX on it to erase it completely, including any previous boot sectors; once that's done run fdisk to create a partition and mkfs your filesystem of choice on it. mount your flash drive somewhere, /mnt/boot or even /newboot will do just fine. move over everything from /boot to the flash drive with mv /boot/* /newboot . edit your /etc/fstab and change the original boot partition's UUID (or create an entry if there isn't any) to match the one of your flash drive. You can get the UUID with lsblk -o name,uuid . Also add the noauto option so that the drive won't be mounted automatically to be able to remove it as soon as the system starts booting (once the kernel is loaded) without risking corrupting the FS on it. unmount the original boot partition and the flash drive ( umount /boot && umount /newboot ) and mount the flash drive; if your fstab entry is correct you can just run mount /boot and it'll automatically mount it based on the UUID specified in the fstab. regenerate your bootloader's configuration to reflect the new partition's UUIDs and "physical" position, for GRUB the flash drive will actually appear as the first hard drive in the computer ( hd0 ). If you're okay with using the default GRUB configuration scripts supplied by most distros, you can run grub-mkconfig -o /path/to/grub.cfg and it'll generate the file according to the currently mounted partitions and/or fstab. Note that for CentOS 7, the correct grub.cfg is actually located in /boot/grub2/grub.cfg . When doing any operation that may access the boot partition, connect your USB stick and run mount /boot . Once done, you may run umount /boot . Note that the latter command can take moments to complete because it's flushing the buffers to the disk (the disk itself is slow so the kernel buffers some write operations to speed things up). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
189,426 | I would like to use variable substitution on a particular string that I access via a command. For example, if I copy something into my clipboard, I can access it like this. $ xclip -o -selection clipboard
Here's a string I just copied.
If I assign it to a variable, then I can do variable substitution on it. $ var=$(xclip -o -selection clipboard)
$ echo $var
Here's a string I just copied.
$ echo ${var/copi/knott}
Here's a string I just knotted.
However, is there a way to do variable substitution without assigning it to a variable? Conceptually, something like this. $ echo ${$(xclip -o -selection clipboard)/copi/knott}
bash: ${$(xclip -o -selection clipboard)/copi/knott}: bad substitution
This syntax fails, because the expansion requires a variable name, not a string. | No, you can't. bash and most other shells (except zsh ) don't allow nested substitution. With zsh , you can do nested substitution : $ echo ${$(echo 123)/123/456}
456 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
189,445 | The command "brew install boost" works for me on my MacOSX, but it installs the latest boost 1.57. How to use brew to install the older 1.55? | You'll want to run brew install boost@1.55 $ brew search boost
==> Formulae
boost ✔          boost-build      boost-python     [email protected]     [email protected]     [email protected] ✔
boost-bcp        boost-mpi        boost-python3    [email protected]     [email protected]
==> Casks
boostnote        focus-booster    iboostup         nosqlbooster-for-mongodb    turbo-boost-switcher | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9164/"
]
} |
189,451 | So in bash, when I do
$ echo \*
*
This seems right, as * is escaped and taken literally. But I can't understand that, when I do
$ echo \\*
\*
I thought the first backslash escaped the second one, thus two backslashes "\\" would give me one literal "\", with * following and carrying its special meaning. I was expecting:
$ echo \\*
\file1 file2 file3
ANSWER SUMMARY: Since \\ gives a literal \ , echo \\* behaves just as echo a* , which will find any file that starts with literal "a". Follow-up question: if I want to print out exactly \file1 file2 file3 what command should I use? E.g. like the following, but I want no space:
$ echo \\ *
\ file1 file2 file3 | If you don't have a file in the current directory whose name starts with a backslash, this is expected behaviour. Bash expands * to match any existing file names, but : If the pattern does not match any existing filenames or pathnames, the pattern string shall be left unchanged. Because there was no filename starting with \ , the pattern was left as-is and echo is given the argument \* . This behaviour is often confusing, and some other shells, such as zsh , do not have it. You can change it in Bash using shopt -s failglob , which will then give an error as zsh does and help you diagnose the problem instead of misbehaving. The * and ? pattern characters can appear anywhere in the word, and characters before and after are matched literally. That is why echo \\* and echo \\ * give such different output: the first matches anything that starts with \ (and fails) and the second outputs a \ , and then all filenames. The most straightforward way of getting the output you want safely is probably to use printf : printf '\\'; printf "%s " *
echo * is unsafe in the case of unusual filenames with - or \ in them in any case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80794/"
]
} |
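Two quick demonstrations of the points in this answer, assuming a directory containing only file1, file2 and file3. First, failglob turns the silent literal \* into a diagnosable error:

shopt -s failglob
echo \\*     # bash: no match: \*

Second, one way to print exactly \file1 file2 file3 with no trailing space is to handle the first name separately:

set -- *
printf '\\%s' "$1"; shift
printf ' %s' "$@"
printf '\n'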
189,454 | I'm trying to send a message through netcat . After sending the message, netcat must terminate. I've tried the following: cat tsmmessage.bin | nc -u localhost 4300nc -u localhost 4300 < message.bin The -q option states: -q seconds after EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever. But nc -q0 -u localhost 4300 < message.bin also doesn't work. What am I missing? | Assuming that after sending EOF the connection will stay idle, you can use the -w timeout option, which, unlike the -q option, accepts a timeout of zero: cat tsmmessage.bin | nc -u localhost 4300 -w0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39137/"
]
} |
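To verify the -w0 behaviour end to end on one machine (port number from the question; some netcat builds want -p before a listening port):

nc -lu 4300                             # terminal 1: UDP listener
nc -u localhost 4300 -w0 < message.bin  # terminal 2: sends, then exits at once

Note that -w0 is fine here only because nothing arrives after EOF; if the peer might still send a reply you need, use a small positive timeout such as -w1 instead.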
189,466 | I have two documents that are the result of SQL queries. For example, my Doc1.lst is: John1 0 Julien 10 Jules3 0 Julie 30 On this Doc1.lst, I used sed to keep only the name part, so Doc2.lst is: John1 Julien Jules3 Julie I don't know sed well enough to catch only the numbers, so I would like to know if there is a way to create a Doc3.lst that takes Doc1.lst and removes from it the content of Doc2.lst, a sort of reversed catenate. By the way, if you have the sed command to catch only the numbers, that would be great. | To get the output you want, you could use awk , cut or sed ; the first is preferable. If your file Doc1.lst is as follows John1 0Julien 10Jules3 0Julie 30 the following awk command will get the output you want, assuming the field separator is a space: awk '{print $1}' Doc1.lst Using cut : cut -d' ' -f1 Doc1.lst Or using sed . Note that sed is a stream editor and not the natural tool for this task, but here is the line anyway: sed 's/\([a-zA-Z]*\).*/\1/' Doc1.lst | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103428/"
]
} |
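The question also asked for the numbers. Since the names themselves end in digits (John1, Jules3), matching digit characters is fragile; selecting the second whitespace-separated field is safer:

awk '{print $2}' Doc1.lst > Doc3.lst    # 0, 10, 0, 30

That also answers the Doc3 request directly: Doc3.lst is Doc1.lst with the Doc2.lst content (the names) stripped out.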
189,545 | Do not confuse the question here with how to list processes graphically; that is not what I am asking. In the terminal, how can I see which processes have a GUI? Things like firefox, vlc, geany, nautilus etc. all have a GUI. I would like more information about which processes are using window manager resources, and I would like to do that from the terminal. How do I get more information about THOSE types of processes? I've been trying to use the ps command, but I would entertain any terminal command to help me solve this. UPDATE: I see something I like in pstree , which shows the tree from which all the graphical processes I am interested in are spawned: $pstreeinit─┬─ │ ├─lightdm─┬─Xorg │ ├─lightdm─┬─init─┬─ │ │ │ ├─firefox───55*[{firefox}] │ │ │ ├─geany─┬─bash │ │ │ │ │ │ │ ├─gnome-terminal─┬─bash───pstree | Try xrestop or xrestop -b . It is intended to measure how many X resources each X window consumes, but as a small bonus it identifies the names of the windows and the corresponding PIDs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
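For one-shot, scriptable use, xrestop's batch output can be captured and filtered. A sketch, assuming your build supports -b (batch mode) and -m (number of samples); the exact field labels vary between versions:

xrestop -b -m 1 | grep -iE 'pid|identifier'

Every client it reports is a process holding X server resources, which is exactly the "has a GUI" set the question is after.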
189,550 | I have a java file called Friends.java . There are 5 variables and each variable has its own getter. Assume that all the functions below follow the standard Java convention, i.e. public methodName(){//Some code here;} Here is the file Friends.java //Variablesint numberOfFriends;String firstName;String lastName;String address;boolean isWorker;//MethodsFriends();int getNumberOfFriends(){//Some Code here;}public String getFirstName(){//Some Code here;}public String getLastName(){//Some Code here;}public String getAddress(){//Some Code here;}private String getIsWorker(){//Some Code here;}public String toString(){ return getNumberOfFriends + " " + getFirstName() + " " + getLastName() + " " + getAddress() + " " + getIsWorker() + " ";} I want to grep through this file to get a list of all the getters. This is what I have so far: grep "get*{$" Friends.java But I do not get any result. | The pattern fails because * in a regular expression is not a shell wildcard: it means "zero or more of the preceding item", so get* matches ge , get , gett and so on, and your pattern then requires a { immediately afterwards, which never occurs in the file. Use . (any character) together with * to cover the rest of the method signature: grep 'get.*{$' Friends.java This prints every line that contains get and ends with an opening brace, i.e. the getter definitions. To also require the parentheses, tighten it to: grep 'get.*(){$' Friends.java | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68738/"
]
} |
189,560 | In a document created by a former coworker there is this command: cat /dev/null > /var/spool/mail/root It says next to it that it will clean out the mailbox. Can someone please explain how/why this command does that? I need to know what will happen before I run the command. We are trying to clean up space on /var; as of right now it's at 91%. | The command will output the data from the device /dev/null to the given file (the mailbox of the root account). Since /dev/null responds just with end-of-file when read, nothing will be written to the file, but the > redirection will already have truncated (cleared) the file by then. Actually this is equivalent to writing just > /var/spool/mail/root (i.e., the same without cat or /dev/null ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106228/"
]
} |
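Equivalent ways to empty the file without cat (the redirection alone performs the truncation):

> /var/spool/mail/root
truncate -s 0 /var/spool/mail/root

Before clearing anything, it is worth confirming the mailbox really is what fills /var; GNU sort's -h understands du's human-readable sizes:

du -xh /var | sort -h | tail -20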
189,591 | I'm trying to set up ssh-host in Cygwin and am getting the below error: *** Warning: The permissions on the directory /var are not correct.*** Warning: They must match the regexp d..x..x..[xt]*** ERROR: Problem with /var directory. Exiting. As of now, the /var directory has the below permissions. $ ls -ld /vardrws--Srwx+ 1 Prashant Prashant 0 Mar 11 22:29 /var How do I set d..x..x..[xt] permissions for /var ? | In Cygwin, it's not possible to change group permissions unless the group is Users or Root . Refer to 'chmod' cannot change group permission on Cygwin . You won't be able to change the group permission until you change var's group owner to Users , so the best solution is: chown :Users /varchmod 757 /varchmod ug-s /varchmod +t /var The last step, setting the sticky bit, is not really necessary though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106015/"
]
} |
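After those commands, a re-check should show permissions that satisfy the regexp. Roughly (a sketch; the owner and date will differ on your machine):

$ ls -ld /var
drwxr-xrwt 1 Prashant Users 0 Mar 11 22:29 /var

The d..x..x..[xt] pattern only inspects the three execute slots: owner x, group x, and other x (or t once the sticky bit is set).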
189,620 | I have a semi-working rsync command that I have combined with the find command on a linux box to do a simple file transfer of a specific type and date. Here's the command: rsync -avx --timeout=30 --ignore-existing admin@host:'`find /results/analysis/ -mtime -1 -type f -iname "*.xq"`' /home/serverdir/ Here's the problem: The above command works perfectly if any files are found. It breaks when no files are found and actually sends a file that is located in the home directory of the remote machine for some reason, which by the way is much older than 1 day as indicated by -mtime. The error message: receiving incremental file list./rsync: send_files failed to open "/home/admin/.viminfo": Permission denied (13)file.txt It's as if when the find command reports nothing, rsync just defaults to sending everything in the home directory. Any ideas of how to fix this? | Your diagnosis is right. When find matches nothing, the backquoted command substitution expands to an empty string, so the remote argument collapses to admin@host: , and for rsync (as for scp) a host: with an empty path means the remote user's home directory; rsync then starts transferring /home/admin , which is where the .viminfo error comes from. The fix is to run the find first and only invoke rsync when it actually returned something: files=$(ssh admin@host 'find /results/analysis/ -mtime -1 -type f -iname "*.xq"') followed by [ -n "$files" ] && rsync -avx --timeout=30 --ignore-existing admin@host:"$files" /home/serverdir/ This keeps the original behaviour of passing the whole list in one remote argument; if the file names may contain spaces, look at rsync's --files-from option instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189620",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100423/"
]
} |
189,647 | I was wondering if it is possible to pull log messages for a particular log with systemd's journal logging. For example, when I open a log in C, openlog('slog', LOG_CONS | LOG_PID, LOG_LOCAL1) , to pull just messages logged under ' slog ' or LOCAL1 ? When I do something like journalctl -u slog or journalctl -u LOG_LOCAL1 , it just tells me when the log begins and ends, not the actual log messages. | Yes, it is possible, but you passed the wrong switch to journalctl . According to journalctl(1) man page: To read messages with a given syslog identifier (say, "foo"), issue journalctl -t foo or journalctl SYSLOG_IDENTIFIER=foo ; To read messages with a given syslog facility, issue journalctl SYSLOG_FACILITY=1 (note that facilities are stored and matched using their numeric values). More generally, the syslog identifier and facility are stored in the journal as separate fields ( SYSLOG_IDENTIFIER and SYSLOG_FACILITY ). If you ever need to access the journal from, say, the C API, you will have to add matches on these fields directly. The journalctl -u switch is used to add a match on the name of a systemd unit which owned the process which has generated the message. So this is the wrong switch to use. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105689/"
]
} |
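Concretely, for the openlog('slog', LOG_CONS | LOG_PID, LOG_LOCAL1) call in the question:

journalctl -t slog                      # by syslog identifier
journalctl SYSLOG_FACILITY=17           # LOG_LOCAL1; local0..local7 are facilities 16..23
journalctl -t slog SYSLOG_FACILITY=17   # matches on different fields are ANDed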
189,651 | I want to replace a string found in a file with another string, but both have a special character (in this example it is a . char), e.g. 1.0 and 2.0 . So this is the command currently used: sed -i 's/1\.0/2\.0/g' /home/user1/file1.txt What if 1.0 and 2.0 were saved in variables $i and $j ? where i has the value 1.0 and j has the value 2.0 , how can I still replace i with j ? | Use double quotes instead of single quotes so the shell expands the variables: sed -i "s/$i/$j/g" /home/user1/file1.txt That already replaces 1.0 with 2.0 . The remaining subtlety is that on the pattern side . is a regex metacharacter matching any single character, so 1.0 would also match 190 . If that matters, escape the dots first; in bash you can do it with parameter expansion: sed -i "s/${i//./\\.}/$j/g" /home/user1/file1.txt ( . is harmless on the replacement side, but & , \ and the / delimiter would need escaping there.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85865/"
]
} |
189,675 | Using the soft buttons is annoying over time. I mean the real brightness of the backlight (not X11 gamma). Which protocols are capable of this? (DVI, HDMI, DP; I guess VGA is not) | Actually, all of these interfaces are capable of backlight control (and more), as long as both the graphics card and the monitor support the Display Data Channel . DDC is based on I²C, so you have to install and load appropriate kernel modules to make it work. # Debiansudo apt-get install i2c-toolssudo modprobe i2c-dev# RHELsudo dnf install i2c-tools After that, you have to find out which I²C bus is connected to the monitor using sudo i2cdetect -l . # Example output for Intel graphics cardi2c-0 i2c i915 gmbus dpc I2C adapteri2c-1 i2c i915 gmbus dpb I2C adapteri2c-2 i2c i915 gmbus dpd I2C adapteri2c-3 i2c DPDDC-B I2C adapteri2c-4 i2c DPDDC-C I2C adapter# Example output for AMD graphics cardi2c-0 i2c Radeon i2c bit bus 0x90 I2C adapteri2c-1 i2c Radeon i2c bit bus 0x91 I2C adapteri2c-2 i2c Radeon i2c bit bus 0x92 I2C adapteri2c-3 i2c Radeon i2c bit bus 0x93 I2C adapteri2c-4 i2c Radeon i2c bit bus 0x94 I2C adapteri2c-5 i2c Radeon i2c bit bus 0x95 I2C adapteri2c-6 i2c card0-eDP-1 I2C adapteri2c-7 i2c card0-VGA-1 I2C adapter In the Intel case, the right bus is one of the DPDDCs ( Display Port DDC ), depending on which port you are using. In my case, both HDMI and DP are displayed as DP. In the AMD case, the bus is called card0- interface - n . If there are no interfaces listed, then your card/driver doesn't support DDC in the standard way. Now we have to probe whether the monitor supports DDC and whether it allows setting brightness this way. First, install ddccontrol : # Debiansudo apt-get install ddccontrol# RHELsudo dnf install ddccontrol Then, get the list of supported DDC parameters using it. This example assumes your DDC interface is bound to the i2c-3 bus. # sudo ddccontrol dev:/dev/i2c-3 ddccontrol version 0.4.2Copyright 2004-2005 Oleg I. Vdovikin ([email protected])Copyright 2004-2006 Nicolas Boichat ([email protected])This program comes with ABSOLUTELY NO WARRANTY.You may redistribute copies of this program under the terms of the GNU General Public License.Reading EDID and initializing DDC/CI at bus dev:/dev/i2c-3...I/O warning : failed to load external entity "/usr/share/ddccontrol-db/monitor/DELA0A2.xml"Document not parsed successfully.I/O warning : failed to load external entity "/usr/share/ddccontrol-db/monitor/DELlcd.xml"Document not parsed successfully.EDID readings: Plug and Play ID: DELA0A2 [VESA standard monitor] Input type: Digital= VESA standard monitor> Color settings > Brightness and Contrast > id=brightness, name=Brightness, address=0x10, delay=-1ms, type=0 supported, value=45, maximum=100 > id=contrast, name=Contrast, address=0x12, delay=-1ms, type=0 supported, value=75, maximum=100--- [snip] --- That's it; if everything goes right, the brightness value should report exactly the same brightness as set on the monitor. You can now set 50% brightness using this command (replace 0x10 with the address of the brightness value found above): sudo ddccontrol dev:/dev/i2c-3 -r 0x10 -w 50 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27330/"
]
} |
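A small wrapper makes repeated brightness changes friendlier. This sketch hard-codes the 0x10 brightness address and the i2c-3 bus from the example output above; adjust both to whatever i2cdetect and ddccontrol report on your machine:

setbl() { sudo ddccontrol dev:/dev/i2c-"${2:-3}" -r 0x10 -w "$1"; }
setbl 70      # 70% brightness on i2c-3
setbl 30 4    # 30% brightness on i2c-4

On newer systems, ddcutil probes the buses itself and is often simpler: ddcutil setvcp 10 50 sets VCP feature 0x10 (brightness) to 50.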
189,684 | I've got a [csv] file with duplicate data, i.e. the same lines printed twice. I've tried using sort's uniq by sort myfile.csv | uniq -u however there is no change in the myfile.csv , also I've tried sudo sort myfile.csv | uniq -u but no difference. So currently my csv file looks like this aaabbccccc I would like it to look like this: abc | The reason myfile.csv is not changing is that the -u option for uniq will only print unique lines. In this file, all lines are duplicates so they will not be printed out. However, more importantly, the output will not be saved in myfile.csv because uniq will just print it out to stdout (by default, your console). You would need to do something like this: $ sort -u myfile.csv -o myfile.csv The options mean: -u - keep only unique lines -o - output to this file instead of stdout You should view man sort for more information. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/189684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
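Note that sort -u also reorders the file. If the original line order must be preserved, the standard awk idiom keeps only the first occurrence of each line:

awk '!seen[$0]++' myfile.csv > myfile.tmp && mv myfile.tmp myfile.csv

(awk cannot overwrite its input in place, hence the temporary file.)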
189,687 | All, are there any tools like Norton Ghost for Linux to help back up and restore SUSE when necessary? Thanks. | The usual free equivalent of Ghost is Clonezilla : boot it from a USB stick or CD and it can image and restore whole disks or individual partitions, and for common Linux filesystems it copies only the used blocks, so images stay reasonably small. For a quick raw image you can also use dd from a rescue environment: dd if=/dev/sda of=/mnt/backup/sda.img bs=4M (restore by swapping if= and of= ). If you only need point-in-time rollback rather than bare-metal restore, note that SUSE also ships snapper for btrfs snapshots. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/189687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53618/"
]
} |
189,688 | I use df -kh to get the file system usage, but the trouble is if there is any NFS mount failure in any of the boxes the command will hang. Is there a better way to monitor file systems other than df , or is it possible to specify a timeout for the df command? | I found the answer here. The trick is to use the timeout command, so the ideal way is timeout 2 df -kh , where 2 is the timeout in seconds. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13605/"
]
} |
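A monitoring script can act on the distinction, since timeout exits with status 124 when it had to kill the command. A sketch:

out=$(timeout 2 df -kh)
status=$?
if [ "$status" -eq 124 ]; then
    echo 'df hung - probable dead NFS mount' >&2
elif [ "$status" -eq 0 ]; then
    printf '%s\n' "$out"
fi

Passing specific mount points (e.g. timeout 2 df -kh /data) narrows the check to the filesystems you actually care about.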
189,722 | My flash is partitioned as /dev/mtd0 /dev/mtd1 etc., and I want to scp one of the partitions over to my PC so I can analyze it with a hex editor, but every time I try to copy it with scp I get "not a regular file". How can I scp the contents of a flash partition? I think I did it once with WinSCP on a Windows machine, but it only worked for small partitions of < 10 MB and anything bigger would disconnect from the device. | A combination of dd and ssh can probably help here: # dd if=/dev/mtd0 | ssh me@myhost "dd of=mtd0.img" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106331/"
]
} |
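A few variants of the same pipe, still using the answer's me@myhost placeholder: add a block size and progress reporting, or compress in flight on slow links:

dd if=/dev/mtd0 bs=64k status=progress | ssh me@myhost 'dd of=mtd0.img'
dd if=/dev/mtd0 bs=64k | gzip -c | ssh me@myhost 'gunzip -c > mtd0.img'

status=progress requires a reasonably recent GNU dd; an embedded device's BusyBox dd probably lacks it, so drop that operand there. You can also pull from the PC side instead: ssh root@device 'cat /dev/mtd0' > mtd0.img.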
189,749 | I am trying to understand how exactly Bash treats the following line: $(< "$FILE") According to the Bash man page, this is equivalent to: $(cat "$FILE") and I can follow the line of reasoning for this second line. Bash performs variable expansion on $FILE , enters command substitution, passes the value of $FILE to cat , cat outputs the contents of $FILE to standard output, command substitution finishes by replacing the entire line with the standard output resulting from the command inside, and Bash attempts to execute it like a simple command. However, for the first line I mentioned above, I understand it as: Bash performs variable substitution on $FILE , Bash opens $FILE for reading on standard input, somehow standard input is copied to standard output , command substitution finishes, and Bash attempts to execute the resulting standard output. Can someone please explain to me how the contents of $FILE goes from stdin to stdout? | $(<file) (also works with `<file` as well as ${<file;} in ksh93) is a special operator of the Korn shell copied by zsh and bash . It does look a lot like command substitution but it's not really. In POSIX shells, a simple command is: < file var1=value1 > file2 cmd 2> file3 args 3> file4 All parts are optional, you can have redirections only, command only, assignment only or combinations. If there are redirections but no command, the redirections are performed (so a > file would open and truncate file ), but then nothing happens. So < file Opens file for reading, but then nothing happens as there's no command. So the file is then closed and that's it. If $(< file) was a simple command substitution , then it would expand to nothing. In the POSIX specification , in $(script) , if script consists only of redirections, that produces unspecified results . That's to allow that special behaviour of the Korn shell. In ksh (here tested with ksh93u+ ), if the script consists of one and only one simple command (though comments are allowed before and after) that consists only of redirections (no command, no assignment) and if the first redirection is a stdin (fd 0) input only ( < , << or <<< ) redirection, so: $(< file) $(0< file) $(<&3) (also $(0>&3) actually as that's in effect the same operator) $(< file > foo 2> $(whatever)) but not: $(> foo < file) nor $(0<> file) nor $(< file; sleep 1) nor $(< file; < file2) then all but the first redirection are ignored (they are parsed away) and it expands to the content of the file/heredoc/herestring (or whatever can be read from the file descriptor if using things like <&3 ) minus the trailing newline characters. as if using $(cat < file) except that the reading is done internally by the shell and not by cat no pipe nor extra process is involved as a consequence of the above, since the code inside is not run in a subshell, any modification remain thereafter (as in $(<${file=foo.txt}) or $(<file$((++n))) ) read errors (though not errors while opening files or duplicating file descriptors) are silently ignored. In zsh , it's the same except that that special behaviour is only triggered when there's only one file input redirection ( <file or 0< file , no <&3 , <<<here , < a < b ...) However, except when emulating other shells, in < file that is when there's only one input redirection without commands, outside of command substitution, zsh runs the $READNULLCMD (a pager by default), and when there are more redirections or redirections other than < file ( <&3 , <<<text , <a <b , >file , <a >b ...) 
the $NULLCMD ( cat by default), so even if $(<&3) is not recognized as that special operator, it will still work like in ksh by invoking cat to do it. However while ksh 's $(< a < b) would expand to the contents of a , in zsh , it expands to the content of a and b (or just b if the multios option is disabled), $(< a > b) would copy a to b and expand to nothing, etc. bash has a similar operator but with a few differences: comments are allowed before but not after: echo "$( # getting the content of file < file)" works but: echo "$(< file # getting the content of file)" expands to nothing. like in zsh , only one file stdin redirection, though there's no fall back to a $READNULLCMD , so $(<&3) , $(< a < b) do perform the redirections but expand to nothing. for some reason, while bash does not invoke cat , it still forks a process that feeds the content of the file through a pipe making it much less of an optimisation than in other shells. It's in effect like a $(cat < file) where cat would be a builtin cat . as a consequence of the above, any change made within are lost afterwards (in the $(<${file=foo.txt}) , mentioned above for instance, that $file assignment is lost afterwards). In bash , IFS= read -rd '' var < file (also works in zsh ) is a more effective way to read the content of a text file into a variable. It also has the benefit of preserving the trailing newline characters. See also $mapfile[file] in zsh (in the zsh/mapfile module and only for regular files) which also works with binary files. Note that the pdksh-based variants of ksh have a few variations compared to ksh93. Of interest, in mksh (one of those pdksh-derived shells), in var=$(<<'EOF'That's multi-linetest with *all* sorts of "special"charactersEOF) is optimised in that the content of the here document (without the trailing newline characters) is expanded without a temporary file or pipe being used as is otherwise the case for here documents, which makes it an effective multi-line quoting syntax. To be portable to all versions of ksh , zsh and bash , best is to limit to only $(<file) avoiding comments and bearing in mind that modifications to variables made within may or may not be preserved. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106343/"
]
} |
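A short demonstration of the trailing-newline difference discussed above, runnable in bash or zsh:

printf 'a\nb\n\n\n' > f
v1=$(<f)                  # command substitution strips all trailing newlines
IFS= read -rd '' v2 < f   # reads the whole file, trailing newlines preserved
printf '%s' "$v1" | od -c    # ends right after the b
printf '%s' "$v2" | od -c    # ends with \n \n \n

(read returns non-zero here because it reaches end-of-file before finding a NUL delimiter, but v2 is still filled.)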