Columns: source_id (int64), question (string), response (string), metadata (dict)
242,423
I am connected through VNC to a CentOS 6.4 machine at my workplace. Every five minutes a box pops up that says: "Authentication is required to set the network proxy used for downloading packages. An application is attempting to perform an action that requires privileges. Authentication as the super user is required to perform this action. Password for root:" Details: Role: unknown, Action: org.freedesktop.packagekit.system-network-proxy-configure, Vendor: The PackageKit Project. [Cancel] [Authenticate] I don't have the root password, so usually I just click it and make it go away, but it tends to come back a few minutes later. My local sysadmin has tried to deal with the problem a few times, given up, and told me just to keep closing the popup box. That said, it's driving me nuts. Is there some way I can make it so I don't have to see the popup, even if the problem itself isn't fixed? Less preferably, is there some very easy thing I can tell the sysadmin to do to actually fix the problem?
I hope you're not one of my users, haha! I manage a cluster and this particular warning has been bugging me for a while. I've been trying to figure out a way to fix this programmatically on the command line with little success. This error comes from something bundled in gnome-packagekit. I have come across three solutions to this problem: 1) Disable the yum plugin under /yum/pluginconf.d by setting [main] enabled=0; this has not worked for me. 2) Today I found a different answer on the Red Hat solutions page and I believe that this one works: just add X-GNOME-Autostart-enabled=false to the end of the /etc/xdg/autostart/gpk-update-icon.desktop file. I restarted VNC after this and the popup has not returned. Unfortunately both solutions so far require root on the box. 3) I do not believe that the following procedure requires root, but I never tried it since it's done via the GUI: launch a terminal, type gnome-session-properties, and then uncheck the PackageKit Update Applet. Sources: http://linuxtoolkit.blogspot.com/2013/11/fixing-authentication-is-requried-to.html https://access.redhat.com/solutions/195833
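If you have root, the second fix above can also be applied from the command line; a minimal sketch, assuming the file path named in the answer. For users without root, copying the desktop file into the per-user autostart directory is a hypothetical alternative that relies on the standard XDG autostart override mechanism:
# System-wide (needs root): append the override key to the autostart entry
echo "X-GNOME-Autostart-enabled=false" | sudo tee -a /etc/xdg/autostart/gpk-update-icon.desktop
# Per-user (no root): a local copy of the entry overrides the system one
mkdir -p ~/.config/autostart
cp /etc/xdg/autostart/gpk-update-icon.desktop ~/.config/autostart/
echo "X-GNOME-Autostart-enabled=false" >> ~/.config/autostart/gpk-update-icon.desktop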
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98821/" ] }
242,496
I want to make a bash script that deletes the oldest file from a folder. Every time the script runs, only one file should be deleted: the oldest one. Can you help me with this? Thanks.
As Kos pointed out, it might not be possible to know the oldest file (as per creation date). If modification time is good enough for you, and if file names have no newlines: rm "$(ls -t | tail -1)"
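If parsing ls is a concern, a sketch using GNU find and sort can pick the oldest file by modification time without word-splitting issues (assumes GNU find's -printf and names without newlines):
# Oldest regular file in the current directory, by mtime
oldest=$(find . -maxdepth 1 -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
[ -n "$oldest" ] && rm -- "$oldest"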
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/242496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108074/" ] }
242,503
For example: RXOTG-1388 holds 3 objects: RM4FD1, RM4FD2, RM4FD3; RXOTG-1398 holds 3 objects: VT08D1, VT08D2, VT08D3; and so on. Based on this text file I would like to count, using awk, how many objects each RXOTG holds.
RXOTG-1388
 RM4FD1 0
 RM4FD2 0
 RM4FD3 0
END
RXOTG-1398
 VT08D1 0
 VT08D2 0
 VT08D3 0
END
RXOTG-1400
 VT08S1 0
 VT08S2 0
 VT08S3 0
END
As Kos pointed out, it might not be possible to know the oldest file (as per creation date). If modification time is good enough for you, and if file names have no newlines: rm "$(ls -t | tail -1)"
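For the block format shown in the question, a minimal awk sketch could count the objects per group; it assumes each group starts with a line beginning with RXOTG and ends with a line containing END:
# Print each RXOTG id followed by the number of non-empty lines in its block
awk '/^RXOTG/ {id=$1; n=0; next} /^END/ {print id, n; next} NF {n++}' file.txt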
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/242503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142803/" ] }
242,539
If you are running apt-get commands in a terminal and want to install something from the software center, the center says it waits until apt-get finishes. I wanted to know if it is possible to do the same on the terminal, i.e., make apt-get in the terminal wait until the lock is released. I found this link, which uses aptdcon to install packages. I would like to know: Is it really not possible to do this with apt-get? Is aptdcon compatible with apt-get, i.e., can I use both to install packages without borking the system?
apt 1.9.11: This was solved in Debian bug #754103 in this commit. The fix is in versions of apt newer than 1.9.11: "apt(8): Wait for lock (Closes: #754103)". You can enable this option by setting -o DPkg::Lock::Timeout=60 as an argument to apt or apt-get, where 60 is the time to wait in seconds for the lock.
apt -o DPkg::Lock::Timeout=60 install FOO
apt-get -o DPkg::Lock::Timeout=60 install FOO
You can test this by running two identical commands and simply not answering the first one's "Do you want to continue? [Y/n]" prompt immediately. The second command will then tell you: Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 946299 (apt)
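To avoid typing the option on every invocation, the same timeout can be set persistently in an apt configuration snippet; the file name below is only an assumption, any file under /etc/apt/apt.conf.d/ is read:
# /etc/apt/apt.conf.d/99lock-timeout
DPkg::Lock::Timeout "60";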
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112554/" ] }
242,546
I would like to bind/unbind my USB device (a wireless adapter). echo -n "1-1:1.0" > /sys/bus/usb/drivers/ub/unbind So to be able to do that, I need the bus ID. lsusb prints out the following:
Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 004: ID 148f:2573 Ralink Technology, Corp. RT2501/RT2573 Wireless Adapter
And lsusb -t:
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=dwc_otg/1p, 480M
    |__ Port 1: Dev 2, If 0, Class=hub, Driver=hub/3p, 480M
        |__ Port 1: Dev 3, If 0, Class=vend., Driver=smsc95xx, 480M
        |__ Port 2: Dev 4, If 0, Class=vend., Driver=rt73usb, 480
So where can I find this bus ID? Thanks! Update: here is the detailed info about the wireless device (lsusb -v | grep -E '\<(Bus|iProduct|bDeviceClass|bDeviceProtocol)' 2>/dev/null):
Bus 001 Device 004: ID 148f:2573 Ralink Technology, Corp. RT2501/RT2573 Wireless Adapter
bDeviceClass 0 (Defined at Interface level)
bDeviceProtocol 0
iProduct 2
You can read off the sequence from the device tree you get with lsusb -t . The number before the hyphen is the bus, the numbers after the hyphen are the port sequence. Your device is on bus 01 , on port 1 of the root hub for this bus is another hub, and on port 3 of this hub is your device: So you get 1-1.3 . If you know the vendor id from lsusb (like 148f for Ralink), you can also grep for it with grep 148f /sys/bus/usb/devices/*/idVendor and you'll get something like /sys/bus/usb/devices/1-1.3/idVendor:148f as answer. If there are several devices from the same vendor, you can narrow it down with idProduct .
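A small sketch that lists every device path in sysfs together with its vendor:product pair makes the mapping from lsusb IDs to bus-port names explicit:
# Print e.g. "1-1.3 148f:2573" for each USB device known to the kernel
for d in /sys/bus/usb/devices/*/; do
    [ -f "$d/idVendor" ] || continue
    printf '%s %s:%s\n' "$(basename "$d")" "$(cat "$d/idVendor")" "$(cat "$d/idProduct")"
done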
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40434/" ] }
242,551
I use Bash as my interactive shell and I was wondering if there was an easy way to get Bash to run a system command instead of a shell builtin command in the case where they both share the same name. For example, use the system kill (from util-linux) to print the process id (pid) of the named process(es) instead of sending a signal:
$ /bin/kill -p httpd
2617
...
Without specifying the full path of the system command, the Bash builtin is used instead of the system command. The kill builtin doesn't have the -p option, so the command fails:
$ kill -p httpd
bash: kill: p: invalid signal specification
I tried the answers listed in Make bash use external `time` command rather than shell built-in but most of them only work because time is actually a shell keyword, not a shell builtin. Other than temporarily disabling the Bash builtin with enable -n kill, the best solution I've seen so far is to use: $(which kill) -p httpd Are there other easier (involving less typing) ways to execute an external command instead of a shell builtin? Note that kill is just an example and I'd like a generalised solution, similar to the way that prefixing with the command builtin prevents functions which have the same name as an external command from being run. In most cases, I usually prefer to use the builtin version as it saves forking a new process, and sometimes the builtin has features that the external command doesn't.
Assuming env is in your path: env kill -p http env runs the executable file named by its first argument in a (possibly) modified environment; as such, it does not know about or work with shell built-in commands. This produces some shell job control cruft, but doesn't rely on an external command: exec kill -p bash & exec requires an executable to replace the current shell, so doesn't use any built-ins. The job is run in the background so that you replace the forked background shell, not your current shell.
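Another option in bash is to disable the builtin only inside a subshell, so the interactive shell is left untouched; a short sketch:
# The builtin is disabled in the subshell only, so the external kill runs
(enable -n kill; kill -p httpd)
# Show every form of kill that bash knows about, in lookup order
type -a kill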
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/242551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22812/" ] }
242,578
Sometimes when I remove the SD card, I notice that when I reinsert it, it doesn't mount by itself. Also I've noticed that the umount process is stalled and I cannot kill it (not even with -9). I use Ubuntu 14.04. Any idea what I'm doing wrong?
First things first: get out of that directory :) Joking... but not so much, thinking of how many times I ran umount and it gave me back an error because I was indeed inside the mounted folder. Try running lsof | grep <your_sd_card_directory_here> to check if that directory is in use. Example output of lsof | grep /mnt/share, while /mnt/share is mounted:
COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME
lsof 11930 root cwd DIR 253,2 15 213678 /mnt/share
This shows that lsof is being run exactly from /mnt/share, with FD (file descriptor) cwd, the Current Working Directory. If you see the same... get out of that directory ;)
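fuser can do a similar check in one step; a sketch for the same example mount point:
# List processes (with user and access type) keeping /mnt/share busy
fuser -vm /mnt/share
# More drastic: kill everything that uses it (use with care)
# fuser -km /mnt/share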
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116320/" ] }
242,666
The man page for systemctl says: Unit Commands: list-units [PATTERN...] List known units (subject to limitations specified with -t). If one or more PATTERNs are specified, only units matching one of them are shown. This is the default command. My question is: what does it mean by [PATTERN]? When I execute systemctl list-units I get a relatively long list of the loaded units. But if I add a third argument I get the error message "Too many arguments". So I am curious as to what parameters are valid for the [PATTERN] argument listed in the man page. (I'm running Arch Linux and have version 227 of systemd)
From the same page: Parameter Syntax: Unit commands listed above take either a single unit name (designated as NAME), or multiple unit specifications (designated as PATTERN...). In the first case, [...] In the second case, shell-style globs will be matched against currently loaded units; literal unit names, with or without a suffix, will be treated as in the first case. This means that literal unit names always refer to exactly one unit, but globs may match zero units and this is not considered an error. Glob patterns use fnmatch(3), so normal shell-style globbing rules are used, and "*", "?", "[]" may be used. See glob(7) for more details. The patterns are matched against the names of currently loaded units, and patterns which do not match anything are silently skipped. For example: # systemctl stop sshd@*.service will stop all sshd@.service instances.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18797/" ] }
242,722
I have a command that gives me a list of files, one on each line. Filenames are "normal" - no spaces, no need to escape parentheses etc. Now I want to pipe that command to something like test -f and return true if and only if all of the files exist. (Behaviour with 0 lines can be undefined, I don't really care.) So, something like make_list_of_files | test -f but actually working. "Bashisms" are allowed, since I need it in Bash. The files are not in the same directory, but they are in subdirectories of the current directory, and the paths have directory names in them, so for example
dir/file1
dir/file2
dir2/file3
allExist() {
  while IFS= read -r f; do
    test -e "$f" || return 1
  done
}
make_list_of_files | allExist
This should work in all POSIX shells.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10393/" ] }
242,744
This is a bit tricky; I'm trying to work out the best approach to this problem. I have a couple of approaches, but they seem really hacky and I'd like something a little more elegant. I want to parse a whitespace delimited file, ignoring #comment lines and complaining of any non-empty lines that don't have exactly 4 fields. This is easy enough in awk : awk '/^#/ {next}; NF == 0 {next}; NF != 4 {exit 1}; (dostuff)' The trick is what I want to do with the data, is actually set it as variables in bash and then run a bash function, unless $2 contains a specific value. Here is some pseudocode (mostly real but mixed languages) to explain what I mean:
# awk
/^#/ {next}
NF == 0 {next}
NF != 4 {exit 1}
$2 == "manual" {next}
# bash
NAME=$1
METHOD=$2
URL=$3
TAG=$4
complicated_bash_function_that_calls_lots_of_external_commands
# then magically parse the next line with awk.
I don't know how to do this without some ugly workarounds, such as calling awk or sed separately for each line of the file. (Originally I put the question as "How to call bash function from within awk or each output line of awk from within bash?") Possibly it would work to modify the bash function into its own script, and make it accept arguments 1, 2, 3, 4 as above. I'm not sure how to call that from within awk, though; hence my question title. What I would actually prefer to do, is have the whole thing in one file and make it a bash script - calling awk from within bash rather than bash from within awk . But I will still need to call the bash function from within awk--once for each non-comment line of the input file. How can I do this?
You may be able to do what you want by piping awk 's output into a while read loop. For example:
awk '/^#/ {next}; NF == 0 {next}; NF != 4 {exit 1} ; {print}' | while read -r NAME METHOD URL TAG ; do
    : # do stuff with $NAME, $METHOD, $URL, $TAG
    echo "$NAME:$METHOD:$URL:$TAG"
done
if [ "$PIPESTATUS" -eq 1 ] ; then
    : # do something to handle awk's exit code
fi
Tested with:
$ cat input.txt
# comment
NAME METHOD URL TAG
a b c d
1 2 3 4
x y z
a b c d
$ ./testawk.sh input.txt
NAME:METHOD:URL:TAG
a:b:c:d
1:2:3:4
Note that it correctly exits on the fifth x y z input line. It's worth pointing out that because the while loop is the target of a pipe, it executes in a sub-shell and is therefore unable to alter the environment (including environment variables) of its parent script. If that is required, then don't use a pipe, use redirection and process substitution instead:
while read -r NAME METHOD URL TAG ; do
    : # do stuff with $NAME, $METHOD, $URL, $TAG
    echo "$NAME:$METHOD:$URL:$TAG"
done < <(awk '(/^#/ || NF == 0) {next}; NF != 4 { printf "%s:%s:Wrong number of fields\n", FILENAME, NR > "/dev/stderr"; exit 1 }; {print}' input.txt)
# getting the exit code from the <(...) requires bash 4.4 or newer:
wait $!
if [ "$?" -ne 0 ] ; then
    : # something went wrong in the process substitution, deal with it
fi
Alternatively, you can use the coproc built-in to run the awk script in the background as a co-process:
# By default, array var $COPROC holds the co-process' stdout and
# stdin file descriptors. See `help coproc`.
coproc {
    awk '(/^#/ || NF == 0) {next}; NF != 4 { printf "%s:%s:Wrong number of fields\n", FILENAME, NR > "/dev/stderr"; exit 1 }; {print}' input.txt
}
awkpid="$!"
#declare -p COPROC # uncomment to see the FDs
while read -r NAME METHOD URL TAG ; do
    echo "$NAME:$METHOD:$URL:$TAG"
done <&"${COPROC[0]}"
wait "$awkpid"
echo "$?"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/242744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
242,946
I am trying to sum certain numbers in a column using awk . I would like to sum just column 3 of the "smiths" to get a total of 212. I can sum the whole column using awk but not just the "smiths". I have: awk 'BEGIN {FS = "|"} ; {sum+=$3} END {print sum}' filename.txt Also I am using putty. Thank you for any help.
smiths|Login|2
olivert|Login|10
denniss|Payroll|100
smiths|Time|200
smiths|Logout|10
awk -F '|' '$1 ~ /smiths/ {sum += $3} END {print sum}' inputfilename The -F flag sets the field separator; I put it in single quotes because it is a special shell character. Then $1 ~ /smiths/ applies the following {code block} only to lines where the first field matches the regex /smiths/ . The rest is the same as your code. Note that since you're not really using a regex here, just a specific value, you could just as easily use: awk -F '|' '$1 == "smiths" {sum += $3} END {print sum}' inputfilename Which checks string equality. This is equivalent to using the regex /^smiths$/ , as mentioned in another answer, which includes the ^ anchor to only match the start of the string (the start of field 1) and the $ anchor to only match the end of the string. Not sure how familiar you are with regexes. They are very powerful, but for this case you could use a string equality check just as easily.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/242946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142178/" ] }
242,995
Say I have a folder: ./folder/ Inside it there are many files and even sub-directories. When I execute: mkdir -p folder I won't see any errors or even warnings. So I just want to confirm: is there anything lost or changed as a result of this command?
mkdir -p will not give you an error if the directory already exists, and the contents of the directory will not change. Manual entry for mkdir
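A quick way to convince yourself, as a sketch in a scratch directory:
mkdir -p folder/sub && touch folder/sub/file   # create some content
mkdir -p folder                                # run it again on the existing directory
ls -R folder                                   # contents are still there, no error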
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/242995", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
243,001
Title was "Running yum update && yum upgrade without root-privileges?" I want to allow an unprivileged user to start a system update on a RHEL-based system (CentOS 7). yum-cron is not an alternative, because the user should be flexible e.g. he should be able to decide when to shutdown the machine. Any idea?
You can simply set up your sudoers file and allow a user or a group to execute this one specific command. The syntax would look somewhat like this in the file /etc/sudoers (edit with visudo!): user ALL=(root) NOPASSWD: /usr/bin/yum update root If you omit the NOPASSWD part, the user will have to provide his password. The users will get to run this simply as sudo yum update. For more information, you can consult the manual page sudoers(5)
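On most systems it is safer to keep such a rule in a drop-in file and let visudo check the syntax; the file name here is an assumption:
# Edit the rule with locking and syntax checking
sudo visudo -f /etc/sudoers.d/yum-update
# Verify what the user may now run
sudo -l -U user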
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243001", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120227/" ] }
243,083
I have a text file named abd shown below.
48878 128.206.6.136
34782 128.206.6.137
12817 23.234.22.106
I want to extract only the IP address from the text, store it in a variable, and use it for other purposes. I have tried this:
for line in `cat abd`
do
ip=`grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $line`
echo $ip
done
I am getting an error as follows:
grep: 34782: No such file or directory
grep: 128.206.6.137: No such file or directory
grep: 12817: No such file or directory
grep: 23.234.22.106: No such file or directory
I don't know what is going wrong here. Any help would be appreciated.
You almost had it right the first time. The awk answer is good for your specific case, but the reason you were receiving an error is because you were trying to use grep as if it were searching for a file instead of a variable. Also, when using regular expressions, I always use grep -E just to be safe. I have also heard that backticks are deprecated and should be replaced with $(). The correct way to grep a variable on shells that support herestrings is using input redirection with 3 of these guys: <, so your grep command (the $ip variable) should actually read as follows: ip="$(grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' <<< "$line")" If it is a file you are searching, I always use a while loop, since it is guaranteed to go line-by-line, whereas for loops often get thrown off if there is any weird spacing. You are also implementing a useless use of cat, which could be replaced by input redirection as well. Try this:
while read line; do
    ip="$(grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' <<< "$line")"
    echo "$ip"
done < "abd"
Also, I don't know what OS or version of grep you are using, but the escape character you had before the curly braces is usually not required whenever I have used this command in the past. It could be from using grep -E or because I use it in quotes and without backticks -- I don't know. You can try it with or without and just see what happens. Whether you use a for loop or a while loop is based on which one works for you in your specific situation and whether execution time is of utmost importance. It doesn't appear to me as if OP is trying to assign separate variables to each IP address, but that he wants to assign a variable to each IP address within the line so that he can use it within the loop itself -- in which case he only needs a single $ip variable per iteration. I'm sticking to my guns on this one.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138913/" ] }
243,095
If I have a directory containing some files whose names have spaces, e.g.
$ ls -1 dir1
file 1
file 2
file 3
I can successfully copy all of them to another directory like this:
$ find dir1 -mindepth 1 -exec cp -t dir2 {} +
However, the output of find dir1 -mindepth 1 contains un-escaped spaces:
$ find dir1 -mindepth 1
dir1/file 1
dir1/file 2
dir1/file 3
If I use -print0 instead of -print, the output still contains un-escaped spaces:
$ find dir1 -mindepth 1 -print0
dir1/file 1dir1/file 2dir1/file 3
To copy these files manually using cp, I would need to escape the spaces; but it seems that this is unnecessary when cp's arguments come from find, irrespective of whether I use + or \; at the end of the command. What's the reason for this?
The find command executes the command directly. The command, including the filename argument, will not be processed by the shell or anything else that might modify the filename. It's very safe. You are correct that there's no need to escape filenames which are represented by {} on the find command line. find passes the raw filename from disk directly into the internal argument list of the -exec command, in your case, the cp command.
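A small experiment makes this visible: printf run via -exec receives each file name as one intact argument, spaces included.
mkdir -p dir1 && touch "dir1/file 1" "dir1/file 2"
find dir1 -mindepth 1 -exec printf '<%s>\n' {} +
# <dir1/file 1>
# <dir1/file 2>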
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
243,134
I am trying to download two files by the following syntax: curl -O http://domain/path/to/{file1,file2} The problem is that only the first file is actually saved locally, and the second was simply printed to stdout. I do realized that if I add a -O it works just fine: curl -OO http://domain/path/to/{file1,file2} But isn't this impractical if the number of files grows too big? For example, curl -O http://domain/path/to/file[1,100] My question is, is there really no way to download multiple individual files at once with curl (without adding a correct number of -O )?
This has been implemented in curl 7.19.0. See @Besworks answer. According to the man page there is no way to keep the original file name except using multiple O s. Alternatively you could use your own file names: curl http://{one,two}.site.example -o "file_#1.txt" resulting in http://one.site.example being saved to file_one.txt and http://two.site.example being saved to file_two.txt . Multiple variables even work. Like: curl http://{site,host}.host[1-5].example -o "#1_#2" resulting in http://site.host1.example being saved to site_1 , http://host.host1.example being saved to host_1 and so on.
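Since curl 7.19.0 (the version mentioned above) there is also a flag that applies -O to every URL, so a long list or a numeric range does not need one -O per file; for example:
# Save every expanded URL under its remote file name
curl --remote-name-all "http://domain/path/to/file[1-100]"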
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91981/" ] }
243,142
ps -o command shows each command on a separate line, with space separated, unquoted arguments:
$ ps -o command
COMMAND
bash
ps -o command
This can be a problem when checking whether the quoting was correct or to copy and paste a command to run it again. For example:
$ xss-lock --notifier="notify-send -- 'foo bar'" slock &
[1] 20172
$ ps -o command | grep [x]ss-lock
xss-lock --notifier=notify-send -- 'foo bar' slock
The output of ps is misleading - if you try to copy and paste it, the command will not do the same thing as the original. So is there a way, similar to Bash's printf %q, to print a list of running commands with correctly escaped or quoted arguments?
On Linux, you can get a slightly more raw list of args to a command from /proc/$pid/cmdline for a given process id. The args are separated by the nul char. Try cat -v /proc/$pid/cmdline to see the nuls as ^@ , in your case: xss-lock^@--notifier=notify-send -- 'foo bar'^@slock^@ . The following perl script can read the proc file and replace the nuls by a newline and tab, giving for your example: xss-lock --notifier=notify-send -- 'foo bar' slock Alternatively, you can get a requoted command like this: xss-lock '--notifier=notify-send -- '\''foo bar'\''' 'slock' if you replace the if(1) by if(0) :
perl -e '
$_ = <STDIN>;
if(1){ s/\000/\n\t/g;
       s/\t$//; # remove last added tab
}else{
       s/'\''/'\''\\'\'\''/g; # escape all single quotes
       s/\000/ '\''/; # do first nul
       s/(.*)\000/\1'\''/; # do last nul
       s/\000/'"' '"'/g; # all other nuls
}
print "$_\n";' </proc/$pid/cmdline
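If re-quoting is not needed, a shorter sketch simply translates the NUL separators to newlines:
# One argument per line; $pid is the process id of interest
tr '\0' '\n' < /proc/$pid/cmdline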
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243142", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
243,195
From many docs, I read that startx starts LXDE in Raspbian OS. I am a little bit confused. Will startx always run the LXDE GUI? Also I have seen examples using the startlxde command. How is that command different, and why do startx and startlxde run the same GUI (LXDE)? Or maybe it runs it because it is the default GUI? How can I choose the default GUI if I have multiple ones? Could you please explain more details around the GUI in Linux systems?
startx runs xinit which starts an X server and a client session. The client session is ~/.xinitrc if present, and otherwise /etc/X11/xinit/xinitrc (the location may vary between distributions). What this script does varies between distributions. On Debian (including derivatives such as Raspbian), /etc/X11/xinit/xinitrc runs /etc/X11/Xsession which in turn runs scripts in /etc/X11/Xsession.d . The Debian scripts look for a user session in other files ( ~/.xsession , ~/.xsessionrc , ~/.Xsession ) and, if no user setting is applicable, runs x-session-manager (falling back to x-window-manager if no session manager is installed, falling back to x-terminal-emulator in the unlikely case that no window manager is installed). If you want control over what gets executed, you can create one of the user files, either ~/.xsession or ~/.xinitrc . The file ~/.xsession is also used if you log in on a display manager (i.e. if you type your password in a GUI window). The file ~/.xinitrc is specific to xinit and startx . Using ~/.xsession goes through /etc/X11/Xsession so it sets up things like input methods, resources, password agents, etc. If you use .xinitrc , you'll have to do all of these manually. Once again, I'm describing Debian here, other Unix variants might set things up differently. The use of ~/.xinitrc to specify what gets executed when you run startx or xinit is universal. Whether you use ~/.xinitrc or ~/.xsession , this file (usually a shell script, but it doesn't have to be if you really want to use something else) must prepare whatever needs to be prepared (e.g. keyboard settings, resources, applets that aren't started by the window manager, etc.), and then at the end run the program that manages the session. When the script ends, the session terminates. Typically, you would use exec at the end of the script, to replace the script by the session manager or window manager. Your system presumably has /usr/bin/startlxde as the system-wide default session manager. On Debian and derivatives, you can check the available session managers with update-alternatives --list x-session-manager or get a more verbose description indicating which one is current with update-alternatives --display x-session-manager If LXDE wasn't the system-wide default and you wanted to make it the default for your account, you could use the following ~/.xsession file:
#!/bin/sh
exec startlxde
On some Unix variants, that would only run for graphical logins, not for startx , so you'd also need to create an identical ~/.xinitrc . (Or not identical: in ~/.xsession , you might want to do other things, because that's the first file that's executed in a graphical session; for example you might put . ~/.profile near the top, to set some environment variables.) If you want to try out other environments as a one-off, you can specify a different program to run on the command line of startx itself. The startx program has a quirk: you need to use the full path to the program. startx /usr/bin/startkde The startx command also lets you specify arguments to pass to the server. For example, if you want to run multiple GUI sessions at the same time, you can pass a different display number each time. Pass server arguments after -- on the command line of startx . startx /usr/bin/startkde -- :1
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/243195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39990/" ] }
243,207
In the following file: Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut eu metus id lectus vestibulum ultrices. Maecenas rhoncus. I want to delete everything before consectetuer and everything after elit . My desired output: consectetuer adipiscing elit. How can I do this?
I'd use sed
sed 's/^.*\(consectetuer.*elit\).*$/\1/' file
Decoding the sed s/find/replace/ syntax:
s/^.* -- substitute, starting at the beginning of the line ( ^ ) followed by anything ( .* ) up to...
\( - start a named block
consectetuer.*elit\. - match the first word, everything ( .* ) up to the last word (in this case, including the trailing (escaped) dot) you want to match
\) - end the named block
.*$ - match everything else ( .* ) to the end of the line ( $ )
/ - end the substitute find section
\1 - replace with the named block between the \( and the \) above
/ - end the replace
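If you only need the matching part rather than an edited line, grep -o performs the same extraction; a sketch:
# Print just the text from "consectetuer" through "elit." on matching lines
grep -o 'consectetuer.*elit\.' file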
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/243207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143276/" ] }
243,241
I am working on CentOS 6.4 and I am new to this operating system. I was downloading a 5 GB file using the wget command. I observed that it was trying to download the file from a different IP address (54.240.168.41), which was blocked by the proxy server. So I got this specific IP address opened by the network support and the download started working. Since it was a huge file, I left it to complete the execution overnight. Next morning, due to some network error, the download stopped; only 42% was downloaded. I tried to resume the download using the -c option of the wget command. However, wget keeps trying to connect to different IP addresses starting with 54.xxx.xxx.xxx, except the IP address 54.240.168.41. My question is, how would I tell wget to download from a specific IP address which is NOT blocked by the network? This is the command that I am executing: wget --continue http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.2.0/HDP-2.3.2.0-centos6-rpm.tar.gz
This worked for me when switching DNS: I needed to access the old server by IP, but specified the Host header to route to my account on the old server. wget http://198.38.82.5/something.tar.gz --header "Host: domain-at-server.net"
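curl can do the same thing; its --resolve option pins the host name to the old IP so no hand-written Host header is needed. A sketch with the same placeholder host and address:
curl --resolve domain-at-server.net:80:198.38.82.5 -O http://domain-at-server.net/something.tar.gz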
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59999/" ] }
243,317
I need to run a command, and then run the same command again with just one string changed. For example, I run the command $ ./myscript.sh xxx.xxx.xxx.xxx:8080/code -c code1 -t query Now from there, without going back in the command history (via the up arrow), I need to replace code1 with mycode or some other string. Can it be done in Bash?
I renamed your script, but here's an option:
$ ./myscript.sh xxx.xxx.xxx.xxx:8080/code -c code1 -t query
after executing the script, use:
$ ^code1^code2
... which results in:
./myscript.sh xxx.xxx.xxx.xxx:8080/code -c code2 -t query
man bash and search for "Event Designators": ^string1^string2^ Quick substitution. Repeat the last command, replacing string1 with string2. Equivalent to !!:s/string1/string2/ Editing to add global replacement, which I learned just now from @slm's answer at https://unix.stackexchange.com/a/116626/117549 :
$ !!:gs/string1/string2
which says:
!! - recall the last command
g - perform the substitution over the whole line
s/string1/string2 - replace string1 with string2
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/243317", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19072/" ] }
243,350
According to its documentation, bash waits until all commands in a pipeline have finished running before continuing The shell waits for all commands in the pipeline to terminate before returning a value. So why does the command yes | true finish immediately? Shouldn't the yes loop forever and cause the pipeline to never return? And a subquestion: according to the POSIX spec , shell pipelines may choose to either return after the last command finishes or wait until all the commands finish. Do common shells have different behavior in this sense? Are there any shells where yes | true will loop forever?
When true exits, the read side of the pipe is closed, but yes continues trying to write to the write side. This condition is called a "broken pipe", and it causes the kernel to send a SIGPIPE signal to yes . Since yes does nothing special about this signal, it will be killed. If it ignored the signal, its write call would fail with error code EPIPE . Programs that do that have to be prepared to notice EPIPE and stop writing, or they will go into an infinite loop. If you do strace yes | true [1] you can see the kernel preparing for both possibilities:
write(1, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 4096) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=17556, si_uid=1000} ---
+++ killed by SIGPIPE +++
strace is watching events via the debugger API, which first tells it about the system call returning with an error, and then about the signal. From yes 's perspective, though, the signal happens first. (Technically, the signal is delivered after the kernel returns control to user space, but before any more machine instructions are executed, so the write "wrapper" function in the C library does not get a chance to set errno and return to the application.) [1] Sadly, strace is Linux-specific. Most modern Unixes have some command that does something similar, but it often has a different name, it probably doesn't decode syscall arguments as thoroughly, and sometimes it only works for root.
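In bash you can see the signal without strace: the pipeline's per-command exit statuses show yes being killed by SIGPIPE (128 + 13 = 141):
yes | true
echo "${PIPESTATUS[@]}"   # prints: 141 0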
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/243350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
243,357
I have a list of parent folders; inside every parent folder I have sub-folders and files. How can I empty the parent folders, i.e. remove all the files and sub-folders and leave the parent folders empty?
Parent folder A
  subfolder aa
  file a
Parent folder B
  file b
  file vv
Parent folder C
  subfolder s
  subfolder n
  file x
....
With GNU find : find "Parent folder A" "Parent folder B" ... -mindepth 1 -delete
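If -delete is not available, a sketch with -exec achieves the same effect on finds that support -mindepth/-maxdepth, by removing each top-level entry of every parent folder:
# Remove everything inside each parent folder, keeping the folders themselves
find "Parent folder A" "Parent folder B" -mindepth 1 -maxdepth 1 -exec rm -rf -- {} +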
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
243,428
I have been searching for a solution to my question but didn't find one, or better said, I did not get it with what I found. My problem is: I am using a Smart Home Control Software on a Raspberry Pi. Using pilight-receive, I can capture the data from my outdoor temperature sensor. The output of pilight-receive looks like this:
{ "message": { "id": 4095, "temperature": 409.5 }, "origin": "receiver", "protocol": "alecto_wsd17", "uuid": "0000-b8-27-eb-0f3db7", "repeats": 3 }
{ "message": { "id": 1490, "temperature": 25.1, "humidity": 40.0, "battery": 1 }, "origin": "receiver", "protocol": "alecto_ws1700", "uuid": "0000-b8-27-eb-0f3db7", "repeats": 3 }
{ "message": { "id": 2039, "temperature": 409.5 }, "origin": "receiver", "protocol": "alecto_wsd17", "uuid": "0000-b8-27-eb-0f3db7", "repeats": 4 }
Now my question is: how can I extract the temperature and humidity from messages where the id is 1490? And how would you recommend I check this frequently? By a cron job that runs every 10 minutes, creates an output of pilight-receive, extracts the data from the output and pushes it to the Smart Home Control API?
You can use jq to process json files in shell. For example, I saved your sample json file as raul.json and then ran:
$ jq .message.temperature raul.json
409.5
25.1
409.5
$ jq .message.humidity raul.json
null
40
null
jq is available pre-packaged for most linux distros. There's probably a way to do it in jq itself, but the simplest way I found to get both the wanted values on one line is to use xargs. For example:
$ jq 'select(.message.id == 1490) | .message.temperature, .message.humidity' raul.json | xargs
25.1 40
or, if you want to loop through each .message.id instance, we can add .message.id to the output and use xargs -n 3 as we know that there will be three fields (id, temperature, humidity):
jq '.message.id, .message.temperature, .message.humidity' raul.json | xargs -n 3
4095 409.5 null
1490 25.1 40
2039 409.5 null
You could then post-process that output with awk or whatever. Finally, both python and perl have excellent libraries for parsing and manipulating json data. As do several other languages, including php and java.
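Both wanted values can also be produced by jq alone, without xargs, using string interpolation and raw output; a sketch:
jq -r 'select(.message.id == 1490) | "\(.message.temperature) \(.message.humidity)"' raul.json
# 25.1 40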
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/243428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143439/" ] }
243,484
I want to convert the output of the command ps to JSON in order to process it as structured data (with jq in this particular case). How do I do that? The output looks like the following:
PID TTY          TIME CMD
20162 pts/2    00:00:00 ps
28280 pts/2    00:00:02 zsh
The header row is always present.
There are two obvious ways to represent columnar data output in JSON: as an array of arrays and as an array of objects. In the former case you convert each line of the input to an array; in the latter, to an object. The commands listed below work at least with the output of procps-ng on Linux for the commands ps and ps -l .
Option #1: array of arrays
Using Perl: you can convert the output using Perl and the CPAN module JSON::XS .
# ps | perl -MJSON -lane 'my @a = @F; push @data, \@a; END { print encode_json \@data }'
[["PID","TTY","TIME","CMD"],["12921","pts/2","00:00:00","ps"],["12922","pts/2","00:00:00","perl"],["28280","pts/2","00:00:01","zsh"]]
Using jq: alternatively, you can use jq itself to perform the conversion.
# ps | jq -sR '[sub("\n$";"") | splits("\n") | sub("^ +";"") | [splits(" +")]]'
[ [ "PID", "TTY", "TIME", "CMD" ], [ "16694", "pts/2", "00:00:00", "ps" ], [ "16695", "pts/2", "00:00:00", "jq" ], [ "28280", "pts/2", "00:00:02", "zsh" ] ]
Option #2: array of objects
You can convert the input to an array of JSON objects with meaningfully named keys by taking the key names from the header row. This requires a little more effort and is slightly trickier in jq in particular. However, the result is arguably more human-readable.
Using Perl:
# ps | perl -MJSON -lane 'if (!@keys) { @keys = @F } else { my %h = map {($keys[$_], $F[$_])} 0..$#keys; push @data, \%h } END { print encode_json \@data }'
[{"TTY":"pts/2","CMD":"ps","TIME":"00:00:00","PID":"11030"},{"CMD":"perl","TIME":"00:00:00","PID":"11031","TTY":"pts/2"},{"TTY":"pts/2","CMD":"zsh","TIME":"00:00:01","PID":"28280"}]
Note that the keys are in arbitrary order for each entry. This is an artifact of how Perl's hashes work.
Using jq:
# ps | jq -sR '[sub("\n$";"") | splits("\n") | sub("^ +";"") | [splits(" +")]] | .[0] as $header | .[1:] | [.[] | [. as $x | range($header | length) | {"key": $header[.], "value": $x[.]}] | from_entries]'
[ { "PID": "19978", "TTY": "pts/2", "TIME": "00:00:00", "CMD": "ps" }, { "PID": "19979", "TTY": "pts/2", "TIME": "00:00:00", "CMD": "jq" }, { "PID": "28280", "TTY": "pts/2", "TIME": "00:00:02", "CMD": "zsh" } ]
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61562/" ] }
243,490
I'm being trolled by China, and don't know why I can't block their request to my server.
//host.deny
ALL: item.taobao.com
ALL: 117.25.128.*
But when I watch the error log on my webserver tail -f /var/log/apache2/error.log the requests are still being allowed through. Question: Why isn't my /etc/hosts.deny config working?
The file is called /etc/hosts.deny , not host.deny . Not all services use tcp-wrappers. sshd , for example, doesn't by default. Neither does apache. You can use iptables to block all packets from 117.25.128.0/24, e.g.: iptables -I INPUT -s 117.25.128.0/24 -j DROP Even better, you can use fail2ban to monitor a log file (such as apache's access.log and/or error.log) and automatically block IP addresses trying to attack your server. From the debian fail2ban package description: Fail2ban monitors log files (e.g. /var/log/auth.log, /var/log/apache/access.log) and temporarily or persistently bans failure-prone addresses by updating existing firewall rules. Fail2ban allows easy specification of different actions to be taken such as to ban an IP using iptables or hosts.deny rules, or simply to send a notification email. By default, it comes with filter expressions for various services (sshd, apache, qmail, proftpd, sasl etc.) but configuration can be easily extended for monitoring any other text file. All filters and actions are given in the config files, thus fail2ban can be adopted to be used with a variety of files and firewalls.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143473/" ] }
243,509
I am using Trisquel 7.0 with Nautilus 3.10.1 installed. Whenever I display the properties of a file, I see one file-specific tab like: Image, Audio/Video, Document, etc. which displays special information about it. Example for an Image: Example for a PDF Document: How does Nautilus get this type of file-specific information? And how do I print this information (metadata) with the command-line?
For the first level of information in the command line, you can use file . $ file gtu.pdf gtu.pdf: PDF document, version 1.4 For most formats, and more detailed information, you can also use Exiftool : NAME exiftool - Read and write meta information in filesSYNOPSIS exiftool [OPTIONS] [-TAG...] [--TAG...] FILE... exiftool [OPTIONS] -TAG[+-<]=[VALUE]... FILE... exiftool [OPTIONS] -tagsFromFile SRCFILE [-SRCTAG[>DSTTAG]...] FILE... exiftool [ -ver | -list[w|f|r|wf|g[NUM]|d|x] ] For specific examples, see the EXAMPLES sections below. This documentation is displayed if exiftool is run without an input FILE when one is expected.DESCRIPTION A command-line interface to Image::ExifTool, used for reading and writing meta information in a variety of file types. FILE is one or more source file names, directory names, or "-" for the standard input. Information is read from source files and printed in readable form to the console (or written to output text files with -w). Example: $ exiftool IMG_20151104_102543.jpg ExifTool Version Number : 9.46File Name : IMG_20151104_102543.jpgDirectory : .File Size : 2.8 MBFile Modification Date/Time : 2015:11:04 10:25:44+05:30File Access Date/Time : 2015:11:17 18:56:49+05:30File Inode Change Date/Time : 2015:11:11 14:55:43+05:30File Permissions : rwxrwxrwxFile Type : JPEGMIME Type : image/jpegExif Byte Order : Big-endian (Motorola, MM)GPS Img Direction : 0GPS Date Stamp : 2015:11:04GPS Img Direction Ref : Magnetic NorthGPS Time Stamp : 04:55:43Camera Model Name : Micromax A121Aperture Value : 2.1Interoperability Index : R98 - DCF basic file (sRGB)Interoperability Version : 0100Create Date : 2002:12:08 12:00:00Shutter Speed Value : 1/808Color Space : sRGBDate/Time Original : 2015:11:04 10:25:44Flashpix Version : 0100Exif Image Height : 2400Exif Version : 0220Exif Image Width : 3200Focal Length : 3.5 mmFlash : Auto, Did not fireExposure Time : 1/809ISO : 100Components Configuration : Y, Cb, Cr, -Y Cb Cr Positioning : CenteredY Resolution : 72Resolution Unit : inchesX Resolution : 72Make : MicromaxCompression : JPEG (old-style)Thumbnail Offset : 640Thumbnail Length : 12029Image Width : 3200Image Height : 2400Encoding Process : Baseline DCT, Huffman codingBits Per Sample : 8Color Components : 3Y Cb Cr Sub Sampling : YCbCr4:2:0 (2 2)Aperture : 2.1GPS Date/Time : 2015:11:04 04:55:43ZImage Size : 3200x2400Shutter Speed : 1/809Thumbnail Image : (Binary data 12029 bytes, use -b option to extract)Focal Length : 3.5 mmLight Value : 11.9 There are also specific commands for some type of files, like pdf : $ pdfinfo gtu.pdf Title: Microsoft Word - Thermax LtdAuthor: UserCreator: PScript5.dll Version 5.2.2Producer: GPL Ghostscript 8.15CreationDate: Tue Jan 27 11:51:38 2015ModDate: Tue Jan 27 12:30:40 2015Tagged: noForm: nonePages: 1Encrypted: noPage size: 612 x 792 pts (letter)Page rot: 0File size: 64209 bytesOptimized: yesPDF version: 1.4
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
243,554
I am currently visiting TU Wien and today I connected my Debian Linux laptop to their eduroam wlan using wpa_supplicant and the credentials of my home institute, as always when I am visiting another scientific institution. When I opened a terminal I noticed that my command prompt was showing a different host name, and in fact, executing hostname gave me e244-082.eduroam.tuwien.ac.at instead of the usual host name of my machine, x301 . I am very puzzled by this. How on earth can it be possible that connecting to a wlan changes my host name without my consent?
Some DHCP servers send out host names. Clients can accept or ignore such offers. Have a look at your local /etc/dhcp/dhclient.conf file to inspect your current configuration. There is a list of request entries, one of which will probably read host-name . For more information check out the man page of dhclient.conf .
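If you want to keep your own host name regardless of what the DHCP server offers, dhclient.conf can override the value; a minimal sketch using the host name from the question:
# /etc/dhcp/dhclient.conf
supersede host-name "x301";   # ignore the server-supplied name, keep the local one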
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64908/" ] }
243,558
I have just installed debian jessie and have installed laptop-mode-tools . However, looking on the Laptop Mode Tools page, not only is there a plethora of options, but it also says this software should be used "combined with acpid and CPU frequency scaling". This all seems overly complicated. Is there an easy way to set up some common defaults for a laptop? I don't need finely tuned settings specific to my hardware, just some common trade-offs between power and performance tuned towards power savings when I take the cable out of my laptop. How can I do this?
This page should cover some of your questions. http://wiki.yobi.be/wiki/Debian_on_laptop And of course, we also need to mention the official page from the Linux Documentation Project. http://www.tldp.org/HOWTO/html_single/Battery-Powered/ Frankly, nowadays with a laptop whose battery lasts from 6 to 9 hours, I do not obsess so much over this stuff. It might also be worth having a look at CPU frequency scaling, both to save laptop battery and to prevent possible overheating issues in small ARM boards and IoT devices.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
243,613
Is there a difference between the sequences {1,2,3} and {1..3} ? For example if I have some files file.1file.2file.3 and I want to cat them together is it safe to use cat file.{1..3} > file ? What I know is that cat file.*>file could cause problems because the shell can expand the files in a random way sometimes (I think this depends on the inodes, does it?)
{1..3} and {1,2,3} produce the same result, but in different ways. In general, {n1..n2} (which came first from zsh ; bash and ksh copied it later), where n1 and n2 are integers, produces all numbers between n1 and n2 . {x,y,z} , on the other hand, produces the three strings x , y and z . In your case, you're safe to use cat file.{1..3} > file Now, in the case of cat file.*>file , you used shell globbing, which produces all file names starting with file. , and the result will be sorted based on the collation order in the current locale. You are still safe, but not anymore when you have more than 10 files. {1..10} will give you 1 2 3 4 5 6 7 8 9 10 . While with globbing, you will get 1 10 2 3 4 5 6 7 8 9
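The difference in ordering is easy to demonstrate in an empty scratch directory:
mkdir demo && cd demo && touch file.{1..10}
echo file.{1..10}   # file.1 file.2 ... file.10 (numeric order, no filesystem lookup)
echo file.*         # file.1 file.10 file.2 ... file.9 (collation order of the glob)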
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41449/" ] }
243,657
I'm modifying a bunch of initramfs archives from different Linux distros in which normally only one file is being changed. I would like to automate the process without switching to the root user to extract all files inside the initramfs image and pack them again. First I've tried to generate a list of files for gen_init_cpio without extracting all contents of the initramfs archive, i.e. parsing the output of cpio -tvn initrd.img (like ls -l output) through a script which changes all permissions to octal and arranges the output to the format gen_init_cpio wants, like:
dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
slink /bin/sh busybox 777 0 0
file /bin/busybox initramfs/busybox 755 0 0
This involves some replacements and the script may be hard for me to write, so I've found a better way and I'm asking how safe and portable it is: In some distros we have an initramfs file with concatenated parts, and apparently the kernel parses the whole file, extracting all parts packed on a 1-byte boundary, so there is no need to pad each part to a multiple of 512 bytes. I thought this 'feature' could be useful for me to avoid recreating the archive when modifying files inside it. Indeed it works, at least for Debian and CloneZilla . For example if we have modified the /init file on initrd.gz of Debian 8.2.0, we can append it to the initrd.gz image with:
$ echo ./init | cpio -H newc -o | gzip >> initrd.gz
so initrd.gz has two concatenated archives, the original and its modifications. Let's see the result of binwalk :
DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             gzip compressed data, maximum compression, has original file name: "initrd", from Unix, last modified: Tue Sep 1 09:33:08 2015
6299939       0x602123        gzip compressed data, from Unix, last modified: Tue Nov 17 16:06:13 2015
It works perfectly. But is it reliable? What restrictions do we have when appending data to initramfs files? Is it safe to append without padding the original archive to a multiple of 512 bytes? From which kernel version is this feature supported?
It's very reliable and supported by all kernel versions that support initrd, AFAIK. It's a feature of the cpio archives that initramfs are made up of. cpio just keeps on extracting its input....we might know the file is two cpio archives one after the other, but cpio just sees it as a single input stream. Debian advises use of exactly this method (appending another cpio to the initramfs) to add binary-blob firmware to their installer initramfs. For example: DebianInstaller / NetbootFirmware | Debian Wiki Initramfs is essentially a concatenation of gzipped cpio archives which are extracted into a ramdisk and used as an early userspace by the Linux kernel. Debian Installer's initrd.gz is in fact a single gzipped cpio archive containing all the files the installer needs at boot time. By simply appending another gzipped cpio archive - containing the firmware files we are missing - we get the show on the road!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110057/" ] }
243,756
I haven't found a slam-dunk document on this, so let's start one. On a CentOS 7.1 host, I have gone through the linuxconfig HOW-TO , including the firewall-cmd entries, and I have an exportable filesystem.
[root@<server> ~]# firewall-cmd --list-all
internal (default, active)
  interfaces: enp5s0
  sources: 192.168.10.0/24
  services: dhcpv6-client ipp-client mdns ssh
  ports: 2049/tcp
  masquerade: no
  forward-ports:
  rich rules:
[root@<server> ~]# showmount -e localhost
Export list for localhost:
/export/home/<user> *.localdomain
However, if I showmount from the client, I still have a problem.
[root@<client> ~]# showmount -e <server>.localdomain
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
Now, how am I sure that this is a firewall problem? Easy. Turn off the firewall. Server side:
[root@<server> ~]# systemctl stop firewalld
And client side:
[root@<client> ~]# showmount -e <server>.localdomain
Export list for <server>.localdomain:
/export/home/<server> *.localdomain
Restart firewalld. Server side:
[root@<server> ~]# systemctl start firewalld
And client side:
[root@<client> ~]# showmount -e <server>.localdomain
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
So, let's go to town, by adapting the iptables commands from a RHEL 6 NFS server HOW-TO ...
[root@ ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp
success
[root@<server> ~]# firewall-cmd \
> --add-port=111/tcp \
> --add-port=111/udp \
> --add-port=892/tcp \
> --add-port=892/udp \
> --add-port=875/tcp \
> --add-port=875/udp \
> --add-port=662/tcp \
> --add-port=662/udp \
> --add-port=32769/udp \
> --add-port=32803/tcp \
> --permanent
success
[root@<server> ~]# firewall-cmd --list-all
internal (default, active)
  interfaces: enp5s0
  sources: 192.168.0.0/24
  services: dhcpv6-client ipp-client mdns ssh
  ports: 32803/tcp 662/udp 662/tcp 111/udp 875/udp 32769/udp 875/tcp 892/udp 2049/tcp 892/tcp 111/tcp
  masquerade: no
  forward-ports:
  rich rules:
This time, I get a slightly different error message from the client:
[root@<client> ~]# showmount -e <server>.localdomain
rpc mount export: RPC: Unable to receive; errno = No route to host
So, I know I'm on the right track. Having said that, why can't I find a definitive tutorial on this anywhere? I can't have been the first person to have to figure this out! What firewall-cmd entries am I missing? Oh, one other note. My /etc/sysconfig/nfs files on the CentOS 6 client and the CentOS 7 server are unmodified, so far. I would prefer to not have to change (and maintain!) them, if at all possible.
This should be enough:
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/243756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31778/" ] }
243,757
I use ubuntu 14. About three days ago computer started hanging when I work - I even can't move mouse, so I press power button to restart the computer. It happens about 1-7 times an hour. For example I work in open office and it hangs. Or I work with console - it hangs. The log is below. What does it mean and how to fix it? Nov 17 21:18:16 pavel-desktop kernel: [ 275.292875] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.292908] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.292929] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.292938] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0f04 data 0x3f800000Nov 17 21:18:16 pavel-desktop kernel: [ 275.293036] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.293048] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.293057] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.293071] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.304073] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.304093] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.304105] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.304120] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.304156] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.304187] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.304202] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.304376] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.304388] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.304397] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.304411] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.329457] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.329477] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.329488] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.329504] nouveau E[ PFB][0000:01:00.0] trapped read 
at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.329541] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.329566] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.329575] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.329737] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.329750] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.329758] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.329772] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.338336] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.338357] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.338368] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.338384] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.338415] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.338443] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.338452] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.338618] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.338630] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.338639] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.338654] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.354981] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.355001] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.355013] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.355031] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.355061] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.355085] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.355097] nouveau E[ PGRAPH][0000:01:00.0] ch 4 
[0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0f04 data 0x3f800000Nov 17 21:18:16 pavel-desktop kernel: [ 275.355195] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.355206] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.355214] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.355229] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.371365] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.371386] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.371397] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.371413] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.371440] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.371459] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.371465] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.371624] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.371636] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.371645] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.371659] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.387885] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.387905] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.387916] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.387931] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.387952] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.387962] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.387970] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.388133] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.388145] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.388153] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 
compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.388168] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.405478] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.405500] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.405514] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.405532] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.405554] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.405565] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.405573] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.405733] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.405745] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.405753] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.405767] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.424274] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.424290] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.424299] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.424313] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.424449] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.424459] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.424464] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.424475] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.437337] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.437351] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.437360] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.437373] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: 
DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.437388] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.437396] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.437401] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.437561] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.437569] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.437575] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.437586] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.458990] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.459003] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.459008] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.459019] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.459041] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.459067] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.459072] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.459232] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.459239] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.459242] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.459252] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6380 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.471067] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.471088] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.471102] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.471119] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.471140] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.471151] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.471160] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 
21:18:16 pavel-desktop kernel: [ 275.471320] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.471332] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.471340] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x1b0c data 0x1000f010Nov 17 21:18:16 pavel-desktop kernel: [ 275.471355] nouveau E[ PFB][0000:01:00.0] trapped read at 0x00204b6180 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.487181] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.487202] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.487215] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x15e0 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.487233] nouveau E[ PFB][0000:01:00.0] trapped read at 0x002093a000 on channel 0x0000f94c [compiz[2135]] PGRAPH/VFETCH/00 reason: DMAOBJ_LIMITNov 17 21:18:16 pavel-desktop kernel: [ 275.487264] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.487275] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.487280] nouveau E[ PGRAPH][0000:01:00.0] ch 4 [0x000f94c000 compiz[2135]] subc 3 class 0x8297 mthd 0x0dc8 data 0x00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.487444] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH FAULTNov 17 21:18:16 pavel-desktop kernel: [ 275.487456] nouveau E[ PGRAPH][0000:01:00.0] TRAP_VFETCH 00f00000 0000fe0c 00000000 00000000Nov 17 21:18:16 pavel-desktop kernel: [ 275.487465] nouveau E[ PGRAPH][0000:01
This should be enough:

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
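A quick way to confirm the services actually ended up in the configuration afterwards (a small check I would add, not part of the original answer):

firewall-cmd --list-services              # services in the running configuration
firewall-cmd --permanent --list-services  # services in the permanent configuration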
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/243757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139986/" ] }
243,762
Ok so I got this: check="${PATH//:/'\n'}" The above gets me each path in a simplified manner, and formats it nicely into a variable. Now how do I check each line for existence, and state that for each line in combination with an echo (I think I would use echo.)? I know it probably involves a for or while loop, but I'm not sure how to do this. BASH only answers please.
while read -d: dir
do
    [ -d "$dir" ] || echo "Missing: $dir"
done <<<"${PATH%:}:"

read -d: dir reads input into variable dir, breaking the input at : .
[ -d "$dir" ] tests for the existence of the directory.
|| only executes the statement that follows if the preceding statement returned false.
<<<"${PATH%:}:" provides input to the loop using a here-string. The form "${PATH%:}:" makes sure that one : follows the PATH string. This is done in two steps. The first uses suffix removal, ${PATH%:}, to remove a trailing : from the PATH if there is one. Secondly, one colon is added.
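Another bash-only sketch (my own variant, not part of the original answer) splits PATH into an array first and reports both cases, which may read closer to what the question asked for:

IFS=: read -r -a dirs <<<"$PATH"
for dir in "${dirs[@]}"; do
    if [ -d "$dir" ]; then
        echo "exists:  $dir"
    else
        echo "missing: $dir"
    fi
done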
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140356/" ] }
243,766
I remember, back from my days with Windows Vista/7, that there was a tool called memclear or memclean that would free some memory by invoking the NT garbage collection API. Probably it cleared cache too. Very often when I use Ubuntu, after a while the system stays at a couple of gigabytes allocated memory, and when I perform memory-intensive tasks such as image editing, I have to wait quite a while for the extra gigabytes to swap. Is there a way to force something like a kernel GC to free memory that really isn't used? (when I start up, memory consumption is less than a gigabyte)
From what you have posted it doesn't seem like you understand how memory works in Linux. I recommend reading:

http://www.linuxnix.com/find-ram-size-in-linuxunix/
http://www.itworld.com/article/2722141/it-management/making-sense-of-memory-usage-on-linux.html
http://www.linuxatemyram.com/

The gist of those sites is that you have more "free" RAM than you think.
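To see this on your own machine: newer versions of free print an "available" column, which is the figure that matters (on older versions, look at the "-/+ buffers/cache" line instead). And if you really want to drop the page cache for a benchmark, the drop_caches knob exists, though it is normally unnecessary because the kernel reclaims cache on demand. A rough sketch:

free -h                                   # look at "available", not "free"
grep MemAvailable /proc/meminfo

# only for benchmarking; this drops clean caches, it does not free application memory
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches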
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77985/" ] }
243,821
I want to ssh into a remote Ubuntu computer, source my .bashrc and run a command that depends on parameters set by that .bashrc . All that in an interactive shell that doesn't close after the command is done. What I tried until now is ssh user@remote_computer -t 'bash -l -c "my_alias;bash"' or just ssh user@remote_computer -t "my_alias;bash" This works for general commands (like ls for example) but when I try to run an alias defined in .bashrc I get an error: bash: my_alias: command not found But then when I write it manually again and run it, it works! So how can I make sure the the .bashrc is sourced before the command is called?
The problem is that you are trying to run an alias in a non-interactive shell. When you run ssh user@computer command , command is run non-interactively. Non-interactive shells don't read aliases (from man bash):

Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt (see the description of shopt under SHELL BUILTIN COMMANDS below).

It works if you run it again manually because the final bash command starts an interactive shell, so your aliases are now available. As an alternative, you could launch an interactive shell ( bash -i ) instead of a simple login shell ( bash -l ) on the remote machine to run your alias:

ssh user@remote_computer -t 'bash -ic "my_alias;bash"'

This seems a very complicated approach though. You haven't explained why exactly you need to do this, but consider these alternatives:

Just start a normal login interactive shell on the remote machine and run the command manually:

user@local $ ssh user@remote
user@remote $ my_alias

If you always want that alias to be run when you connect to this computer, edit the ~/.profile (or ~/.bash_profile , if present) of the remote computer and add this line at the end:

my_alias

Because ~/.profile is read each time a login shell is started (so, each time you connect via ssh , for example), that will cause my_alias to be run each time you connect.

Note that by default, login shells read ~/.profile or ~/.bash_profile and ignore ~/.bashrc . Some distributions (Debian and its derivatives like Ubuntu, and Arch, for example) have their default ~/.profile or ~/.bash_profile files source ~/.bashrc , which means that your aliases defined in ~/.bashrc will also be available in a login shell. This isn't true for all distributions, so you might have to edit your ~/.profile manually to have it source ~/.bashrc . Also note that if ~/.bash_profile exists, ~/.profile will be ignored by bash.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65817/" ] }
243,847
I'm running Archlinux. Recently, one of the package named icu got updated; however, Firefox depends libicuuc.so.56 , while R depends on libicuuc.so.55 . How can I solve this problem? Note: the R package was built against Intel MKL libiary, so it doesn't work on new version of dependencies. I tried to rebuild R -- it still depends on the old libicuuc.so.55
I assume you wish to run a specific executable with the old library. Let's call the executable myprogram . If you place libicuuc.so.55 in a different directory, for instance as /opt/oldlibs/libicuuc.so.55 , it is possible to instruct myprogram to use the old library with a command like this:

LD_LIBRARY_PATH=/opt/oldlibs myprogram

The library files can be extracted from the package file (that you can probably find in /var/cache/pacman/pkg ). If this does not solve the issue for how you intend to use the application, you can consider running it in a restricted environment (using chroot ) or in a container instead.
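If you end up doing this regularly, a small wrapper script keeps the variable out of your shell history. A sketch, assuming the binary is /usr/bin/R and the old libraries live in /opt/oldlibs (adjust both paths to your system):

#!/bin/sh
# prepend the directory holding the old ICU libraries, keep anything already set
LD_LIBRARY_PATH=/opt/oldlibs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
exec /usr/bin/R "$@"

Save it somewhere in your PATH (for example ~/bin/R-oldicu), make it executable with chmod +x, and run it instead of R.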
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243847", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92712/" ] }
243,853
I am trying to set JAVA_HOME so that I can install Apache Solr with the help of this tutorial . I am connected to my server using ssh with root user To allow the running sh script to install Apache Solr: mount | grep noexec Re-mounting file system with exec option: mount -o remount,exec /dev/md1 Then every time I try to install it using the following commands bin/install_solr_service.sh /tmp/solr-5.3.1.tgz I get the following message: WARNING: /opt/solr-5.3.1 already exists! Skipping extract ...Creating /etc/init.d/solr script ...The currently defined JAVA_HOME (/usr/local/jdk) refersto a location where Java could not be found. Aborting.Either fix the JAVA_HOME variable or remove it from theenvironment so that the system PATH will be searched.The currently defined JAVA_HOME (/usr/local/jdk) refersto a location where Java could not be found. Aborting.Either fix the JAVA_HOME variable or remove it from theenvironment so that the system PATH will be searched.Service solr installed. This is what I tried so far: nano /root/.bash_profile nano /etc/profile I added the following to the files above at the end and saved them export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64export PATH=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/bin:$PATH That didn't work. I created the following file /etc/profile.d/java.sh and put in it: export JRE_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/jre/export PATH=$PATH:$JRE_HOME/binexport JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64export JAVA_PATH=$JAVA_HOMEexport PATH=$PATH:$JAVA_HOME/bin And ran the following command: source java.sh That also didn't work. I tried to run the following command: export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64 No luck at all. But when a run the following commands that is what I get echo $JAVA_HOME/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64echo $PATH/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/bin:/usr/local/jdk/bin:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/jre//bin:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-0.b17.el6_7.x86_64/bin:/usr/local/bin:/usr/X11R6/bin:/root/bin
You want to point JAVA_HOME to the JRE directory, as in:

JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre/

If using bash, I recommend putting this in /etc/bashrc (RH based) or /etc/bash.bashrc (Debian based):

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:/bin/java::")
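A quick sanity check after setting it (just a suggestion; the exact path printed depends on which JDK or JRE your java alternative points to):

echo "$JAVA_HOME"
"$JAVA_HOME/bin/java" -version   # should print a version, not "No such file or directory"

If that second command fails, the installer script's complaint is justified: JAVA_HOME still points at a directory that does not contain a Java runtime.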
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39451/" ] }
243,876
I'm on Manjaro and today I woke up to find my computer having problems due to a full disk. I have deleted many things but this did not solve the issue. I have no idea what is happening. Is there a way to repartition fast? Cause I don't have a Gparted live CD at hand. Here is the full partition: But the filesystem is not full: Here is the output of df -ah (with virtual file systems omitted):

Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ManjaroVG-ManjaroRoot   29G   29G     0 100% /
/dev/sda1                          247M   56M  179M  24% /boot
/dev/mapper/ManjaroVG-ManjaroHome  550G  296G  227G  57% /home

Here is the output of df -i for the same partitions:

Filesystem                          Inodes  IUsed    IFree IUse% Mounted on
/dev/mapper/ManjaroVG-ManjaroRoot  1921360 441275  1480085   23% /
/dev/sda1                            65280    368    64912    1% /boot
/dev/mapper/ManjaroVG-ManjaroHome 36626432 320911 36305521    1% /home

As a result of my full partition, mysql fails to start. Here is the output of lsblk:

NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                         8:0    0 596.2G  0 disk
├─sda1                      8:1    0   255M  0 part /boot
└─sda2                      8:2    0 595.9G  0 part
  ├─ManjaroVG-ManjaroRoot 254:0    0  29.3G  0 lvm  /
  ├─ManjaroVG-ManjaroHome 254:1    0 558.9G  0 lvm  /home
  └─ManjaroVG-ManjaroSwap 254:2    0   7.8G  0 lvm  [SWAP]
sr0                        11:0    1  1024M  0 rom

And here is the output of du -shx /* (trimmed of extraneous entries):

54M   /boot
3.2G  /data
19M   /etc
296G  /home
4.0K  /media
4.0K  /mnt
1.1G  /opt
79M   /root
1.1M  /run
16K   /srv
28K   /tmp
7.6G  /usr
14G   /var

Drilling down into /var/ shows the big disk space users are:

9.0G  /var/cache
4.8G  /var/lib
Your partition /dev/sda2 shows up as "full" because it is entirely allocated to LVM, which is managing your / and /home partitions. We don't need to look directly at /dev/sda2 as a result, but rather your LVM configuration. We can see from your lsblk output:

└─sda2                      8:2    0 595.9G  0 part
  ├─ManjaroVG-ManjaroRoot 254:0    0  29.3G  0 lvm  /
  ├─ManjaroVG-ManjaroHome 254:1    0 558.9G  0 lvm  /home
  └─ManjaroVG-ManjaroSwap 254:2    0   7.8G  0 lvm  [SWAP]

that it is likely your entire LVM is allocated to ManjaroRoot , ManjaroHome and ManjaroSwap . This means that growing your partitions is not an option without either first adding a new LVM PV or shrinking an existing LVM LV (not a straightforward task). However, those options are treating the symptom and not the problem.

Your problem is that / on the device /dev/mapper/ManjaroVG-ManjaroRoot is full. Your /home partition is not full and is not relevant to your problem. We can see from your du output that the largest disk usage under / are:

3.2G  /data
1.1G  /opt
7.6G  /usr
14G   /var

The usages for /data , /opt and /usr look reasonable but the outlier is /var which is using a ton of space. Some updated information from you in chat shows this isn't a log issue as I suspected but rather a package cache issue with the pacman package cache. You can clean out old files from the cache with the command:

pacman -Sc

You can read more about cleaning the package cache on the Arch wiki.
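It is worth measuring how big the cache actually is before and after cleaning. A sketch (the paccache tool ships with pacman or the pacman-contrib package depending on version, so treat that line as optional):

du -sh /var/cache/pacman/pkg   # size of the package cache
sudo pacman -Sc                # remove cached packages that are no longer installed
# or keep the two most recent versions of each package instead of wiping everything:
sudo paccache -rk2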
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15433/" ] }
243,882
I am in need of simulating a high latency and low bandwidth connection for a performance test of my application. I have gone through a number of pages describing the tc command. But, I haven't been able to validate the numbers that I set. For example: I took the following command values from: https://www.excentis.com/blog/use-linux-traffic-control-impairment-node-test-environment-part-2 tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms With that applied on (say, machine A), according to the description on the page, I am assuming my output rate should be 128 kBps (at least approximately). To test this, I started transferring a 2 GB file using scp from machine A to another machine "B" which are in the same LAN. Transfer rates without any added impairment reach up to 12 MBps in this network. But, when the transfer started the rate was at 2 MBps, then it kept stalling and falling down until when it started to swing and stall between 11 kBps and 24 kBps. I used nmon to monitor network throughput on both sides during the transfer, but it never went above 24 kBps (except for a couple of values reading 54 and 62). I have also tried increasing the rate and bucket size, but the behavior during scp is the same. I tried the following command to increase the bucket size and the rate: tc qdisc add dev eth0 root tbf rate 1024kbps burst 1024kb latency 500 And scp still stalled and swung around the same rates (11-30 kBps). Am I inferring the term "rate" wrong here? I have looked at the man page for tc and it appears that my interpretation is correct. Could anyone explain to me what would be the best way to test the set parameters (assuming I did it correctly)?
Your partition /dev/sda2 shows up as "full" because it is entirely allocated to LVM, which is managing your / and /home partitions. We don't need to look directly at /dev/sda2 as a result, but rather your LVM configuration. We can see from your lsblk output: └─sda2 8:2 0 595.9G 0 part ├─ManjaroVG-ManjaroRoot 254:0 0 29.3G 0 lvm / ├─ManjaroVG-ManjaroHome 254:1 0 558.9G 0 lvm /home └─ManjaroVG-ManjaroSwap 254:2 0 7.8G 0 lvm [SWAP] that it is likely your entire LVM is allocated to ManjaroRoot , ManjaroHome and ManjaroSwap . This means that growing your partitions is not an option without either first adding a new LVM PV or shrinking an existing LVM LV (not a straightforward task). However, those options are treating the symptom and not the problem. Your problem is that / on the device /dev/mapper/ManjaroVG-ManjaroRoot is full. Your /home partition is not full and is not relevant to your problem. We can see from your du output that the largest disk usage under / are: 3.2G /data1.1G /opt7.6G /usr14G /var The usages for /data , /opt and /usr look reasonable but the outlier is /var which is using a ton of space. Some updated information from you in chat shows this isn't a log issue as I suspected but rather a package cache issue with the pacman package cache. You can clean out old files from the cache with the command: pacman -Sc You can read more about cleaning the package cache on the Arch wiki.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/243882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143761/" ] }
243,996
I would like to know if, for instance, \{x,y\} in sed will try to match as much or as little as possible characters. Also, can someone explain to me the bellow unexpected behaviour of sed ? echo "baaab" | sed 's/a\{1,2\}//'babecho "baaab" | sed 's/a\{0,2\}//'baaab In the first line, sed becomes greedy, in the second apparently it doesn't, is there a reason for that? I'm using GNU sed version 4.2.1.
a\{0,2\} will match the empty string at the start of the line (actually, any empty string, but g wasn't specified):

$ echo "baaab" | sed 's/a\{0,2\}/y/'
ybaaab

Since GNU sed does matching from left to right, and a global replacement wasn't specified, only the start of the line matched. If you'd used g :

$ echo "baaab" | sed 's/a\{0,2\}/y/g'
ybyyby

The empty strings at the start and end matched, and the aa , and the remaining a .
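As for the first part of the question: the interval is greedy, it simply cannot reach past the position where the match starts. A small illustration (my own addition, same input):

$ echo "baaab" | sed 's/a\{1,2\}/y/g'
byyb

The first replacement consumes two a's (as many as the interval allows), leaving only one a for the second replacement, which is exactly what greedy matching predicts.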
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139259/" ] }
243,997
I have a file with long words without spaces, many lines. file.txt: data-number="210615"....... ....1280654445itemitemURLhttps://site.site.com/user-user/fooo/210615/file.name.jpg?1280654445name.................. #!/bin/bashfind_number=$(grep -Po 'data-number="\K[^"]*' file.txt)get-url= (copy from "https" to "fooo/" and add variable $find_number and add from "/" to end "jpg"maybe : get-url=("https*,*fooo/",$find-number,"/*.jpg") this is work or other idea?echo $get-url > result.txt result.txt: https://site.site.com/user-user/fooo/210615/file.name.jpg
a\{0,2\} will match the empty string at the start of the line (actually, any empty string, but g wasn't specified): $ echo "baaab" | sed 's/a\{0,2\}/y/' ybaaab Since GNU sed does matching from left to right, and a global replacement wasn't specified, only the start of the line matched. If you'd used g : $ echo "baaab" | sed 's/a\{0,2\}/y/g'ybyyby The empty strings at the start and end matched, and the aa , and the remaining a .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/243997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143276/" ] }
244,012
this is a script I'm working on for my Linux class. I would like to add a while loop to rerun the case statement =. any help will be greatly appreciated. Here is my script #!/bin/bashDATE=$(date -d "$1" +"%m_%d_%Y");clearecho -n " Have you finished everything?"read responseif [ $response = "Y" ] || [ $response = "y" ]; then echo "Do you want a cookie?"exit elif [ $response = "N" ] || [ $response = "n" ]; then echo "1 - Update Linux debs"echo "2 - Upgrade Linux"echo "3 - Backup your Home directory"read answer case $answer in 1) echo "Updating!" sudo apt-get update;;2) echo "Upgrading!" sudo apt-get upgrade;;3) echo "Backing up!" tar -cvf backup_on_$DATE.tar /home;;esacecho "Would you like to choose another option?"read conditionfi
a\{0,2\} will match the empty string at the start of the line (actually, any empty string, but g wasn't specified): $ echo "baaab" | sed 's/a\{0,2\}/y/' ybaaab Since GNU sed does matching from left to right, and a global replacement wasn't specified, only the start of the line matched. If you'd used g : $ echo "baaab" | sed 's/a\{0,2\}/y/g'ybyyby The empty strings at the start and end matched, and the aa , and the remaining a .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143854/" ] }
244,026
I was trying to write a script that looped through each xml file in a directory and run make NAME= where NM was the filename minus the .xml , the place where I got stuck was assigning the {} placeholder to a variable. As find . -iname "*.xml" -exec foo=$(echo {}); gmake NAME=$FOO \; does not work as nothing is assigned to $FOO .
After much searching on IRC someone pointed me to the following answer:

find . -iname "*.xml" -exec bash -c 'echo "$1"' bash {} \;

or for my example (with the string cut removed to save confusion):

find . -iname "*.xml" -exec bash -c 'gmake NAME="$1"' bash {} \;

The way this works is bash takes the parameters after -c as arguments; bash {} is needed so that the contents of {} is assigned to $1 not $0 , and bash is used to fill in $0 . It's not only a placeholder, as the contents of that $0 is used in error messages for instance, so you don't want to use things like _ or '' .

To process more than one file per invocation of bash , you can do:

find . -iname "*.xml" -exec bash -c '
  ret=0
  for file
  do
    gmake NAME="$file" || ret=$?
  done
  exit "$ret"' bash {} +

That one has the added benefit that if any of the gmake invocations fails, it will be reported in find 's exit status.

More info can be taken from http://mywiki.wooledge.org/UsingFind#Complex_actions
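Note that these pass the whole ./path/file.xml to gmake. Since the original goal was NAME set to the filename minus .xml, one possible sketch (my addition, untested against the actual Makefile) strips the suffix inside the inner shell:

find . -iname "*.xml" -exec bash -c '
  for file
  do
    name=$(basename "$file" .xml)   # drop the directory part and the .xml suffix
    gmake NAME="$name"
  done' bash {} +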
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67477/" ] }
244,040
I have few sed command: to extract relevant information My file sample.log (format is ncsa.log) looks like: 2012_04_01_filename.log:29874:192.168.1.12 - - [16/Aug/2012:12:54:21 +0000] "GET /cxf/myservice01/v1/abc?anyparam=anything&anotherone=another HTTP/1.1" 200 3224 "-" "client name"2012_04_01_filename.log:29874:192.168.1.12 - - [16/Aug/2012:12:54:25 +0000] "GET /cxf/myservice02/v1/XYZ?anyparam=anything&anotherone=another HTTP/1.1" 200 3224 "-" "client name"2012_04_01_filename.log:29874:192.168.1.12 - - [16/Aug/2012:12:56:52 +0000] "GET /cxf/myservice01/v1/rsv/USER02?anyparam=anything&anotherone=another HTTP/1.1" 200 6456 "-" "client name"2012_04_01_filename.log:29874:192.168.1.12 - - [16/Aug/2012:12:58:52 +0000] "GET /cxf/myservice01/v2/upr/USER01?anyparam=anything&anotherone=another HTTP/1.1" 200 2424 "-" "client name"2012_04_01_filename.log:29874:192.168.1.12 - - [16/Aug/2012:12:59:11 +0000] "GET /cxf/myservice02/v1/xyz?anyparam=anything&anotherone=another HTTP/1.1" 200 233 "-" "client name" This set of piped sed's are extracting the url details I need (first sed: \1 = date in YYYY-MM-DD, \2 = service0x, \3 = trigram, \4 = optionnal entity id, \5 = HTTP response code, \6 = http response size) more sample.log | sed -r 's#^(...._.._..)_.*/cxf/(myservice.*)/v./(.{3})[/]*([a-Z0-9]*)?.*\sHTTP/1.1.\s(.{3})\s([0-9]*)\s.*#\1;\2;\L\3;\E\4;\5;\6#g' | sed -r 's!(.*;.*;.{3};)[a-Z0-9]+(;.*;.*)!\1retrieve\2!g' | sed -r 's!(.*);;(.*)!\1;list;\2!g' > request-by-operation.txt The result needed is the following: 2012_04_01;myservice01;abc;list;200;32242012_04_01;myservice02;xyz;list;200;32242012_04_01;myservice01;rsv;retrieve;200;64562012_04_01;myservice01;upr;retrieve;200;24242012_04_01;myservice02;xyz;list;200;233 I did not find another way to convert the list and retrieve operation than using two other sed's piped (that does the job). I heard sed does not supports commands in the replacement part (on a specific group) something like #\1;\2;\L\3;\Eifnull(\4, "list", "retrieve");\5;\6# but I am wondering if I can still do it another way using only one sed command.
After much searching on IRC someone pointed me to the following answer find . -iname "*.xml" -exec bash -c 'echo "$1"' bash {} \; or for my example (with the string cut removed to save confusion) find . -iname "*.xml" -exec bash -c 'gmake NAME="$1"' bash {} \; The way this works is bash takes the parameters after -c as arguments, bash {} is needed so that the contents of {} is assigned to $1 not $0 , and bash is used to fill in $0 . It's not only a placeholder as the contents of that $0 is used in error messages for instance so you don't want to use things like _ or '' To process more that one file per invocation of bash , you can do: find . -iname "*.xml" -exec bash -c ' ret=0 for file do gmake NAME="$file" || ret=$? done exit "$ret"' bash {} + That one has the added benefit that if any of the gmake invocations fails, it will reported in find 's exit status. More info can be taken from http://mywiki.wooledge.org/UsingFind#Complex_actions
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41302/" ] }
244,064
While setting up a sudo environment I noticed that the include directive is prefixed with the pound (#) character. Solaris shows this as: ## Read drop-in files from /etc/sudoers.d## (the '#' here does not indicate a comment)#includedir /etc/sudoers.d The manual (Linux as well as Solaris) states: Including other files from within sudoers It is possible to include other sudoers files from within the sudoers file currently being parsed using the #include and #includedir directives. And: Other special characters and reserved words The pound sign (`#') is used to indicate a comment (unless it is part of a #include directive or unless it occurs in the context of a user name and is followed by one or more digits, in which case it is treated as a uid). Both the comment character and any text after it, up to the end of the line, are ignored. Does anybody knows why the choice was made to use the pound character in the #include and #includedir directives? As a side note: I often use something like egrep -v '^#|^$' configfile to get the non-default/active configured settings, and this obviously does not work for the sudoers file.
#include was added in 2004 . It had to be compatible with what was already there. I don't think include /path/to/file would have been ambiguous, though, but it might have been a little harder to parse, because the parser would have to distinguish include /path/to/file (include directive) from include = foo (allow the user include to run the command foo ). But I think mostly the reason was to look like the C preprocessor, which the manual explicitly cites as inspiration.
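Regarding the side note about egrep -v '^#|^$' : one rough workaround is to keep lines that either don't start with # or that start with #include / #includedir, for example:

grep -E '^[^#[:space:]]|^#include' /etc/sudoers

It is only an approximation (a comment that happens to begin with "#include" would slip through, and indented entries are skipped), but it covers the common case.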
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/244064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107263/" ] }
244,103
Wikipedia says that current Debian 8.2 Jessie is based on kernel3.16.0, so I was wondering when a native version based on kernel 4.xwould be released and if the live kernel patching will be there asfeature with the 4.x. I searched for a Debian roadmap on Google, but I found nothing aboutthe kernel.
While it is still not a given, officially most probably the last quarter of 2016, with the release of Debian 9. In the meanwhile, you can start using testing, compile it yourself, or use a version compiled by someone else. I am using armbian on a Raspberry Pi-like device (Lamobo R1), which is Jessie, with a v4.x kernel put together by the armbian guys. On my Intel servers at work I plan to go soon to v4 too with Debian 8.
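If you would rather stay on stock Debian 8, the jessie-backports repository is one way to get a newer kernel without leaving stable. Roughly (check the exact package name available at the time you run this):

echo 'deb http://ftp.debian.org/debian jessie-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
sudo apt-get -t jessie-backports install linux-image-amd64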
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
244,169
I performed a fresh clone and copied/pasted a working directory into the cloned directory. Now have a list of changed files: $ git status --short | grep -v "??" | cut -d " " -f 3 GNUmakefileReadme.txtbase32.hbase64.h... When I try to get Git to add them, it results in an error (I don't care about adding 1 at a time): $ git status --short | grep -v "??" | cut -d " " -f 3 | git addNothing specified, nothing added.Maybe you wanted to say 'git add .'? Adding the - : $ git status --short | grep -v "??" | cut -d " " -f 3 | git add -fatal: pathspec '-' did not match any files And -- : $ git status --short | grep -v "??" | cut -d " " -f 3 | git add --Nothing specified, nothing added.Maybe you wanted to say 'git add .'? Trying to use interactive from the man page appears to have made a greater mess of things: $ git status --short | grep -v "??" | cut -d " " -f 3 | git add -i staged unstaged path 1: unchanged +1/-1 GNUmakefile 2: unchanged +11/-11 Readme.txt ...*** Commands *** 1: status 2: update 3: revert 4: add untracked 5: patch 6: diff 7: quit 8: helpHuh (GNUmakefile)?What now> *** Commands *** 1: status 2: update 3: revert 4: add untracked 5: patch 6: diff 7: quit 8: helpHuh (Readme.txt)? (I've already deleted the directory Git made a mess of, so I'm not trying to solve that issue). How do I tell Git to add the files piped into it?
git add is expecting the files to be listed as arguments, not piped into stdin . Try either

git status --short | grep -v "??" | cut -d " " -f 3 | xargs git add

or

for file in $(git status --short | grep -v "??" | cut -d " " -f 3); do
    git add $file
done
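If any of the paths can contain spaces, the word-splitting in both versions breaks. In that case, an alternative I would reach for (not part of the original answer) is to let git do the selection itself:

git add -u                                    # stage every modification/deletion to tracked files
# or, equivalently, NUL-separated so unusual filenames survive:
git diff --name-only -z | xargs -0 git add --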
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
244,185
After removing a bay mounted SATA connected drive the kernel will most of the time remove the mount. However, sometimes the mount remains even though the disk has been removed. Is there a way to avoid this?
git add is expecting the files to be listed as arguments, not piped into stdin . Try either git status --short | grep -v "??" | cut -d " " -f 3 | xargs git add or for file in $(git status --short | grep -v "??" | cut -d " " -f 3); do git add $file;done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140222/" ] }
244,202
In my job, I support remote employees running Linux Mint via SSH and VNC. Each employee uses a USB headset, which is the only sound device we want to be active. The sound device we need to disable is the "Built-in Audio" device, and if I open up a terminal on the employee's desktop, I can check whether the device is disabled by running pacmd list-sinks | grep "Built-in Audio" . This command also works over SSH if I login with the employee's username and password, but if I try to SSH with our admin "IT" username, it gives me the error " No PulseAudio daemon running, or not running as session daemon. " Help! For security, I don't have the local passwords of each employee, but I can't seem to check whether Built-In Audio is active when I SSH via my IT username, even when I elevate IT to root privileges with su . I've tried using su - [employee] and then accessing the local display with the command export DISPLAY=:0 , but that didn't allow me to check the sound devices either. :(
git add is expecting the files to be listed as arguments, not piped into stdin . Try either git status --short | grep -v "??" | cut -d " " -f 3 | xargs git add or for file in $(git status --short | grep -v "??" | cut -d " " -f 3); do git add $file;done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111382/" ] }
244,205
I have a fresh install of Centos 7. I cannot seem to auto mount an NFS share located on 192.168.254.105:/srv/nfsshare from the Centos client. Mounting the share manually however, works perfectly. /etc/auto.master has been commented out completely to simplify the problem, save for the following line: /- /etc/auto.nfsshare /etc/auto.nfsshare holds the following line: /tests/nfsshare -fstype=nfs,credentials=/etc/credentials.txt 192.168.254.105:/srv/nfsshare /etc/credentials.txt holds: user=user password=password The expected behavior is that when I ls -l /tests/nfsshare , I will see a few files that my fileserver's /srv/nfsshare directory holds. It does not. Instead, it shows nothing. The logs from sudo journalctl --unit=autofs.service shows this when it starts (debug enabled): Nov 20 00:25:38 localhost.localdomain systemd[1]: Starting Automounts filesystems on demand... Nov 20 00:25:38 localhost.localdomain automount[21204]: Starting automounter version 5.0.7-48.el7, master map auto.master Nov 20 00:25:38 localhost.localdomain automount[21204]: using kernel protocol version 5.02 Nov 20 00:25:38 localhost.localdomain automount[21204]: lookup_nss_read_master: reading master files auto.master Nov 20 00:25:38 localhost.localdomain automount[21204]: parse_init: parse(sun): init gathered global options: (null) Nov 20 00:25:38 localhost.localdomain automount[21204]: spawn_mount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount[21204]: spawn_umount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount[21204]: lookup_read_master: lookup(file): read entry /- Nov 20 00:25:38 localhost.localdomain automount[21204]: master_do_mount: mounting /- Nov 20 00:25:38 localhost.localdomain automount[21204]: automount_path_to_fifo: fifo name /run/autofs.fifo-- Nov 20 00:25:38 localhost.localdomain automount[21204]: lookup_nss_read_map: reading map file /etc/auto.nfsshare Nov 20 00:25:38 localhost.localdomain automount[21204]: parse_init: parse(sun): init gathered global options: (null) Nov 20 00:25:38 localhost.localdomain automount[21204]: spawn_mount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount[21204]: spawn_umount: mtab link detected, passing -n to mount Nov 20 00:25:38 localhost.localdomain automount[21204]: mounted direct on /tests/nfsshare with timeout 300, freq 75 seconds Nov 20 00:25:38 localhost.localdomain automount[21204]: do_mount_autofs_direct: mounted trigger /tests/nfsshare Nov 20 00:25:38 localhost.localdomain automount[21204]: st_ready: st_ready(): state = 0 path /- Nov 20 00:25:38 localhost.localdomain systemd[1]: Started Automounts filesystems on demand. 
The following appears in my logs when I attempt to force mounting of the nfs share via ls -l /tests/nfsshare : Nov 20 00:48:05 localhost.localdomain automount[22030]: handle_packet: type = 5 Nov 20 00:48:05 localhost.localdomain automount[22030]: handle_packet_missing_direct: token 21, name /tests/nfsshare, request pid 22057 Nov 20 00:48:05 localhost.localdomain automount[22030]: attempting to mount entry /tests/nfsshare Nov 20 00:48:05 localhost.localdomain automount[22030]: lookup_mount: lookup(file): looking up /tests/nfsshare Nov 20 00:48:05 localhost.localdomain automount[22030]: lookup_mount: lookup(file): /tests/nfsshare -> -fstype=nfs,credentials=/etc/credenti...fsshare Nov 20 00:48:05 localhost.localdomain automount[22030]: parse_mount: parse(sun): expanded entry: -fstype=nfs,credentials=/etc/credentials.tx...fsshare Nov 20 00:48:05 localhost.localdomain automount[22030]: parse_mount: parse(sun): gathered options: fstype=nfs,credentials=/etc/credentials.txt Nov 20 00:48:05 localhost.localdomain automount[22030]: [90B blob data] Nov 20 00:48:05 localhost.localdomain automount[22030]: dev_ioctl_send_fail: token = 21 Nov 20 00:48:05 localhost.localdomain automount[22030]: failed to mount /tests/nfsshare Nov 20 00:48:05 localhost.localdomain automount[22030]: handle_packet: type = 5 Nov 20 00:48:05 localhost.localdomain automount[22030]: handle_packet_missing_direct: token 22, name /tests/nfsshare, request pid 22057 Nov 20 00:48:05 localhost.localdomain automount[22030]: dev_ioctl_send_fail: token = 22 Additionally, ls -l /tests/nfsshare actually produces the error: ls: cannot access nfsshare/: No such file or directory How can I fix this issue? As stated before, manual mounting the share works fine. EDIT: as requested, output of ls -la /etc/auto.nfsshare -rw-r--r--. 1 root root 99 Nov 20 00:25 /etc/auto.nfsshare
git add is expecting the files to be listed as arguments, not piped into stdin . Try either git status --short | grep -v "??" | cut -d " " -f 3 | xargs git add or for file in $(git status --short | grep -v "??" | cut -d " " -f 3); do git add $file;done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64656/" ] }
244,233
Below is some sort of pseudo-code for what I'm trying to accomplish: #!/bin/bash# I already have the variable below figured out (positive integer):numlines=$([returns number of lines containing specific characters in a file])# This is basically what I want to do with it:for i in {1..$numlines}; do # the part below is already figured out as well: do some other stuffdone I can execute it fine from the command line by inserting the actual number in the `{1..n}' sequence. I just need to know if it's possible to include a variable here and how to go about doing it. I have tried export ing it I have tried putting the variable itself in curly braces inside the sequence: {1..${numlines}} I have tried putting it in double-quotes hoping it would expand: {1.."$numlines"} I have tried escaping the $ : {1..\$numlines} Do I need to use a set -[something] command in order for this variable to get expanded? I have even tried some forms of using eval ...all to no avail. I just need to know if there is something simple or obscure that I am missing or if this is even possible before I waste anymore time on it. I could throw together a really, really hackish way of doing it to make it work as needed, but I'd like to avoid that if at all possible and learn the right way to go about doing it.
Unfortunately, there is no way to use a variable in that expansion (AFAIK), since variable expansion happens after brace expansion. Fortunately, there's a tool that does the same job.

for i in $(seq 1 $numlines); do
    # stuff
done

seq is from GNU coreutils; no idea how to do it in POSIX.
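Two alternatives that avoid the external seq call: a bash arithmetic for-loop, and a plain counter loop that should also work in POSIX sh. Both are sketches with the same effect as the seq version:

# bash
for (( i = 1; i <= numlines; i++ )); do
    # stuff
    :
done

# POSIX sh
i=1
while [ "$i" -le "$numlines" ]; do
    # stuff
    i=$((i + 1))
done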
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135610/" ] }
244,250
If you call GPG without input, it just says gpg: Go ahead and type your message ... You can enter text and everything, but how do you end the input? I've seen something like this in multiple different programs, but I've never known how.
You need to input EOF (End Of File). Do this with CTRL + D (or more generally, ^D ).
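If the goal is to avoid the interactive prompt altogether, you can also feed gpg from a pipe or a file instead of typing and ending with ^D. A couple of sketches (the recipient address is just a placeholder):

echo "my message" | gpg --armor --encrypt -r alice@example.com
gpg --armor --encrypt -r alice@example.com < message.txt > message.txt.asc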
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59616/" ] }
244,297
I have 3 users A , B and C inside a group 'admin'. I have another user ' D ' in whose home directory, there is a project folder. I have made D as the owner of that folder and assigned 'admin' as the group using chgrp . Group and owners have all the permissions, but still A , B or C are unable to access the folder. I have two question : Is it even possible for other users to access anything in another user's directory Giving rights to a group only makes the users in that group have access to files that are outside any user's home directory ? Edit : Here is how I had set the owner and group of the project sudo chown -R D projectsudo chgrp -R admin project I got an error while trying to get into the project folder within D 's home directory (while being logged in as A ) cd /home/D/project-bash: cd: /home/D/project: Permission denied Here is the output of ls -la command : drwxrwsr-x 7 D admin 4096 Nov 18 13:06 project Here is the description of the group admin : getent group adminadmin_users:x:501:A,B,C Also note that group admin is not being listed when I type groups from the user D , but was visible when I used cut -d: -f1 /etc/group . The user I am referring to as D is actually ec2-user (the default Fedora user on Amazon servers) Ultimately, I'm setting up a git repository on a server. I have created the repo in D 's home directory, but wish A , B and C to have access to it too (and clone them)
Yes, it is possible for users to access files in another user's home directory. No, there is no special treatment of home directories outside the system file permissions.

"Giving rights to a group" involves two parts: granting group ownership, and setting group permissions. Your example deals only with ownership. You can set permissions recursively with the chmod command, but that alone may not guarantee that new files created by one member of the group will be accessible to the others.

Thank you for the account of what you've done so far, with adequate details, and for stating your goal to share git repositories. Let's first look at how to accomplish group file sharing on Unix and Linux in a general way, and then look at some considerations for git.

Basic Group File Sharing Configuration

Here are some basics of this type of file sharing in Unix and GNU, etc. This is a simple way to set up one or more directories where all members of a group have read and write permissions on files created by other users in the group. Let's assume your common user group will be gitusers and the directory is repo.

Put sharing members into a common group (for example gitusers).

Set umask 002 for all of the users in the group, indicating that most files will be created group-writable by default. This may be set in various shell startup files (such as /etc/bash.bashrc ). On GNU/Linux systems see man 8 pam_umask and man umask for more and better information.

Set group ownership recursively ( -R ) on all shared files:

chgrp -R gitusers repo

Set group read and write permissions recursively on all files:

chmod -R g+rw repo

Set the set group id on execution bit ( setgid ) of repo and all subdirectories. The setgid bit indicates that newly created files and subdirectories in that directory inherit the same group ownership as the parent directory. New subdirectories also have the setgid bit set, for a recursive effect:

find repo -type d -exec chmod g+s {} \;

After the above steps, you should be well on your way for all users in the same group (gitusers in the example) to read and write each other's files, while disallowing write permission for users not in the gitusers sharing group. You can replicate the chgrp , chmod and find commands on as many directories as you wish, for any sharing purpose you wish, not just git.

Sharing git Repositories

It appears that git may work OK with only the above configuration changes. However, git also knows about sharing and group permissions (and may actually do some of the above for you). If you're creating a shared git repository, consider these options:

git init --bare --shared=group repo

If you have an existing repository, consider these settings:

core.bare
core.sharedRepository=group

See the git-init man page and the git-config man page for more details.

Notes

While a bare, group-shared git repository will take care of some of the same steps as in the general example, I would recommend performing the steps in the general example as well to ensure consistency and also ease in setting up any other shared directories.
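Put together, the one-time setup might look roughly like this (run as root; the user and group names are the ones assumed above, so substitute your actual A/B/C/D accounts):

groupadd gitusers                        # skip if the group already exists
usermod -aG gitusers A                   # repeat for B, C and D
chgrp -R gitusers repo
chmod -R g+rw repo
find repo -type d -exec chmod g+s {} \;

Remember that usermod -aG only takes effect on each user's next login.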
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144069/" ] }
244,323
When I check my system's environment, a lot of environment variables pop up. How can I search for just one particular variable? A book I'm reading says: "Sometimes the number of variables in your environment grows quite large, so much so that you don't want to see all of the values displayed when you are interested in just one. If this is the case, you can use the echo command to show an environment variable's current value." How do I do this in a Linux terminal?
Just: echo "$VARIABLENAME" For example for the environment variable $HOME , use: echo "$HOME" Which then prints something similar to: /home/username Edit : according to the comment of Stéphane Chazelas , it may be better if you use printenv instead of echo : printenv HOME
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/244323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124780/" ] }
244,343
I am trying to debug a vagrant- or VirtualBox related problem (see taiga-vagrant fails to provide a working taiga environment #21 ). The command VAGRANT_LOG=debug vagrant up --debug prints out plenty of, probably, useful information. Where is this log stored however? Edit: VAGRANT_LOG=debug vagrant up is actually the same as vagrant up --debug . I work on/with: Funtoo-Linux, Vagrant 1.4.3 and VirtualBox 4.3.32.
Vagrant does not keep any logs. The output of, for example vagrant up --debug , can be redirected to a file like vagrant up --provision --debug &> debug_log Fragment from an IRC session in #vagrant at Freenode: [18:29] <NikosA> ada: really, where is the debug "file" stored, by default? Isn't there any? [18:29] <ada> vagrant does not write to log files .. [18:29] <ada> virtualbox does [18:29] <NikosA> I am even trying to >> debug_log in the command line, and it simply does not keep any of these valuable details. [18:29] <NikosA> ok, so I'd check for the VBox logs? [18:30] <ada> if you're using the vbox provider, yes .. [18:31] <dtrainor> NikosA, redirect the output with &>
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13011/" ] }
244,356
I am trying to do something like ls -t | head -n 3 | xargs -I {} tar -cf t.tar {} to archive the 3 most recently modified files, but it ends up running the tar command separately for each of the files, and at the end I am left with one tar file containing only the last of the 3 files (in whatever order). I know I am not using xargs correctly, but searching did not help; the examples I find do not work either. Even the simpler command ls | xargs -I {} tar -cf t.tar {} ends up with a tar file that contains only one of the files in that directory.
ls -t | head -n 3 | xargs tar -cf t.tar Works for me. Is there a reason you need the -I flag set?
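The reason the original command kept overwriting the archive is that -I implies one command invocation per input line, so each run recreated t.tar with a single file; dropping -I lets xargs hand all three names to a single tar call. If the file names may contain spaces, a slightly more defensive sketch (assuming GNU xargs; it still won't cope with newlines in names) is to make the names NUL-delimited:
ls -t | head -n 3 | tr '\n' '\0' | xargs -0 tar -cf t.tar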
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244356", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90815/" ] }
244,367
This is a rather low-level question, and I understand that it might not be the best place to ask. But, it seemed more appropriate than any other SE site, so here goes. I know that on the Linux filesystem, some files actually exist , for example: /usr/bin/bash is one that exists. However, (as far as I understand it), some also don't actually exist as such and are more virtual files, eg: /dev/sda , /proc/cpuinfo , etc. My questions are (they are two, but too closely related to be separate questions): How does the Linux kernel work out whether these files are real (and therefore read them from the disk) or not when a read command (or such) is issued? If the file isn't real: as an example, a read from /dev/random will return random data, and a read from /dev/null will return EOF . How does it work out what data to read from this virtual file (and therefore what to do when/if data written to the virtual file too) - is there some kind of map with pointers to separate read/write commands appropriate for each file, or even for the virtual directory itself? So, an entry for /dev/null could simply return an EOF .
So there are basically two different types of thing here: Normal filesystems, which hold files in directories with data and metadata, in the familiar manner (including soft links, hard links, and so on). These are often, but not always, backed by a block device for persistent storage (a tmpfs lives in RAM only, but is otherwise identical to a normal filesystem). The semantics of these are familiar; read, write, rename, and so forth, all work the way you expect them to. Virtual filesystems, of various kinds. /proc and /sys are examples here, as are FUSE custom filesystems like sshfs or ifuse . There's much more diversity in these, because really they just refer to a filesystem with semantics that are in some sense 'custom'. Thus, when you read from a file under /proc , you aren't actually accessing a specific piece of data that's been stored by something else writing it earlier, as under a normal filesystem. You're essentially doing a kernel call, requesting some information that's generated on-the-fly. And this code can do anything it likes, since it's just some function somewhere implementing read semantics. Thus, you have the weird behavior of files under /proc , like for instance pretending to be symlinks when they aren't really. The key is that /dev is actually, usually, one of the first kind. It's normal in modern distributions to have /dev be something like a tmpfs, but in older systems, it was normal to have it be a plain directory on disk, without any special attributes. The key is that the files under /dev are device nodes, a type of special file similar to FIFOs or Unix sockets; a device node has a major and minor number, and reading or writing them is doing a call to a kernel driver, much like reading or writing a FIFO is calling the kernel to buffer your output in a pipe. This driver can do whatever it wants, but it usually touches hardware somehow, e.g. to access a hard disk or play sound in the speakers. To answer the original questions: There are two questions relevant to whether the 'file exists' or not; these are whether the device node file literally exists, and whether the kernel code backing it is meaningful. The former is resolved just like anything on a normal filesystem. Modern systems use udev or something like it to watch for hardware events and automatically create and destroy the device nodes under /dev accordingly. But older systems, or light custom builds, can just have all their device nodes literally on the disk, created ahead of time. Meanwhile, when you read these files, you're doing a call to kernel code which is determined by the major and minor device numbers; if these aren't reasonable (for instance, you're trying to read a block device that doesn't exist), you'll just get some kind of I/O error. The way it works out what kernel code to call for which device file varies. For virtual filesystems like /proc , they implement their own read and write functions; the kernel just calls that code depending on which mount point it's in, and the filesystem implementation takes care of the rest. For device files, it's dispatched based on the major and minor device numbers.
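You can see the major and minor numbers mentioned above in ls -l output, and (as root) even recreate a node by hand with mknod. Sample output for illustration only; the numbers shown are the conventional Linux ones:
$ ls -l /dev/null /dev/sda
crw-rw-rw- 1 root root 1, 3 Nov 20 10:00 /dev/null
brw-rw---- 1 root disk 8, 0 Nov 20 10:00 /dev/sda
# leading "c" = character device, "b" = block device; "1, 3" = major, minor
# recreate /dev/null if it were ever deleted:
# mknod /dev/null c 1 3 && chmod 666 /dev/null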
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/244367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142558/" ] }
244,372
I have tried doing: echo " mv /server/today/logfile1 /nfs/logs/ && gzip /nfs/logs/logfile1" | sed 's|logfile1|logfile2|g' It printed: mv /server/today/logfile2 /nfs/logs/ && gzip /nfs/logs/logfile2 which is a bash command. How can I make it get executed, instead of just printing it?
You could pipe your command into a shell so it gets executed: echo "mv ..." | bash Or you could pass it as an argument to a shell: bash -c "$(echo "mv ...")" Or you could use the bash built-in eval : eval "$(echo "mv ...")" Note, however, that all of those code-generating commands look a bit brittle to me (there are ways they will fail as soon as some of the paths contain spaces, etc.).
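Often you can avoid generating shell code as text at all, which sidesteps the quoting problems mentioned above. A sketch of the same move-and-compress step done with plain variables (the paths are just the ones from the question):
src=/server/today/logfile2
dest=/nfs/logs
mv "$src" "$dest"/ && gzip "$dest/$(basename "$src")"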
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244372", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19072/" ] }
244,381
I want to delete 10 random lines from a text file that has 90 lines and then output this to a new file. I've been trying to do this using sed, but I have two problems. I'm using: sed -i $((1 + RANDOM & 90))d input.txt > output.txt and then running the command 10 times (I assume there is a better way to do this!) The first problem I have is that I get the error: sed: -e expression #1, char 2: invalid usage of line address 0 I assume this has something to do with the fact that it might have already deleted line 1 and it is trying again. The second problem is that sometimes nothing is written to the output file, even though it worked before using the same command.
You probably wanted to use RANDOM % 90 rather than & . That's where the zeroes come from (deleting line 1 is OK; on the next run, the lines will be numbered 1 .. 89). There is a problem, though: the formula could generate the same number several times. To prevent that, use a different approach: shuffle the numbers and pick the first ten: shuf -i1-90 -n10 | sed 's/$/d/' | sed -f- input > output If you don't like sed generating a sed script, you can use printf, too: sed -f <( printf %dd\; $(shuf -i1-90 -n10) ) input > output
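An alternative that avoids generating a sed script at all (a sketch, assuming GNU shuf and an awk such as gawk or mawk that accepts - for standard input) is to let awk skip the chosen line numbers:
shuf -i 1-90 -n 10 | awk 'NR==FNR { skip[$1]; next } !(FNR in skip)' - input.txt > output.txt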
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144134/" ] }
244,384
Accidentally, Port 22 got closed. I cannot ssh into the instance, though the instance are running well on other desired ports. Getting following error while doing SSH. ssh: connect to host X.X.X.X port 22: Connection refused I restarted the instance, but still ssh is not working.The security groups are open for port 22 from anywhere ( 0.0.0.0/0 ). I was trying to set the default welcome message after SSH on the machine by editing /etc/ssh/sshd_config file. Just after editing and reloading the ssh with the following command I was unable to ssh again. sudo service ssh reload
I did it by detaching the volume from the current instance and then adding it to another instance as a secondary volume. Then the volume became readable, and I changed the ssh config file back to the default one. Finally I detached the volume and added it back to the original instance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144137/" ] }
244,391
I want to block the keyword "import" in any URL, including https websites. Example: http://www.abc.com/import/dfdsf https://xyz.com/import/hdovh How do I create an acl to do this? Thanks
I did it by detaching the volume from the current instance and then adding it to another instance as a secondary volume. Then the volume became readable, and I changed the ssh config file back to the default one. Finally I detached the volume and added it back to the original instance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144140/" ] }
244,465
(For simplicity I'll assume the file to read is the first argument - $1 .) I can do what I want externally with:
tempfile=$(mktemp)
awk '/^#/ {next}; NF == 0 {next}; {print}' "$1" > $tempfile
while read var1 var2 var3 var4 < $tempfile; do
    # stuff with var1, etc.
done
However, it seems absurd to need to call awk every time I parse the config file. Is there a way to make read ignore commented or whitespace-only lines in a file, without external binaries/potential performance issues? Answers so far are quite helpful! To clarify, I don't want to use a temp file, but I do want to read the config from a file , not from standard in. I'm well aware that I can use an input redirection when I call the script, but for various reasons that won't work in my circumstance. I want to softcode the input to read from, e.g.:
configfile="/opt/myconfigfile.txt"
[ $# -gt 0 ] && [ -r "$1" ] && configfile="$1"
while read var1 var2 var3 var4 < "$configfile" ; do ...
But when I try this, it just reads the first line of configfile over and over until I kill the process. Maybe this should be its own question...but it's probably a single line change from what I'm doing. Where's my error?
You don't need a tempfile to do this, and sed (or awk) are far more flexible in comment processing than a shell case statement. For example:
configfile='/opt/myconfigfile.txt'
[ $# -gt 0 ] && [ -r "$1" ] && configfile="$1"
sed -e 's/[[:space:]]*#.*// ; /^[[:space:]]*$/d' "$configfile" |
while read var1 var2 var3 var4; do
    # stuff with var1, etc.
done
# Note: var1 etc are not available to the script at this
# point. They are only available in the sub-shell running
# the while loop, and go away when that sub-shell ends.
This strips comments (with or without leading whitespace) and deletes empty lines from the input before piping it into the while loop. It handles comments on lines by themselves and comments appended to the end of the line:
# full-line comment
# var1 var2 var3 var4
abc 123 xyz def # comment here
Calling sed or awk for tasks like this isn't "absurd", it's perfectly normal. That's what these tools are for. As for performance, I'd bet that in anything but very tiny input files, the sed version would be much faster. Piping to sed has some startup overhead but runs very fast, while shell is slow. Update 2022-05-03: Note that the variables (var1, var2, var3, etc) which are set in the while read loop will "go out of scope" when the while loop ends. They can only be used inside that while loop. The while loop is being run in a sub-shell because the config file is being piped into it. When that sub-shell dies, its environment goes with it - and a child process cannot change the environment of its parent process. If you want the variables to retain their values after the while loop, you need to avoid using a pipe. For example, use input redirection ( < ) and process substitution ( <(...) ):
while read var1 var2 var3 var4; do
    # stuff with var1, etc.
done < <(sed -e 's/[[:space:]]*#.*// ; /^[[:space:]]*$/d' "$configfile")
# remainder of script can use var1 etc if and as needed.
With this process substitution version, the while loop runs in the parent shell and the sed script is run as a child process (with its output redirected into the while loop). sed and its environment go away when it finishes, while the shell running the while loop retains the variables created/changed by the loop.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135943/" ] }
244,469
To start with, I apologize if this is a painfully obvious/trivial issue; I'm still learning the ins and outs of linux/unix. I work with a few servers that require ssh and a private key to log into. So, the command is something like this: ssh -i /path/to/key.pem [email protected] I've created a bash script that lets me just use my own call, access , and just has a basic switch statement for the arguments that follow to control which server I log into. For example, access server1 would issue the appropriate ssh command to log into server1. The Problem The ssh call just hangs and I'm left with an empty terminal that won't accept SIGINT ( Ctrl + C ), and I must quit the terminal and open it up again to even use it. As far as I can tell, this might be a permissions thing for the private key. Its permissions are currently 600 . Changing it to 644 gives me an error that the permissions are too open and exits the ssh attempt. Any advice?
There is ssh_config , made for exactly this: you can specify your host aliases and keys there without resorting to such hara-kiri as bash scripts. It is basically stored in your ~/.ssh/config in this format:
Host host1
    Hostname 000.000.000.000
    User user
    IdentityFile /path/to/key.pem
and then you can simply call ssh host1 to get to 000.000.000.000. If you really want to be efficient and have even shorter shortcuts, a bash alias is more suitable than a bash script: alias access="ssh -i /path/to/key.pem [email protected]" If you really want to use a bash script, you need to force ssh to allocate you a TTY on the remote server using the -tt option: ssh -tti /path/to/key.pem [email protected] For more tips, you can browse through the manual pages for ssh and ssh_config .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144178/" ] }
244,494
My data: Question Nr. 311Main proteins are in the lorem ipsunA Lorem RNA testB CellsC MetoclomD CellsE MusclesQuestion Nr. 312Main proteins are in the lorem ipsunA LoremB CellsC MetoclomD CellsE Muscles... Wanted format: \item Main proteins are in the lorem ipsunA Lorem RNA testB CellsC MetoclomD CellsE Muscles\itemMain proteins are in the lorem ipsunA LoremB CellsC MetoclomD CellsE Muscles\item ... Where I am planning to present the options each on new line. My attempt: sed s/Question Nr.*/\item/g Which should replace all lines having Question Nr[anything on the line] - problem is in the detection what comes after, since there can be many options, but the end of options is \n\n i.e. the newline. Semistage problem here: \item Main proteins are in the lorem ipsunA Lorem RNA testB CellsC MetoclomD Cells E Muscles\item Main proteins are in the lorem ipsunA LoremB CellsC MetoclomD Cells E Muscles Other challenges Have capitalized words like HIV and RNA in the options; some solutions below insert empty line after HI and RN How can you get my wanted output by sed / perl ?
Another way with tr + sed : tr -s \\n <infile | sed '$!G;s/Question Nr.*/\\item/' tr squeezes runs of newlines down to a single one, and then sed appends the hold space content (an empty line) to each line except the last one, replacing Question Nr.* with \item . With this method you won't be able to edit the file in place. I chose tr here as it's faster than sed's regex (even if it's not as clean as a sed-only solution).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
244,502
I'm running file against a wallet.dat file (a file that Bitcoin keeps its private keys in) and even though there doesn't seem to be any identifiable header or string, file can still tell that it's a Berkeley DB file, even if I cut it down to 16 bytes. I know that file is applying some sort of rule or searching for some sequence to identify it. I want to know what the rule it's applying here is, so that I can duplicate it in my own program.
Grab the source of the file command. Most if not all open-source unices use this one . The file command comes with the magic database, named after the magic numbers that it describes. (This database is also installed on your live system, but in a compiled form.) Look for the file that contains the description text that you see: grep 'Berkeley DB' magic/Magdir/* The magic man page describes the format of the file. The trigger lines for “Berkeley DB” are
0   long    0x00061561  Berkeley DB
0   belong  0x00061561  Berkeley DB
12  long    0x00061561  Berkeley DB
12  belong  0x00061561  Berkeley DB
12  lelong  0x00061561  Berkeley DB
12  long    0x00053162  Berkeley DB
12  belong  0x00053162  Berkeley DB
12  lelong  0x00053162  Berkeley DB
12  long    0x00042253  Berkeley DB
12  belong  0x00042253  Berkeley DB
12  lelong  0x00042253  Berkeley DB
12  long    0x00040988  Berkeley DB
12  belong  0x00040988  Berkeley DB
12  lelong  0x00040988  Berkeley DB
The first column specifies the offset at which a certain byte sequence is to be found. The third column contains the byte sequence. The second column describes the type of byte sequence: long means 4 bytes in the platform's endianness ; lelong and belong mean 4 bytes in little-endian and big-endian order respectively. Rather than replicate the rules, you may want to call the file utility; it's specified by POSIX , but the formats that it recognizes and the descriptions that it outputs aren't. Alternatively, you can link to libmagic and call the magic_file or magic_buffer function.
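To check the rule by hand, a quick sketch using the offsets quoted above: dump the four bytes at offset 12 and compare them against the listed magic values (on a little-endian machine, od's host-order interpretation prints the value the same way the magic file spells it):
dd if=wallet.dat bs=1 skip=12 count=4 2>/dev/null | od -An -tx4
# prints one of the magic values listed above, e.g. 00053162, if the file is a Berkeley DB database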
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32067/" ] }
244,531
This is the process I want to kill:
sooorajjj@Treako ~/Desktop/MerkMod $ sudo netstat -tunap | grep :80
tcp6       0      0 :::80        :::*        LISTEN      20570/httpd
There are several ways to find which running process is using a port. Using fuser will give you the PID(s) of the instances associated with the listening port:
sudo apt-get install psmisc
sudo fuser 80/tcp
80/tcp: 1858 1867 1868 1869 1871
After finding out, you can either stop or kill the process(es). You can also find the PIDs and more details using lsof:
sudo lsof -i tcp:80
COMMAND PID USER     FD TYPE DEVICE SIZE/OFF NODE NAME
nginx  1858 root      6u IPv4   5043      0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx  1867 www-data  6u IPv4   5043      0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx  1868 www-data  6u IPv4   5043      0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx  1869 www-data  6u IPv4   5043      0t0 TCP ruir.mxxx.com:http (LISTEN)
nginx  1871 www-data  6u IPv4   5043      0t0 TCP ruir.mxxx.com:http (LISTEN)
To limit to sockets that listen on port 80 (as opposed to clients that connect to port 80): sudo lsof -i tcp:80 -s tcp:listen To kill them automatically: sudo lsof -t -i tcp:80 -s tcp:listen | sudo xargs kill
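On newer systems you may also have ss from iproute2, which can do the same lookup without installing anything extra (a sketch; the exact output columns vary by version):
sudo ss -ltnp 'sport = :80'
# LISTEN 0 511 *:80 *:* users:(("nginx",pid=1858,fd=6),...)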
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/244531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137002/" ] }
244,650
I have log files with a timestamp and six values on each line. I want to reduce the amount of data by removing consecutive lines with the same values (ignoring time stamps) and keeping the first and last line of each duplicate set. Preferably using a bash script. It should be a magic sed or awk command combination. Even if I have to parse the file multiple times, reading 3 lines at a time and removing the middle one is a good solution. original file:
1447790360 99999 99999 20.25 20.25 20.25 20.50
1447790362 20.25 20.25 20.25 20.25 20.25 20.50
1447790365 20.25 20.25 20.25 20.25 20.25 20.50
1447790368 20.25 20.25 20.25 20.25 20.25 20.50
1447790371 20.25 20.25 20.25 20.25 20.25 20.50
1447790374 20.25 20.25 20.25 20.25 20.25 20.50
1447790377 20.25 20.25 20.25 20.25 20.25 20.50
1447790380 20.25 20.25 20.25 20.25 20.25 20.50
1447790383 20.25 20.25 20.25 20.25 20.25 20.50
1447790386 20.25 20.25 20.25 20.25 20.25 20.50
1447790388 20.25 20.25 99999 99999 99999 99999
1447790389 99999 99999 20.25 20.25 20.25 20.50
1447790391 20.00 20.25 20.25 20.25 20.25 20.50
1447790394 20.25 20.25 20.25 20.25 20.25 20.50
1447790397 20.25 20.25 20.25 20.25 20.25 20.50
1447790400 20.25 20.25 20.25 20.25 20.25 20.50
desired result:
1447790360 99999 99999 20.25 20.25 20.25 20.50
1447790362 20.25 20.25 20.25 20.25 20.25 20.50
1447790386 20.25 20.25 20.25 20.25 20.25 20.50
1447790388 20.25 20.25 99999 99999 99999 99999
1447790389 99999 99999 20.25 20.25 20.25 20.50
1447790391 20.00 20.25 20.25 20.25 20.25 20.50
1447790394 20.25 20.25 20.25 20.25 20.25 20.50
1447790400 20.25 20.25 20.25 20.25 20.25 20.50
uniq is (sort of) the perfect tool for this; by default it keeps/shows the first, but not the last, line of each set. uniq has a -f flag which allows you to skip the first few fields. From man uniq:
  -f, --skip-fields=N   avoid comparing the first N fields
  -s, --skip-chars=N    avoid comparing the first N characters
A field is a run of blanks (usually spaces and/or TABs), then non-blank characters. Fields are skipped before chars. Example with uniq -c to show a count and see what uniq is doing:
-bash-4.2$ uniq -c -f 1 original_file
      1 1447790360 99999 99999 20.25 20.25 20.25 20.50
      9 1447790362 20.25 20.25 20.25 20.25 20.25 20.50
      1 1447790388 20.25 20.25 99999 99999 99999 99999
      1 1447790389 99999 99999 20.25 20.25 20.25 20.50
      1 1447790391 20.00 20.25 20.25 20.25 20.25 20.50
      3 1447790394 20.25 20.25 20.25 20.25 20.25 20.50
Not bad. Pretty close to what is wanted. And easy to do. But it's missing the last matching line in each group . . . . The grouping options in uniq are also interesting for this question . . .
  --group[=METHOD]            show all items, separating groups with an empty line
                              METHOD={separate(default),prepend,append,both}
  -D, --all-repeated[=METHOD] print all duplicate lines;
                              groups can be delimited with an empty line
                              METHOD={none(default),prepend,separate}
Example, uniq by group (the empty lines delimit the groups) . . .
-bash-4.2$ uniq --group=both -f 1 original_file

1447790360 99999 99999 20.25 20.25 20.25 20.50

1447790362 20.25 20.25 20.25 20.25 20.25 20.50
1447790365 20.25 20.25 20.25 20.25 20.25 20.50
1447790368 20.25 20.25 20.25 20.25 20.25 20.50
1447790371 20.25 20.25 20.25 20.25 20.25 20.50
1447790374 20.25 20.25 20.25 20.25 20.25 20.50
1447790377 20.25 20.25 20.25 20.25 20.25 20.50
1447790380 20.25 20.25 20.25 20.25 20.25 20.50
1447790383 20.25 20.25 20.25 20.25 20.25 20.50
1447790386 20.25 20.25 20.25 20.25 20.25 20.50

1447790388 20.25 20.25 99999 99999 99999 99999

1447790389 99999 99999 20.25 20.25 20.25 20.50

1447790391 20.00 20.25 20.25 20.25 20.25 20.50

1447790394 20.25 20.25 20.25 20.25 20.25 20.50
1447790397 20.25 20.25 20.25 20.25 20.25 20.50
1447790400 20.25 20.25 20.25 20.25 20.25 20.50

Then grep for the line before and after every empty line and strip blank lines:
-bash-4.2$ uniq --group=both -f 1 original_file |grep -B1 -A1 ^$ |grep -Ev "^$|^--$"
1447790360 99999 99999 20.25 20.25 20.25 20.50
1447790362 20.25 20.25 20.25 20.25 20.25 20.50
1447790386 20.25 20.25 20.25 20.25 20.25 20.50
1447790388 20.25 20.25 99999 99999 99999 99999
1447790389 99999 99999 20.25 20.25 20.25 20.50
1447790391 20.00 20.25 20.25 20.25 20.25 20.50
1447790394 20.25 20.25 20.25 20.25 20.25 20.50
1447790400 20.25 20.25 20.25 20.25 20.25 20.50
Tah dahhh! Pretty good.
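If you prefer a single pass without chaining uniq and grep, here is an awk sketch that keeps the first and last line of every run of lines that are identical in everything after the timestamp; it is only verified against the sample above, so treat it as a starting point:
awk '
  { key = $0; sub(/^[^ ]+ /, "", key) }        # key = the line minus its timestamp
  key != prev {                                # a new group starts here
      if (NR > 1 && last != first) print last  # emit the last line of the previous group
      print                                    # emit the first line of the new group
      first = $0
  }
  { prev = key; last = $0 }
  END { if (last != first) print last }        # close out the final group
' original_file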
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144292/" ] }
244,732
I found the following command in the .profile file while installing node through nvm: [ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" I want to know what the purpose of [ ] and && is in this command. I have not encountered this syntax before, and I want to understand what the command is doing and what the syntax is called. My guess is that it is creating a soft link, am I right? EDIT: nvm.sh is not an executable file.
The [ is a test construct:
$ help [
[: [ arg... ]
    Evaluate conditional expression.
    This is a synonym for the "test" builtin, but the last argument must
    be a literal `]', to match the opening `['.
The -s is one of the available tests; it returns true if the file both exists and is not empty:
$ help test | grep -- -s
      -s FILE        True if file exists and is not empty.
The && is the AND operator . It will run the command on the right only if the command on the left was successful. Finally, the . is the source command which tells the shell to evaluate any code in the sourced file within the same shell session:
$ help .
.: . filename [arguments]
    Execute commands from a file in the current shell.
    Read and execute commands from FILENAME in the current shell. The
    entries in $PATH are used to find the directory containing FILENAME.
    If any ARGUMENTS are supplied, they become the positional parameters
    when FILENAME is executed.
So, the command you posted is the same as:
## If the file exists and is not empty
if [ -s "$NVM_DIR/nvm.sh" ]; then
    ## Source it
    . "$NVM_DIR/nvm.sh"
fi
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125813/" ] }
244,748
I have a script which runs several commands remotely through ssh. I'm running each command separately because I want to do other things in between executions. However, I don't want to recreate an ssh session every time I issue a new command. I've read about -oControlMaster but I can't seem to get it to work. When I run: ssh -oControlMaster=yes -oControlPath=/tmp/test.sock root@host after I enter my password, I just get an ssh session. If I exit out, the /tmp/test.sock file is nowhere to be found. What am I missing?
You can use the ControlPersist option to keep the socket around after you disconnect from the server. E.g. in my ssh config file I have this snippet, which leaves the connection open for 3 seconds:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-socket/%r@%h:%p
    #ControlPath ~/.ssh/%r@%h:%p
    ControlPersist 3s
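For scripted use you may also want to check and tear down the master connection explicitly; these control operations are part of stock OpenSSH and use the ControlPath configured above:
ssh -O check host1      # is a master connection for host1 still alive?
ssh -O exit host1       # close the shared connection when the script is done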
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244748", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6266/" ] }
244,750
I have a Dell Inspiron 5523, which has two drives. One is an HDD (call it sda) and one an SSD (call it sdb). My system is dual boot UEFI with a large part of it being Windows 8 (around 332GB) and a the rest Linux Mint (72GB). My system had a swap space on sdb and on sda I have two partitions: sda9 has all the system files and sda10 has the home folder files. Recently, I wanted to pass some space from sda9 to sda10 because the first was set up with 60GB and the second with only 7.7GB. So I used gparted live CD and moved 30 GB from sda9 to sda10. After the procedure finished with no problem, when rebooting again and choosing the Linux Mint Cinammon option, I got a kernel panic printing the following: Kernel panic - not syncing: Attempted to kill init! exitcode:0x00007f00CPU: 2 PID: 1 Comm: sh Not tainted 3.16.0-38-generic #52~14.04.1-UbuntuCall Trace:dump_stack +0x45/0x56panic+0xc8/0x1fcdo_exit+0xa57/0xa60do_group_exit+0x3f/0xa0SyS_exit_group+0x14/0x20System_call_fastpath+0x1a/0x1fKernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)drm_kms_helper: panic ocurred, switching back to text console I have tried to fix grub with grub repair but after successfully doing that I've seen no change (the error stays the same). When trying to boot manually following these instructions , I get the following error: Targeted filesystem doesn't have requested /sbin/init./bin/sh: 0: can't access tty: Job control turned off# Note that when I write ls on the GRUB command line, I get partitions in the form of (hd0,gpt9) -> sda9
You can use the ControlPersist option to leave the socket after you disconnect from the server. e.g in my ssh config file i have this snippet, which leave the connection open 3 sec. Host * ControlMaster auto ControlPath ~/.ssh/master-socket/%r@%h:%p #ControlPath ~/.ssh/%r@%h:%p ControlPersist 3s
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144369/" ] }
244,752
In zsh (and other shells), if I include an argument like (for example): {a,b,c}{d,e,f} brace expansion turns it into: ad ae af bd be bf cd ce cf For my purposes, the argument order is important, and I need the braces to expand right-to-left instead of left-to-right. That is, I want the expansion to be: ad bd cd ae be ce af bf cf Is there a way to control the order that multiple sets of braces are expanded? I'm looking for something that will work in any situation, not just with these arguments.
You can combine parameter expansion with brace expansion. % foo=(d e f)$ echo {a,b,c}${^foo}ad bd cd ae be ce af bf cf If you don't want to define foo separately (as seems likely), you can use the following: $ echo {a,b,c}${^:-d e f}ad bd cd ae be ce af bf cf If you have the rcexpandparam option set, then you don't need the ^ in either example to enable this behavior. (Note: while testing, I also had the shwordsplit option set. If you don't have it set, then try, for example echo {a,b,c}${^=:-d e f} . Moral of the story: almost anything is possible in zsh , but you have to make sure you are using the correct combination of options and syntax.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73/" ] }
244,764
I need to grep some files to see which ones contain a certain word: grep -l word * Next I need to grep that list of files to see which ones contain a different word. The easy way would probably be to write a script but I don't know quite how to work it.
Assuming none of the file names contain whitespace, single quote, double quote or backslash characters (or start with - with GNU grep ), you can do: grep -l word * | xargs grep word2 Xargs will run the second grep over each file from the first grep. With GNU grep / xargs or compatible, you can make it more reliable with: grep -lZ word ./* | xargs -r0 grep word2 Using -Z makes grep print the file names NUL-delimited so it can be used with xargs -0 . The -r option to xargs avoids the second grep being run if the first grep doesn't find anything.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144373/" ] }
244,767
As you probably know ACPI procfs is deprecated in new kernel versions and with sysfs I don't know of a clean way of reading the state of the lid button. The only way I've come up with is hooking up acpid event of lid button change and writing its state to some file. But the issue with this approach is that in case you put your laptop to sleep with lid closed and resume it with lid open, you will end up with a wrong state written in that status file. Also I wouldn't mind if there was a way to retrieve the state with acpi_call module.
Assuming none of the file names contain whitespace, single quote, double quote or backslash characters (or start with - with GNU grep ), you can do: grep -l word * | xargs grep word2 Xargs will run the second grep over each file from the first grep. With GNU grep / xargs or compatible, you can make it more reliable with: grep -lZ word ./* | xargs -r0 grep word2 Using -Z makes grep print the file names NUL-delimited so it can be used with xargs -0 . The -r option to xargs avoids the second grep being run if the first grep doesn't find anything.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53317/" ] }
244,779
I have a list of words in a text file, one per line. I would like to make a symbolic link for each word in the text file. For example, let's say the real directory is /stuff/testing/original and the text file is of the format:
word1
word2
word3
I want to have /stuff/testing/word1, /stuff/testing/word2, /stuff/testing/word3, all being symlinks to /stuff/testing/original. What is the best way to accomplish this?
Assuming none of the file names contain whitespace, single quote, double quote or backslash characters (or start with - with GNU grep ), you can do: grep -l word * | xargs grep word2 Xargs will run the second grep over each file from the first grep. With GNU grep / xargs or compatible, you can make it more reliable with: grep -lZ word ./* | xargs -r0 grep word2 Using -Z makes grep print the file names NUL-delimited so it can be used with xargs -0 . The -r option to xargs avoids the second grep being run if the first grep doesn't find anything.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140147/" ] }
244,798
So, being foolish old me, I tried to use os-uninstaller to remove my Windows installation and make way for Arch Linux. Unfortunately, now nothing on this computer boots, and I cannot even boot from a USB! The "USB HDD" option has completely disappeared from the BIOS and has been replaced by "ubuntu". I can't seem to boot from anything, and was looking for a way to restore the ability to boot from USB again. If it helps, the computer is a Samsung NP540U3C and the BIOS is Phoenix Securecore Tiano Setup. Also, why would ubuntu remove the ability to boot from USB in the first place? Very perplexed over here.
Assuming none of the file names contain whitespace, single quote, double quote or backslash characters (or start with - with GNU grep ), you can do: grep -l word * | xargs grep word2 Xargs will run the second grep over each file from the first grep. With GNU grep / xargs or compatible, you can make it more reliable with: grep -lZ word ./* | xargs -r0 grep word2 Using -Z makes grep print the file names NUL-delimited so it can be used with xargs -0 . The -r option to xargs avoids the second grep being run if the first grep doesn't find anything.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144386/" ] }
244,811
Is there any way to disallow file execution from the home directory in Linux? My goal is to protect my system from malicious scripts etc. Sure, I can remove the execution bit with chmod for /home/user and all its subdirectories, but it could easily be changed back since the user is the owner of /home/user . So I am thinking about enabling execution from /bin , /usr/bin , /usr/sbin only and disallowing execution from other directories. My system is Debian 8.
if /home is a separate partition, you can mount it with the noexec option. By doing this, you are destroying (or attempting to) much of the functionality of a unix system for your users as it disables ALL user-written scripts, not just "malicious" ones. Writing scripts to get stuff done is a perfectly normal thing for unix users to do. It still doesn't stop them from writing scripts and executing them with bash myscript.sh or perl myscript.pl etc. If you don't have at least minimal trust in your users, don't give them a shell, or give them a restricted shell such as /bin/rbash instead of /bin/bash .
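If you do decide /home should be a separate, non-executable mount, the change is one line in /etc/fstab (a sketch; the device name is only an example):
# /etc/fstab
/dev/sdb1   /home   ext4   defaults,nosuid,nodev,noexec   0   2
# or apply it to an already-mounted /home without rebooting:
mount -o remount,noexec /home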
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92830/" ] }
244,887
I have two parallel files with the same number of lines in two languages and plan to merge these two files line by line with the delimiter ||| . E.g., the two files are as follows: File A:
1Mo 1,1 I love you.
1Mo 1,2 I like you.
Hi 1,3 I am hungry.
Hi 1,4 I am foolish.
File B:
1Mo 1,1 Ich liebe dich.
1Mo 1,2 Ich mag dich.
Hi 1,3 Ich habe Durst.
Hi 1,4 Ich bin neu.
The expected output is like this:
1Mo 1,1 I love you. ||| 1Mo 1,1 Ich liebe dich.
1Mo 1,2 I like you. ||| 1Mo 1,2 Ich mag dich.
Hi 1,3 I am hungry. ||| Hi 1,3 Ich habe Durst.
Hi 1,4 I am foolish. ||| Hi 1,4 Ich bin neu.
I tried the paste command such as: paste -d "|||" fileA fileB But the returned output only contains one pipe, such as:
1Mo 1,1 I love you. |1Mo 1,1 Ich liebe dich.
1Mo 1,2 I like you. |1Mo 1,2 Ich mag dich.
Is there any way to separate each pair of lines by a triple pipe ||| ?
With POSIX paste : :|paste -d ' ||| ' fileA - - - - fileB paste will concatenate corresponding lines of all input files. Here we have six files: fileA , four dummy files from standard input - , and fileB . The list of delimiters includes a space, three pipes and a space; paste uses them cyclically, in that order. For the first line of the six files, fileA will be concatenated with the first dummy file (which is nothing, thanks to the no-op : operator), producing line1-fileA<space> . The first dummy file will be concatenated with the second by a pipe, producing line1-fileA | , then the second dummy file with the third dummy file, producing line1-fileA || , the third dummy file with the fourth dummy file, producing line1-fileA ||| . And the fourth dummy file with fileB , producing line1-fileA ||| line1-fileB . Those steps are repeated for all lines, giving you the expected result. The use of :| is just to save typing, and is mainly for interactive shells. In a script, you should use: </dev/null paste -d ' ||| ' fileA - - - - fileB to prevent a subshell from being spawned.
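An equivalent without the dummy-file trick, if you find it more readable (a sketch using awk, which behaves the same for files with equal line counts):
awk 'NR==FNR { a[FNR] = $0; next } { print a[FNR], "|||", $0 }' fileA fileB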
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244887", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143368/" ] }
244,894
I have an Acer Aspire on which I installed Linux Mint 17.2. The touchpad does not work at all; xinput doesn't even list any touchpad unit at all. Probably a driver issue, is there some way to make it work?
The solution: add i8042.nopnp to the kernel command line. To do this: sudoedit /etc/default/grub and add: GRUB_CMDLINE_LINUX="i8042.nopnp" If there's already a line with GRUB_CMDLINE_LINUX=… , add i8042.nopnp inside the quotes, separated from any other word within the quotes by a space, e.g. GRUB_CMDLINE_LINUX="some-other=option i8042.nopnp" Then run sudo update-grub and reboot. Hope it works, it worked for me!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144448/" ] }
244,900
The Fedora (23) Mate comes with 4 desktops. How can I add more desktops? The manual says "right click on the desktop switcher applet -> settings, and change the amount of available desktops." There is no field to change the number of desktops!
The solution: add i8042.nopnp to the kernel command line. To do this: sudoedit /etc/default/grub and add: GRUB_CMDLINE_LINUX="i8042.nopnp" If there's already a line with GRUB_CMDLINE_LINUX=… , add i8042.nopnp inside the quotes, separated from any other word within the quotes by a space, e.g. GRUB_CMDLINE_LINUX="some-other=option i8042.nopnp" Then run sudo update-grub and reboot. Hope it works, it worked for me!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12891/" ] }
244,913
I want to accomplish the equivalent of:
list=()
while read i; do
    list+=("$i")
done <<<"$input"
with
IFS=$'\n' read -r -a list <<<"$input"
What am I doing wrong?
input=`/bin/ls /`
IFS=$'\n' read -r -a list <<<"$input"
for i in "${list[@]}"; do
    echo "$i"
done
This should print a listing of / , but I'm only getting the first item.
You must use mapfile (or its synonym readarray , which was introduced in bash 4.0): mapfile -t list <<<"$input" One read invocation only works with one line, not the entire standard input. read -a list populates the content of the first line of standard input into the array list . In your case, you got bin as the only element in the array list .
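mapfile also reads nicely from a command directly, which avoids the intermediate $input variable altogether (assuming bash >= 4, as above):
mapfile -t list < <(ls /)     # one array element per line of output
printf '%s\n' "${list[@]}"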
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
244,936
How do I output a string in the bottom right corner of the terminal?
string=whatever
stty size | {
  read y x
  tput sc                                       # save cursor position
  tput cup "$((y - 1))" "$((x - ${#string}))"   # position cursor
  printf %s "$string"
  tput rc                                       # restore cursor.
}
That assumes all characters in $string are one cell wide (and that $string doesn't contain control characters (like newline, tab...)). If your string may contain zero-width (like combining characters) or double-width ones, you could use ksh93's printf 's %Ls format specifier that formats based on character width:
string='whatéver'
# aka string=$'\uFF57\uFF48\uFF41\uFF54\uFF45\u0301\uFF56\uFF45\uFF52'
stty size | {
  read y x
  tput sc                      # save cursor position
  tput cup "$((y - 1))" 0      # position cursor
  printf "%${x}Ls" "$string"
  tput rc                      # restore cursor.
}
That would erase the leading part of the last line though.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244936", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
244,942
I am initially producing two files which contain lists of URLs—I will refer to them as old and new . I would like to compare the two files and if there are any URLs in the new file which are not in the old file, I would like these to be displayed in an extra_urls file. Now, I've read some stuff about using the diff command but from what I can tell, this also analyses the order of the information. I don't want the order to have any effect on the output. I just want the extra URL's in new printed to the extra_urls file, no matter what order they are placed in either of the other two files. How can I do this?
You can use the comm command to compare two files, and selectively show lines unique to one or the other, or the lines in common. It requires the inputs to be sorted, but you can sort them on the fly, by using process substitution. comm -13 <(sort old.txt) <(sort new.txt) If you're using a version of bash that doesn't support process substitution, it can be emulated using named pipes. An example is shown in Wikipedia .
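If you'd rather not sort the files, grep can do the same membership test directly, treating each line of old.txt as a fixed, full-line pattern (for very large files, comm on sorted input will be faster):
grep -Fxv -f old.txt new.txt > extra_urls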
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244942", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137780/" ] }
244,943
I have the below paths in my system:
/a/b/z/c/d/
/e/f/z/y/g/
In the "z" directory there are some files. I just want to make a tar of those files without going into the directory.
You can use the comm command to compare two files, and selectively show lines unique to one or the other, or the lines in common. It requires the inputs to be sorted, but you can sort them on the fly, by using process substitution. comm -13 <(sort old.txt) <(sort new.txt) If you're using a version of bash that doesn't support process substitution, it can be emulated using named pipes. An example is shown in Wikipedia .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/244943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144485/" ] }
244,970
My current directory is buried deep in multiple subfolder layers below my home directory. If I want to open this directory in a GUI-based file browser, I have to double-click folder after folder to reach it. This is very time consuming. On the other hand, with very few keystrokes and hitting the tab key a few times, it is very easily reachable via a terminal. I want to know if there is a way to open the current directory of a terminal in a file browser. What is the command to do this? For reference, I have an Ubuntu system, but I'd like to know what the commands are across the various distributions of Linux.
xdg-open . xdg-open is part of the xdg-utils package, which is commonly installed by default in many distributions (including Ubuntu). It is designed to work for multiple desktop environments, calling the default handler for the file type in your desktop environment. You can pass a directory, file, or URL , and it will open the proper program for that parameter. For example, on my KDE system: xdg-open . opens the current directory in the Dolphin file manager xdg-open foo.txt opens foo.txt in emacsclient, which I've configured to be the default handler for .txt files xdg-open http://www.google.com/ opens google.com in my default web browser The application opens as a separate window, and you'll get a prompt back in your terminal and can issue other commands or close your terminal without affecting your new GUI window. I usually get a bunch of error message printed to stderr , but I just ignore them. Edit: Adding the arguments xdg-open . >/dev/null 2>&1 redirects the errors and the output. This call won't block your terminal. Binding this to an alias like filemanager='xdg-open . >/dev/null 2>&1' can come in handy.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/244970", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15417/" ] }
244,983
I want to clone a hard disk using dd. Because I want to keep a process on the machine alive continuously, I would like to do this while the filesystem is still mounted. I understand this is not the "ideal" way to do this, but it also seems from Googling that it is possible. The clone is being used as a backup; in the event of a hard disk failure I would like to have an image to dd back onto a new hard disk. The OS that is running lives on the disk I want to clone. The process I have running does do some disk I/O but not with the disk I wish to clone. As far as I know, only the OS/system processes would be reading or writing to the disk while I do this operation. What I want to know is if this light use is likely to ruin the whole cloned image? I imagine that there's a danger of getting a few files corrupted if they're being written as they are read by dd, but I have no idea how likely it is to ruin the backup. Can anyone share some insights? Short of putting the it on a disk and trying to start it, is there any way I can verify the integrity of the image? Thanks!
If you're lucky, the filesystem corruption will be detected as soon as you try to mount the copy. If you're unlucky, it won't be detected until later. It's also possible that you'll manage to get a consistent copy of the filesystem except for the files that were modified during the copy. But I wouldn't count on it. It might work with ext4 as long as you don't create, delete or move any file, so that the directories aren't modified. If you copy a filesystem that's mounted read-only, of course, everything will be fine. Except that you shouldn't use dd , use cat instead. There are several reliable ways to clone a disk. Pick one of these, rather than one that practically guarantees corruption. Some filesystems offer a clone functionality, for example btrfs . I don't think ext4 does. If the filesystem is on Linux's native partition scheme, i.e. an LVM volume, you can make an LVM snapshot. That requires that you use LVM, rather than putting the filesystem directly on some other partition scheme such as MBR or GPT. You'll be left with a filesystem that wasn't cleanly unmounted, but represents a consistent snapshot of the original at a point in time. If you can get the filesystem onto a RAID-1 array, you can clone it by adding a member to the array, waiting for it to synch, and detaching the new member. Here too you'll have a consistent but not clean snapshot. You can create a RAID-1 volume around an existing filesystem , but that requires an offline step to shrink the filesystem by 128kB. You can make a file-level backup. That won't get you a consistent view of the filesystem, since copying files takes time, but it does guarantee at least that every file that wasn't modified during the backup will be backed up correctly.
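As a concrete sketch of the LVM snapshot option (the volume group and logical volume names are placeholders; adjust them to your layout, and make sure the volume group has free extents for the snapshot):
lvcreate --snapshot --size 5G --name root_snap /dev/vg0/root   # freeze a point-in-time view
cat /dev/vg0/root_snap > /backup/root.img                      # copy the consistent snapshot
lvremove /dev/vg0/root_snap                                    # drop the snapshot when done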
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144510/" ] }
244,994
I am using Sugar on a Stick (Fedora 23) 0.106 for i686 as a distro. When I use the terminal I get a very weird behaviour. For example when I type ls I get ]777;notify;Command completed;ls[sugar] # where [sugar] # is the value of my PS1 environmental variable. My .bashrc look like this: # .bashrc# Source global definitionsif [ -f /etc/bashrc ]; then . /etc/bashrcfi# PromptPS1="[sugar] # "# Uncomment the following line if you don't like systemctl's auto-paging feature:# export SYSTEMD_PAGER=# User specific aliases and functions The problem is gone when I comment out the Source global definitions section. However when I wanted to modify the /etc/bashrc I read that it is not wise to modify this file. Here's the file: # /etc/bashrc# System wide functions and aliases# Environment stuff goes in /etc/profile# It's NOT a good idea to change this file unless you know what you# are doing. It's much better to create a custom.sh shell script in# /etc/profile.d/ to make custom changes to your environment, as this# will prevent the need for merging in future updates.# are we an interactive shell?if [ "$PS1" ]; then if [ -z "$PROMPT_COMMAND" ]; then case $TERM in xterm*|vte*) if [ -e /etc/sysconfig/bash-prompt-xterm ]; then PROMPT_COMMAND=/etc/sysconfig/bash-prompt-xterm elif [ "${VTE_VERSION:-0}" -ge 3405 ]; then PROMPT_COMMAND="__vte_prompt_command" else PROMPT_COMMAND='printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/\~}"' fi ;; screen*) if [ -e /etc/sysconfig/bash-prompt-screen ]; then PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen else PROMPT_COMMAND='printf "\033k%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/\~}"' fi ;; *) [ -e /etc/sysconfig/bash-prompt-default ] && PROMPT_COMMAND=/etc/sysconfig/bash-prompt-default ;; esac fi # Turn on parallel history shopt -s histappend history -a # Turn on checkwinsize shopt -s checkwinsize [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ " # You might want to have e.g. tty in prompt (e.g. more virtual machines) # and console windows # If you want to do so, just add e.g. # if [ "$PS1" ]; then # PS1="[\u@\h:\l \W]\\$ " # fi # to your custom modification shell script in /etc/profile.d/ directoryfiif ! shopt -q login_shell ; then # We're not a login shell # Need to redefine pathmunge, it get's undefined at the end of /etc/profile pathmunge () { case ":${PATH}:" in *:"$1":*) ;; *) if [ "$2" = "after" ] ; then PATH=$PATH:$1 else PATH=$1:$PATH fi esac } # By default, we want umask to get set. This sets it for non-login shell. # Current threshold for system reserved uid/gids is 200 # You could check uidgid reservation validity in # /usr/share/doc/setup-*/uidgid file if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then umask 002 else umask 022 fi SHELL=/bin/bash # Only display echos from profile.d scripts if we are no login shell # and interactive - otherwise just process them to set envvars for i in /etc/profile.d/*.sh; do if [ -r "$i" ]; then if [ "$PS1" ]; then . "$i" else . "$i" >/dev/null fi fi done unset i unset -f pathmungefi# vim:ts=4:sw=4 What can I do about it?
In addition to the PS1 environment variable, the PROMPT_COMMAND environment variable also affects your prompt. From the bash man page: If set, the value is executed as a command prior to issuing each primary prompt It is that command that is adding the unwanted content to your prompt. You can stop that behavior by unsetting the variable in your .bashrc: unset PROMPT_COMMAND
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/244994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128489/" ] }
245,036
What is the best way to generate random numbers in Solaris? I cannot seem to find a good answer for this. Most results do not work in my environment. There is a variable or command RAND that seems like it should work in some manner similar to the $RANDOM I see in most of my searches, but it always produces 0. I have found this command: od -X -A n /dev/random | head -2 which seems very random, but the return format is odd (to me): 140774 147722 131645 061031 125411 053337 011722 165106 066120 073123 040613 143651 040740 056675 061051 015211 Currently using:
-bash-3.2$ uname -a
SunOS XXXXXXXXX 5.10 Generic_150400-29 sun4v sparc SUNW,SPARC-Enterprise-T5120
$RANDOM is available in ksh and in bash, but not in /bin/sh . The value is a random number between 0 and 32767, and is not suitable for cryptographic use. Reading from /dev/random generates a stream of random bytes which is suitable for cryptographic use. Since these are arbitrary bytes, potentially including null bytes, you can't store them in a shell variable. You can store $n bytes in a file with </dev/random dd ibs=1 count=$n >rnd You can use od to transform these bytes into a printable representation using octal or hexadecimal values. If you find the output “strange”, well, maybe you should pick different od options. Another option to obtain a printable representation is to call uuencode to produce Base64: </dev/random dd ibs=1 count=$n | uuencode -m _ | sed -e '1d' -e '$d'
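If what you want is simply one random integer in a shell variable, a sketch that sticks to POSIX od options (use /usr/xpg4/bin/od on Solaris if the default od doesn't accept them; the arithmetic expansion needs bash or ksh, which you are already using):
n=$(od -An -N4 -tu4 /dev/random | tr -d ' ')
echo "$n"                     # e.g. 3146889629
echo $(( n % 100 ))           # scale it to a range, here 0-99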
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/245036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130356/" ] }
245,064
I'd like to be able to wrap a command so that if its output doesn't fit in a terminal it will be automatically piped through a pager. Right now I'm using the following shell function (in zsh, under Arch Linux):
export LESS="-R"
RET="$($@)"
RET_LINES="$(echo "${RET}" | wc -l)"
if [[ $RET_LINES -ge $LINES ]]; then
    echo "${RET}" | ${PAGER:="less"}
else
    echo "${RET}"
fi
but this doesn't really convince me. Is there a better way (in terms of robustness and overhead) to achieve what I want? I'm open to zsh-specific code, too, if it does the job well. Update: Since I asked this question I found an answer which provides a somewhat better, if more complicated, solution, which buffers at most $LINES lines before piping the output to less instead of caching it all. Sadly that's not really satisfying either, because neither solution accounts for long, wrapped lines. For example, if the code above is stored in a function called pager_wrap , then pager_wrap echo {1..10000} prints a very long line to stdout instead of piping through a pager.
I’ve got a solution that’s written for POSIX shell compliance,but I’ve tested it only in bash,so I don’t know for sure whether it’s portable. And I don’t know zsh, so I have made no attempt to make it zsh-friendly. You pipe your command into it;passing a command as argument(s) to another commandis a bad design * . Of course any solution to this problem needs to knowhow many rows and columns the terminal has. In the code below, I’ve assumed that you can rely onthe LINES and COLUMNS environment variables (which less looks at). More reliable methods are: use rows="${LINES:=$(tput lines)}" and cols="${COLUMNS:=$(tput cols)}" , as suggested by A.P. , or look at the output from stty size . Note that this command must have the terminal as its standard input,so, if it’s in a script, and you’re piping into the script,you’ll have to say stty size <&1 (in bash) or stty size < /dev/tty . Capturing its output is even more complicated. The secret ingredient: the fold command will break long linesthe way the screen will, so the script can handle long lines correctly. #!/bin/shbuffer=$(mktemp)rows="$LINES"cols="$COLUMNS"while truedo IFS= read -r some_data e=$? # 1 if EOF, 0 if normal, successful read. printf "%s" "$some_data" >> "$buffer" if [ "$e" = 0 ] then printf "\n" >> "$buffer" fi if [ $(fold -w"$cols" "$buffer" | wc -l) -lt "$rows" ] then if [ "$e" != 0 ] then cat "$buffer" else continue fi else if [ "$e" != 0 ] then "${PAGER:="less"}" < "$buffer" # The above is equivalent to # cat "$buffer" | "${PAGER:="less"}" # … but that’s a UUOC. else cat "$buffer" - | "${PAGER:="less"}" fi fi breakdonerm "$buffer" To use this: Put the above into a file; let’s assume you call it mypager . (Optionally) put it into a directory that’s is your search path;e.g., $HOME/bin . Make it executable by typing chmod +x mypager . Use it in commands like ps ax | mypager or ls -la | mypager . If you skipped the second step(putting the script into a directory that’s is your search path),you’ll have to do ps ax | path_to_mypager /mypager ,where path_to_mypager can be a relative path like “ . ”. * Why is passing a command as argument(s) to another command a bad design? I. Aesthetics / Conformance to Traditions / Unix Philosophy Unix has a philosophy of Do One Thing and Do It Well . For example, if a program is going to display data in a certain way(as pagers do),then it shouldn’t also be invoking the mechanism that produces the data. That’s what pipes are for. Not many Unix programs execute user-specified commands or programs. Let’s look at some that do: The shell, as in sh -c " command " Well, running user-specified commands is the shell’s job ;it’s the One Thing that the shell does. (Of course I am not saying that the shell is a simple program.) env , nice , nohup , setsid , su , and sudo . These programs have something in common — they all exist to run a programwith a modified execution environment 1 . They have to work the way they do,because Unix generally doesn’t allow youto change the execution environment of another process;you have to change your own process, and then fork and/or exec . _______ 1 I’m using the phrase execution environment in the broad sense, referring not only to environment variables,but also process attributes such as “ nice ” value, UID and GIDs,process group, session ID, controlling terminal, open files,working directory, umask value, ulimit s,signal dispositions, alarm timer, etc. Programs that allow a “shell escape”. 
The only example that springs to mind is vi / vim ,although I’m pretty sure that there are others. These are historical artifacts. They predate window systems and even job control;if you were editing a file, and you wanted to do something else(like look at a directory listing), you would have had to save your fileand exit from the editor to get back to your shell. Nowadays you can switch to another window,or use Ctrl + Z (or type :suspend )to get back to your shell while keeping your editor alive,so shell escapes are, arguably, obsolete. I’m not counting programs that execute other (hard-coded) programsso as to leverage their capabilities rather than duplicate them. For example, some programs may execute diff or sort . (For example, there are tales that that early versions of spell used sort -u to get a list of the words used in a document,and then diff — or perhaps comm — to compare that listto the dictionary word list and identify which words from the documentwere not in the dictionary.) II. Timing Issues The way your script is written, the RET="$($@)" line doesn’t completeuntil the invoked command completes. Therefore, your script cannot begin to display datauntil the command that generates it has completed. Probably the simplest way to fix thatis to make the data-generating commandseparate from the data-displaying program (although there are other ways). III. Command History Suppose you run some commandwith output processed by your display filter, and you look at the output,and decide that you want to save that output in a file. If you had typed (as a hypothetical example) ps ax | mypager you can then type !:1 > myfile or press ↑ and edit the line appropriately. Now, if you had typed mypager "ps ax" you can still go back and edit that command into ps ax > myfile ,but it’s not so straightforward. Or suppose you decide that you want to run ps uax next. If you had typed ps ax | mypager , you could do !:0 u!:* Again, with mypager "ps ax" , it’s still doable, but, arguably, harder. Also, look at the two commands: ps ax | mypager and mypager "ps ax" . Suppose you run a history listing an hour later. ISTM that you’d have to look at mypager "ps ax" a little bit harderto see what the command being executed is. IV. Complex Commands / Quoting echo {1..10000} is obviously just an example command; ps ax isn’t much better. What if you want to do something just a little bit more realistic,like ps ax | grep oracle ?  If you type mypager ps ax | grep oracle it will run mypager ps ax and pipe the output from that through grep oracle . So, if the output from ps ax is 30 lines long, mypager will invoke less ,even if the output from ps ax | grep oracle is only 3 lines. There are probably examples that will fail in a more dramatic fashion. So you have to do what I was showing earlier: mypager "ps ax | grep oracle" But RET="$($@)" can’t handle that. There are, of course, ways to handle things like that, but they are discouraged. What if the command line whose output you want to captureis even more complicated; e.g., command 1 " arg 1 " | command 2 ' arg 2 ' $' arg 3 ' where the arguments contain messy combinations of space, tab, $ , | , \ , < , > , * , ; , & , [ , ] , ( , ) , ` ,and maybe even ' and " .  A command like that can be hard enoughto type directly into the shell correctly.  Now imagine the nightmareof having to quote it to pass it as an argument to mypager .
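Back to the script itself: a few quick smoke tests, assuming it was saved as mypager somewhere on your search path and the terminal is roughly 80×24. The last one exercises the long-line case from the question, which works because fold counts the wrapped lines:
seq 5 | mypager                       # short output: printed directly, no pager
seq 500 | mypager                     # tall output: handed to ${PAGER:-less}
seq 10000 | tr '\n' ' ' | mypager     # one very long line: still paged, thanks to fold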
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/245064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34530/" ] }
245,107
Sometimes I see that the current working directory in the shell prompt is abbreviated and sometimes not. For example, /usr/bin will be displayed as bin$ , or /folder1/folder2 as folder2$ ; in other cases I have seen /folder1/folder2 displayed in full as /folder1/folder2$ . I am using default terminal settings (on a Fedora 22 virtual machine for learning, but I also notice this in several tutorial videos using different distros). Is there any rule?
Another way to shorten the path, if you use \w is with the PROMPT_DIRTRIM shell variable. A demo: jackman@b7q9bw1:/usr/local/share/doc $ echo "$PS1"\u@\h:\w \$ jackman@b7q9bw1:/usr/local/share/doc $ pwd/usr/local/share/docjackman@b7q9bw1:/usr/local/share/doc $ PROMPT_DIRTRIM=2jackman@b7q9bw1:.../share/doc $ pwd/usr/local/share/docjackman@b7q9bw1:.../share/doc $
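To make that permanent, a sketch of the relevant ~/.bashrc lines (assuming bash 4 or later, where PROMPT_DIRTRIM is honoured; it only matters when the prompt uses the \w escape):
# keep only the last two path components in the prompt
PROMPT_DIRTRIM=2
PS1='\u@\h:\w \$ '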
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/245107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144104/" ] }
245,129
The ext4 file system has a feature called has_journal . In the dumpe2fs output, we can see something like this: # dumpe2fs /dev/sda2 | grep -i journalJournal inode: 8Journal backup: inode blocksJournal features: journal_incompat_revokeJournal size: 32MJournal length: 8192Journal sequence: 0x00000662Journal start: 1 So the journal is 32M in size, and it starts at the beginning of the file system. I know that the size of the journal depends on the size of the partition. I don't remember the limits right now, but it's not that big value. So what kind of data is stored in the journal? I've read once that if you want to secure remove a file from your disk (via shred ), you have to take into account the file system's journal because it can store some information about the removed file. Is there a way to check what is in the journal? Are there any tools that can show the information?
The exact contents of the journal depend on how you have configured your ext4 file system. The official ext4 documentation says: There are 3 different data modes: writeback mode In data=writeback mode, ext4 does not journal data at all. This mode provides a similar level of journaling as that of XFS, JFS, and ReiserFS in its default mode - metadata journaling. A crash+recovery can cause incorrect data to appear in files which were written shortly before the crash. This mode will typically provide the best ext4 performance. ordered mode In data=ordered mode, ext4 only officially journals metadata, but it logically groups metadata information related to data changes with the data blocks into a single unit called a transaction. When it's time to write the new metadata out to disk, the associated data blocks are written first. In general, this mode performs slightly slower than writeback but significantly faster than journal mode. journal mode data=journal mode provides full data and metadata journaling. All new data is written to the journal first, and then to its final location. In the event of a crash, the journal can be replayed, bringing both data and metadata into a consistent state. This mode is the slowest except when data needs to be read from and written to disk at the same time where it outperforms all others modes. Enabling this mode will disable delayed allocation and O_DIRECT support. So you can have both metadata (e.g. file name) and actual data (i.e. file contents) residing in your journal file. If you're interested in details on the format in which transaction data is actually stored in the journal, you should refer to the respective header file: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/linux/jbd2.h There's also a wiki page which explains how these structures are laid out on the disk: https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout There's a packet in debian called sleuthkit , in which you have some tools like jls or jcat . The jls tool can list all journal entries of an ext4 file system, for example: # jls grafi.imgJBlk Description0: Superblock (seq: 0)sb version: 4sb version: 4sb feature_compat flags 0x00000000sb feature_incompat flags 0x00000000sb feature_ro_incompat flags 0x000000001: Allocated Descriptor Block (seq: 2)2: Allocated FS Block 1613: Allocated Commit Block (seq: 2, sec: 1448889478.49360128)4: Allocated Descriptor Block (seq: 3)5: Allocated FS Block 1616: Allocated Commit Block (seq: 3, sec: 1448889494.3355841024)7: Allocated Descriptor Block (seq: 4)8: Allocated FS Block 1459: Allocated FS Block 110: Allocated FS Block 16111: Allocated FS Block 12912: Allocated FS Block 835913: Allocated FS Block 835314: Allocated FS Block 015: Allocated FS Block 13016: Allocated Commit Block (seq: 4, sec: 1448889528.3540304896)... And of course, there's more entries depending on the size of the journal. In this case there was about 16382, most of which were empty. If you want to do something with the log, for instance, recover some file, you have to use jcat in order to extract the i-node block: jcat grafi.img 8 10 > blok-161 And inspect the single i-node. The block is 4096 bytes in size, and covers 16 i-nodes, each of which is 256 bytes long. Anyways in that way you can get the first block of an extent, the number of blocks in the extent, how many extents were used to describe that particular file, its size and other stuff like this. 
That is all you need to recover the file from the disk, based only on the i-node entry that you got from the journal. There's also debugfs in the e2fsprogs package; it has a logdump command, which is similar to jls .
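As a rough sketch of poking at a journal with those tools — device names are placeholders, so point them at an image copy or an unmounted/read-only filesystem, and run as root:
jls /dev/sdX1 | less                          # list all journal entries (sleuthkit)
jls /dev/sdX1 | awk '$3 == "FS" {print $5}'   # just the FS block numbers, per the listing format above
jcat /dev/sdX1 8 10 > block-161               # extract journal block 10 (an i-node block in the example)
debugfs -R 'logdump -a' /dev/sdX1 | less      # roughly the same information via e2fsprogs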
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/245129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52763/" ] }
245,145
I've created a simple C program like so: int main(int argc, char *argv[]) { if (argc != 5) { fputs("Not enough arguments!\n", stderr); exit(EXIT_FAILURE); } And I have my PATH modified in etc/bash.bashrc like so: PATH=.:$PATH I've saved this program as set.c and am compiling it with gcc -o set set.c in the folder ~/Programming/so However, when I call set 2 3 nothing happens. There is no text that appears. Calling ./set 2 3 gives the expected result I've never had a problem with PATH before and which set returns ./set . So it seems the PATH is the correct one. What's is happening?
Instead of using which , which doesn't work when you need it most , use type to determine what will run when you type a command: $ which set./set$ type setset is a shell builtin The shell always looks for builtins before searching the $PATH , so setting $PATH doesn't help here. It would be best to rename your executable to something else, but if your assignment requires the program to be named set , you can use a shell function: $ function set { ./set; }$ type setset is a functionset (){ ./set} (That works in bash , but other shells like ksh may not allow it. See mikeserv's answer for a more portable solution.) Now typing set will run the function named "set", which executes ./set . GNU bash looks for functions before looking for builtins, and it looks for builtins before searching the $PATH . The section named "COMMAND EXECUTION" in the bash man page gives more information on this. See also the documentation on builtin and command : help builtin and help command .
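A few ways to see and work around the lookup order, sketched here on the assumption that the compiled binary is still called set and that . is still at the front of $PATH:
type -a set        # lists the builtin (and the function, if defined) before ./set
./set 2 3 4 5      # an explicit path bypasses builtins and functions entirely
env set 2 3 4 5    # env only does a $PATH search, so it runs ./set, never the builtin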
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/245145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137641/" ] }
245,208
what is the command to modify metric of an existing route entry in linux?I am able to change gateway of an existing entry using "ip route change" command as below but not able to change metrics. Is there any other command for that? route –n40.2.2.0 30.1.3.2 255.255.255.0 eth2ip route change 40.2.2.0/24 via 30.1.2.2route -n40.2.2.0 30.1.2.2 255.255.255.0 eth1
As noted in a comment to the question, quoting a message on the linux-net mailing list: "The metric/priority cannot be changed [...] This is a limitation of the current protocol [...]." The only way is to delete the route and add a new one. This is done using the route command, example: sudo route add -net default gw 10.10.0.1 netmask 0.0.0.0 dev wlan0 metric 1 Debian manpage for the route command
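Sketched with the addresses from the question (adjust to your own network, run as root, and be careful if you are doing this over the same route you are logged in through), the delete-and-re-add dance looks roughly like this with iproute2:
ip route del 40.2.2.0/24 via 30.1.2.2
ip route add 40.2.2.0/24 via 30.1.2.2 dev eth1 metric 50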
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/245208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121891/" ] }
245,229
I would like to precede each line with a number saying how many slashes the line has got. awk '{ l=$0; gsub("[^/]","",l); print length(l),l }' This doesn't work bacause l=$0 seems to assign by reference.How do I dup the string? Is there a better way to do this with standard UNIX tools?I essentially want to sort a list of filepaths by depth (slash count).
No, awk always does assignment by value, not by reference. The RHS of a variable assignment is an expression , and an expression in awk always returns a value. To duplicate a variable, just assign its value to a new variable; you can then operate on the new variable without affecting the original. In: $ echo 1 | awk '{l=$0; sub("1","2",l); print l, $0}' 2 1 only the value of l was modified; the value of $0 wasn't changed. With your requirement in the question, simply do: awk -F '/' '{print NF-1, $0}' <file You don't need to do any parsing work; let awk do it all for you before you enter the script body. You only need to extract the information.
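For the underlying goal — sorting a list of file paths by depth — one rough sketch, assuming the paths live one per line in a file called paths.txt (a hypothetical name) and contain no newlines:
# prefix each line with its slash count, sort numerically, then strip the count again
awk -F / '{print NF-1, $0}' paths.txt | sort -n | cut -d ' ' -f 2-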
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/245229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
245,293
I am trying to compare two command output (no files) vimdiff "$(tail /tmp/cachain.pem)" "$(tail /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem)" I tried playing with redirection, pipe, and vim - -c but I must be missing something. Can anyone help please ?
You are confusing $(…) with <(…) . You used the former, which passes the output as arguments to vimdiff . For example, if the last line of /path/to/foo contains bar bar bar , then the following command echo $(tail -1 /path/to/foo) is equivalent to echo bar bar bar Instead, you need to use <(…) . This is called process substitution , and passes the output as a pseudo-file to the vimdiff command. Hence, use the following. vimdiff <(tail /tmp/cachain.pem) <(tail /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem) This works in bash and zsh , but apparently there is no way to do process substitution in tcsh .
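The same idea works with any command whose output you want to compare, not just tail, and with non-interactive tools too. A couple of variations, assuming bash or zsh and reusing the .pem paths from the question:
diff <(tail /tmp/cachain.pem) <(tail /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem)      # plain diff instead of vimdiff
vimdiff <(sort /tmp/cachain.pem) <(sort /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem)   # any command's output can go through <(…)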
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/245293", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74152/" ] }
245,331
Using extended Unicode characters is (no-doubt) useful for many users. Simpler shells (ash (busybox), dash) and ksh do fail with: tést() { echo 34; }tést But bash , mksh , lksh , and zsh seem to allow it. I am aware that POSIX valid function names use this definition of Names . That means this regex: [a-zA-Z_][a-zA-Z0-9_]* However, in the first link it is also said: An implementation may allow other characters in a function name as an extension. The questions are: Is this accepted and documented? Where? For which shells (if any)? Related questions: Its possible use special characters in a shell function name? I am not interested in using meta-characters (>) in function names. Upstart and bash function names containing “-” I do not believe that an operator (subtraction "-") should be part of a name.
Since the POSIX documentation allows it as an extension, there's nothing preventing an implementation from that behavior. A simple check (run in zsh ): $ for shell in /bin/*sh 'busybox sh'; do printf '[%s]\n' $shell $=shell -c 'á() { :; }' done[/bin/ash]/bin/ash: 1: Syntax error: Bad function name[/bin/bash][/bin/dash]/bin/dash: 1: Syntax error: Bad function name[/bin/ksh][/bin/lksh][/bin/mksh][/bin/pdksh][/bin/posh]/bin/posh: á: invalid function name[/bin/yash][/bin/zsh][busybox sh]sh: syntax error: bad function name shows that bash , zsh , yash , ksh93 (which ksh is linked to on my system), pdksh and its derivatives allow multi-byte characters in function names. yash was designed to support multibyte characters from the beginning, so it's no surprise it worked. The other documentation you can refer to is ksh93's: A blank is a tab or a space. An identifier is a sequence of letters, digits, or underscores starting with a letter or underscore. Identifiers are used as components of variable names. A vname is a sequence of one or more identifiers separated by a . and optionally preceded by a .. Vnames are used as function and variable names. A word is a sequence of characters from the character set defined by the current locale , excluding non-quoted metacharacters. So setting the C locale: $ export LC_ALL=C$ á() { echo 1; }ksh: á: invalid function name makes it fail.
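If you want to reproduce the check in a single shell rather than across all of them, a small sketch — it assumes a UTF-8 locale and one of the permissive shells from the table above (e.g. bash, zsh, yash or ksh93):
locale charmap          # should print UTF-8
tést() { echo 34; }     # accepted by bash/zsh/yash/ksh93, rejected by dash/posh/busybox sh
tést
unset -f tést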
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/245331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
245,339
We have server under Ubuntu 12.04 with Apache HTTP 2.2 installed there. Kernel 3.2.0. Faced with weird behavior during dowloading some files. Virtualhost config: <Directory /var/www/name/*> ... AllowOverride AuthConfig # add these accordingly for the MIME types to be compressed AddOutputFilterByType DEFLATE text/plain AddOutputFilterByType DEFLATE text/html AddOutputFilterByType DEFLATE text/xml AddOutputFilterByType DEFLATE text/css AddOutputFilterByType DEFLATE application/xml AddOutputFilterByType DEFLATE application/xhtml+xml AddOutputFilterByType DEFLATE application/rss+xml AddOutputFilterByType DEFLATE application/javascript AddOutputFilterByType DEFLATE application/x-javascript <Files *.gz> SetOutputFilter DEFLATE Header set Content-Encoding: gzip Header append Content-Encoding: deflate </Files></Directory> Problem is - sometimes for unknown reasons it's impossible to download some (!) files: when file download 99% - speed decreased to 0 and download stops. Nothing unusual in logs - but I have found one oddity in tcpdump (after download speed == 0) results. For example - during download attempt of badfile.gz : 10:36:37.611369 IP (tos 0x0, ttl 64, id 7954, offset 0, flags [DF], proto TCP (6), length 1420) 37.**.**.176.80 > 10.**.**.25.55981: Flags [.], cksum 0x00a9 (correct), seq 228803:230171, ack 197, win 243, options [nop,nop,TS val 2097666946 ecr 811530774], length 136810:36:37.611396 IP (tos 0x0, ttl 64, id 64391, offset 0, flags [DF], proto TCP (6), length 52, bad cksum 0 (->933a)!) 10.**.**.25.55981 > 37.**.**.80: Flags [.], cksum 0xac28 (incorrect -> 0xf8fc), seq 197, ack 230171, win 4053, options [nop,nop,TS val 811530824 ecr 2097666946], length 0 There is Flags [.] - so, it's hang on on the data transmission - there is no Finalize flags (afaik). Another tcpdump example during download another file goodfile.gz (from same Apache's directory on server side): 10:39:21.216118 IP (tos 0x0, ttl 64, id 18169, offset 0, flags [DF], proto TCP (6), length 52, bad cksum 0 (->47c9)!) 10.**.**.25.55981 > 37.**.**.80: Flags [F.], cksum 0xac28 (incorrect -> 0x83bb), seq 0, ack 1, win 4096, options [nop,nop,TS val 811691867 ecr 2097666946], length 0 There few files with different extension/size/grants etc - but problem come up only with few of them. So - problem appear sometimes, without any changes on server side. Sometimes badfile.gz can be dowloaded without problems - sometimes (usually) it hang up. Same during downloading with browsers - Chrome reports " Failed - Network error ", Firefox - just says " Estimate unknown " during download. Please, let me know if I can add more info. A few examples. badfile first: $ wget http://static.content.domain.net/assets/json/en-GB/content3.json.gz...HTTP request sent, awaiting response... 200 OKLength: 229874 (224K) [application/x-gzip]Saving to: 'content3.json.gz.3'content3.json.gz.3 99%[==============...=====> ] 224.42K --.-KB/s eta 0s And goodfile : $ wget http://static.content.domain.net/assets/json/en-GB/24k.tar.gz...HTTP request sent, awaiting response... 200 OKLength: 24576 (24K) [application/x-gzip]Saving to: '24k.tar.gz.1'24k.tar.gz.1 100%[=========...======>] 24.00K --.-KB/s in 0.05s 2015-11-25 10:38:40 (440 KB/s) - '24k.tar.gz.1' saved [24576/24576] P.S. We have complicated enough network configuration, including VPN tunnels between offices/datacenters - may be cause somewhere here too. 
P.P.S. We also have a very old system there: # /usr/lib/update-notifier/apt-check --human-readable 205 packages can be updated. 154 updates are security updates. But it cannot be updated now :-)
Since POSIX documentation allow it as an extension, there's nothing prevent implementation from that behavior. A simple check (ran in zsh ): $ for shell in /bin/*sh 'busybox sh'; do printf '[%s]\n' $shell $=shell -c 'á() { :; }' done[/bin/ash]/bin/ash: 1: Syntax error: Bad function name[/bin/bash][/bin/dash]/bin/dash: 1: Syntax error: Bad function name[/bin/ksh][/bin/lksh][/bin/mksh][/bin/pdksh][/bin/posh]/bin/posh: á: invalid function name[/bin/yash][/bin/zsh][busybox sh]sh: syntax error: bad function name show that bash , zsh , yash , ksh93 (which ksh linked to in my system), pdksh and its derivation allow multi-bytes characters as function name. yash is designed to support multibyte characters from the beginning, so there's no surprise it worked. The other documentation you can refer is ksh93 : A blank is a tab or a space. An identifier is a sequence of letters, digits, or underscores starting with a letter or underscore. Identifiers are used as components of variable names. A vname is a sequence of one or more identifiers separated by a . and optionally preceded by a .. Vnames are used as function and variable names. A word is a sequence of characters from the character set defined by the current locale , excluding non-quoted metacharacters. So setting to C locale: $ export LC_ALL=C$ á() { echo 1; }ksh: á: invalid function name make it failed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/245339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46938/" ] }
245,362
I want to strip all numeric characters from a String variable, eg from: VARIABLE=qwe123rty567 to: echo $VARIABLE> qwerty I've searched many posts but they either use sed to output to file/file names, or output to echo. I was not able to get it working with because of the white space: VARIABLE=$VARIABLE | sed 's/[0-9]*//g'
With bash : $ printf '%s\n' "${VARIABLE//[[:digit:]]/}" qwerty [:digit:] can contain characters other than 0 to 9, depending on your locale. If you only want to remove 0 to 9, use the C locale instead.
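Two variations on the same idea, sketched for the original assign-to-a-variable use case; the first sidesteps the locale question by spelling out 0-9, the second is a plain POSIX-sh fallback:
VARIABLE=qwe123rty567
STRIPPED=${VARIABLE//[0-9]/}                            # bash/ksh93/zsh, locale-independent
STRIPPED=$(printf '%s' "$VARIABLE" | sed 's/[0-9]//g')  # works in any POSIX shell
echo "$STRIPPED"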
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/245362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62732/" ] }
245,378
I have written a number of debuggers that all can colorize source code text shown in a terminal session. They all understand that some terminals have a dark background and some have a light background and that of course the colors need to be different depending on the terminal scheme. It is annoying to have to set to the other scheme when your terminal doesn't match the default background, so I'd like to find a way to figure this out automatically. Suggestions? (They all support options --highlight={light|dark|plain} ) One simple mechanism would be to key off an environment variable. For my shell profiles I've been using DARK_BACKGROUND_COLOR , but If there is already some sort of default name like there is for PAGER , EDITOR , SHELL , HOME , etc. I'd like to use that. Is there such a environment name convention? Other suggestions? Edit: Based on the accepted answer and discussion, I have switched from using DARK_BACKGROUND_COLOR to COLORFGBG . Value 15;0 is for a dark background (technically white on black) and 0;15 (technically black on white) is for a light background.
There is no such convention. Furthermore, an environment variable is not a good way to report information about a terminal, because the value can get stale if a program starts another terminal emulator which doesn't update this variable, or if a program connects to multiple terminals. (The TERM environment variable doesn't suffer from these problems because it's universal: every terminal emulator sets it and every program is aware of it. The problems only arise when a variable is partially supported.) The right way to obtain the information would be to query the terminal. In the unix world, this is done by writing an escape sequence which the terminal interprets as “send back some data that answers my query”. As Thomas Dickey explains , xterm has such a control sequence , OSC 11 ; ? BEL (set text parameters, parameter 11 = text background color, value ? means query instead of set). Write \e]11;?\a to the terminal (where \e is the escape character ( ^[ ) and \a is the bell character ( ^G )), and xterm replies with a string like \e]11;rgb:0000/0000/0000\a (that's a black background). Unfortunately, few other terminal emulators support this escape sequence. Even in xterm, this feature might be disabled (through the XTerm.VT100.allowColorOps resource ) because it's a security risk: writing to a terminal can result in output to that terminal that's partially controlled by the text being written. Rxvt sets the environment variable COLORFGBG to a string like 7;0 where 7 is the foreground color (7 is light gray) and 0 is the background color (black). Konsole also supports this. Emacs attempts to detect whether the terminal has a light or dark background, in order to set the background-mode terminal parameter . As of Emacs 24.5, there are three methods to set the background mode automatically: On xterm , Emacs uses the OSC 11 escape sequence as explained above. On rxvt , Emacs uses the COLORFGBG environment variable as explained above. On DOS and Windows consoles, Emacs uses OS-specific interfaces to obtain information about the terminal; these interfaces play the same role as the OSC 11 escape sequence. This leaves out many terminals, however there is some progress: the vte library, which powers many terminal emulators such as gnome-terminal, guake, terminator, xfce4-terminal, …, implements OSC 11 reporting like xterm since version 0.35.2 . You can detect VTE-based terminals by checking the environment variable VTE_VERSION ; the value is a number, you want 3502 and above. If you want to standardize on a way report the information to applications, then support on the terminal side might not matter: after all you know whether you prefer light or dark backgrounds. Then you might as well align with rxvt and use COLORFGBG , since it's the only interface that somebody is already using and that you can adopt independently of any terminal support. The COLORFGBG interface is limited: it was designed for a world with only 16 colors, and everybody agreeing on a mapping from color numbers to colors (at least approximately, exact hues differ). Konsole supports more than 16 colors, but it uses an approximation when reporting COLORFGBG : it approximates the foreground and background colors by one of the 16 standard colors. If all you care about is light vs dark, that's not a problem, just set COLORFGBG to 15;0 for light text on a dark background or 0;15 for dark text on a light background.
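For completeness, a rough sketch of asking the terminal itself via OSC 11 from a shell script; this only gets an answer on xterm and newer VTE-based terminals, times out silently elsewhere, and the stty/dd details may need per-terminal tweaking:
#!/bin/sh
old=$(stty -g)
stty raw -echo min 0 time 5            # allow ~0.5s for a reply
printf '\033]11;?\007' > /dev/tty
reply=$(dd bs=1 count=64 2>/dev/null < /dev/tty)
stty "$old"
case $reply in
  *'rgb:'*) printf 'background reported as: %s\n' "${reply#*;}" ;;
  *)        printf 'no reply; COLORFGBG is: %s\n' "${COLORFGBG:-unset}" ;;
esac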
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/245378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39695/" ] }
245,403
I use Cinnamon workspace switcher. Everything is okay, but the visual effect is too fast. Is it possible to set the duration of workspace switching effect? Linux Mint 17.2
I think the workspace-switch animation is annoying. So, in Cinnamon 3.0.7, I back up /usr/share/cinnamon/js/ui/windowManager.js and edit const WINDOW_ANIMATION_TIME = 0.25; to const WINDOW_ANIMATION_TIME = 0; then restart Cinnamon with Alt+F2 , input r and Enter . And you can set a bigger number to make the animation slower.
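If you'd rather slow the effect down than remove it, a sketch of the same edit done non-interactively; the path and constant are the ones quoted above for Cinnamon 3.0.7, so check that they exist in the Cinnamon version Mint 17.2 ships before running it:
sudo cp /usr/share/cinnamon/js/ui/windowManager.js /usr/share/cinnamon/js/ui/windowManager.js.bak
sudo sed -i 's/WINDOW_ANIMATION_TIME = 0.25/WINDOW_ANIMATION_TIME = 0.5/' /usr/share/cinnamon/js/ui/windowManager.js
# then Alt+F2, type r, press Enter to restart Cinnamon and pick up the change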
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/245403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80722/" ] }