source_id | question | response | metadata
---|---|---|---|
318,332 | In Linux, using a Bash terminal, I can do hostname -d to display the name of the DNS domain, and hostname -i to display the network address(es) of the hostname. How can I retrieve the same information--preferably using a single command (with an option, if needed), and without having to elevate privileges--from a Bash terminal in Mac OS X? For reference, here's the Bash version I'm using in Mac OS X: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin15). | For hostname -d, use hostname -f:

hostname -f | sed -e 's/^[^.]*\.//'

For IP addresses, use ifconfig -a (look for the inet data). Your machine may have only one network device, en0, so you could do just

ifconfig en0 | awk '/inet / {print $2; }'

If you are interested in all of the network devices, keep in mind that ifconfig -l lists the devices. This lists the devices and their corresponding addresses:

#!/bin/sh
for name in $(ifconfig -l)
do
    ifconfig $name | awk -v name=$name '/inet / {printf "%s: %s\n", name, $2; }'
done

Further reading: How do I find my IP Address from the command line? hostname -- set or print name of current host system | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117468/"
]
} |
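A minimal sketch that combines both lookups from the answer above into one script; it assumes, as the answer does, that hostname -f returns a fully qualified name on the Mac in question:

```sh
#!/bin/sh
# Print the DNS domain, then every device's inet address (macOS-style ifconfig).
domain=$(hostname -f | sed -e 's/^[^.]*\.//')
echo "domain: $domain"
for dev in $(ifconfig -l); do
    ifconfig "$dev" | awk -v dev="$dev" '/inet / {printf "%s: %s\n", dev, $2}'
done
```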
318,369 | From my understanding, jobs are pipelines started from a certain shell and you can manage these jobs ( fg , bg , Ctrl-Z) from within this shell. A job can consist of multiple processes/commands. My question is: what happens to these jobs when the original, containing shell exits? Suppose huponexit is not set, so background processes keep running after the shell exits. Suppose I have done:

$ run.sh | grep 'abc' &
[1] job_id

Then I exit this shell. I'll enter a new shell and run jobs and see nothing, obviously. But I can do ps aux | grep run.sh and see this process running, and I can also do ps aux | grep grep and see the process for grep 'abc' running too. Is there a way to just get the job ID for the full pipeline so that I can kill it in one go, or do I have to kill all the processes separately from another shell once I have exited the original shell? (I have tried the latter and it works, but it seems like a hassle to keep track of all the processes.) | When the shell exits, it might send the HUP signal to background jobs, and this might cause them to exit. The SIGHUP signal is only sent if the shell itself receives a SIGHUP, i.e. only if the terminal goes away (e.g. because the terminal emulator process dies) and not if you exit the shell normally (with the exit builtin or by typing Ctrl + D ). See In which cases is SIGHUP not sent to a job when you log out? and Is there any UNIX variant on which a child process dies with its parent? for more details. In bash, you can set the huponexit option to also send SIGHUP to background jobs on a normal exit. In ksh, bash and zsh, calling disown on a job removes it from the list of jobs to send SIGHUP to. A process that receives SIGHUP may ignore or catch the signal, and then it won't die. Using nohup when you run a program makes it immune to SIGHUP. If the process isn't killed due to a possible SIGHUP then it remains behind. There's nothing left to relate it to job numbers in the shell. The process may still die if it tries to access the terminal but the terminal no longer exists; that depends on how the program reacts to a non-existent terminal. If the job contains multiple processes (e.g. a pipeline), then all these processes are in one process group. Process groups were invented precisely to capture the notion of a shell job that is made up of multiple related processes. You can see processes grouped by process group by displaying their process group ID (PGID — normally the process ID of the first process in the group), e.g. with ps l under Linux or something like ps -o pid,pgid,tty,etime,comm portably. You can kill all the processes in a group by passing a negative argument to kill . For example, if you've determined that the PGID for the pipeline you want to kill is 1234, then you can kill it with kill -TERM -1234 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167597/"
]
} |
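A short sketch of the kill-by-process-group approach from the answer, using the question's run.sh pipeline (the PGID value is illustrative):

```sh
# Locate the orphaned pipeline and note its PGID column.
ps -eo pid,pgid,comm | grep '[r]un.sh'
# Suppose the PGID turned out to be 1234: a negative argument kills the group.
kill -TERM -1234
```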
318,382 | I have a luks-encrypted partition that was protected by a passphrase and a key file. The key file was for routine access and the passphrase was in a sealed envelope for emergencies. Many months went by and I accidentally shredded the key file, so I recovered by using the passphrase from the envelope. Now I have two active key slots, but I don't know which contains the now-useless key-file passphrase and which has my emergency passphrase in it. Obviously if I remove the wrong one I'll lose all the data on the drive.

# cryptsetup luksDump /dev/sda2
LUKS header information for /dev/sda2
Version:        1
Cipher name:    aes
Cipher mode:    xts-plain64
Hash spec:      sha256
Payload offset: 4096
MK bits:        256
MK digest:      xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
MK salt:        xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
                xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
MK iterations:  371000
UUID:           28c39f66-dcc3-4488-bd54-11ba239f7e68

Key Slot 0: ENABLED
        Iterations:          2968115
        Salt:                xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
                             xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
        Key material offset: 8
        AF stripes:          4000
Key Slot 1: ENABLED
        Iterations:          2968115
        Salt:                xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
                             xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx xx
        Key material offset: 264
        AF stripes:          4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED | As you've discovered, you can use cryptsetup luksDump to see which key slots have keys. You can check the passphrase for a particular slot with

cryptsetup luksOpen --test-passphrase --key-slot 0 /dev/sda2 && echo correct

This succeeds if you enter the correct passphrase for key slot 0 and fails otherwise (including if the passphrase is correct for some other key slot). If you've forgotten one of the passphrases then you can only find which slot it's in by elimination, and if you've forgotten two of the passphrases then there's no way to tell which is which (otherwise the passphrase hash would be broken). To remove the passphrase you've forgotten, you can safely run

cryptsetup luksKillSlot /dev/sda2 0

and enter the passphrase you remember. To wipe a key slot, cryptsetup requires the passphrase for a different key slot, at least when it isn't running in batch mode (i.e. no --batch-mode , --key-file=- or equivalent option). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/318382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17749/"
]
} |
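A small sketch that runs the answer's test against both enabled slots from the question's luksDump output, so each passphrase can be labelled:

```sh
for slot in 0 1; do
    echo "Testing key slot $slot (enter one of the passphrases):"
    if cryptsetup luksOpen --test-passphrase --key-slot "$slot" /dev/sda2; then
        echo "slot $slot: this passphrase lives here"
    else
        echo "slot $slot: no match"
    fi
done
```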
318,385 | I'm trying to generate a gpg key:

$ gpg --full-gen-key

but eventually I get an error:

gpg: agent_genkey failed: No such file or directory
Key generation failed: No such file or directory

I'm on Arch Linux.

$ gpg --version
gpg (GnuPG) 2.1.15
libgcrypt 1.7.3
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Home: /home/me123/.gnupg
.............

The directory /home/me123/.gnupg exists | Did you delete the /home/me123/.gnupg directory and then it was recreated by gpg? If so, that's likely what is confusing the agent. Either restart the agent ( gpgconf --kill gpg-agent ) or, more drastically, reboot your machine and try again. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/318385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198038/"
]
} |
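A hedged sketch of the restart-and-retry cycle suggested by the answer (gpg-connect-agent ships with GnuPG 2.x and starts a fresh agent on demand):

```sh
gpgconf --kill gpg-agent    # stop the confused agent
gpg-connect-agent /bye      # spawns a fresh agent
gpg --full-gen-key          # retry key generation
```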
318,386 | Currently, I have my server output its uptime to an HTML page using:

TIME=$(uptime -p)
echo ""${TIME}"!" >> /var/www/html/index.new

which generates an output of: up 1 day, 1 hour, 2 minutes! I would also like (for the sake of curiosity) to be able to display my system's record uptime, though I am uncertain as to the best way to log this and display it back in the (uptime -p) [day, hr, min] format. Is there a pre-existing tool which can do this? Or would I need to log uptime to a file and pull out the highest value with grep or something similar? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/318386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196651/"
]
} |
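A hedged sketch of the fallback the question itself proposes (log the uptime and keep the highest value seen); the record-file path and the cron scheduling are assumptions, and /proc/uptime is Linux-specific:

```sh
#!/bin/sh
# Run periodically (e.g. from cron); stores the longest uptime seen, in seconds.
record=/var/local/uptime.record
now=$(cut -d' ' -f1 /proc/uptime | cut -d. -f1)
best=$(cat "$record" 2>/dev/null || echo 0)
[ "$now" -gt "$best" ] && echo "$now" > "$record"
```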
318,424 | First to cut off trivial but inapplicable answers: I can use neither the find + xargs trick nor its variants (like find with -exec ) because I need to use a few such expressions per call. I will get back to this at the end. Now for a better example let's consider:

$ find -L some/dir -name \*.abc | sort
some/dir/1.abc
some/dir/2.abc
some/dir/a space.abc

How do I pass those as arguments to program ? Just doing it doesn't do the trick:

$ ./program $(find -L some/dir -name \*.abc | sort)

fails since program gets the following arguments:

[0]: ./program
[1]: some/dir/1.abc
[2]: some/dir/2.abc
[3]: some/dir/a
[4]: space.abc

As can be seen, the path with a space was split and program considers it to be two different arguments. Quote until it works: it seems novice users such as myself, when faced with such problems, tend to randomly add quotes until it finally works - only here it doesn't seem to help…

$ ./program "$(find -L some/dir -name \*.abc | sort)"
[0]: ./program
[1]: some/dir/1.abc
some/dir/2.abc
some/dir/a space.abc

Because the quotes prevent word-splitting, all the files are passed as a single argument. Quoting individual paths - a promising approach:

$ ./program $(find -L some/dir -name \*.abc -printf '"%p"\n' | sort)
[1]: "some/dir/1.abc"
[2]: "some/dir/2.abc"
[3]: "some/dir/a
[4]: space.abc"

The quotes are there, sure. But they are no longer interpreted. They are just part of the strings. So not only did they not prevent word splitting, but they also got into the arguments! Change IFS: then I tried playing around with IFS . I would prefer find with -print0 and sort with -z anyway - so that they will have no issues with "weird paths" themselves. So why not force word splitting on the null character and have it all?

$ ./program $(IFS=$'\0' find -L some/dir -name \*.abc -print0 | sort -z)
[0]: ./program
[1]: some/dir/1.abc
some/dir/2.abc
some/dir/a
[2]: space.abc

So it still splits on space and does not split on the null. I tried to place the IFS assignment both in $(…) (as shown above) and before ./program . Also I tried other syntax like \0 , \x0 , \x00 , both quoted with ' and " as well as with and without the $ . None of those seemed to make any difference… And here I'm out of ideas. I tried a few more things but all seemed to run into the same problems as listed. What else could I do? Is it doable at all? Sure, I could make the program accept the patterns and do the searches itself. But it is a lot of double work while fixing it to a specific syntax. (What about providing files by a grep, for example?) Also I could make the program accept a file with a list of paths. Then I can easily dump the find expression to some temp file and provide the path to that file only. This could be supported alongside direct paths, so that if a user has just a simple path it can be provided without an intermediate file. But this doesn't seem nice - one needs to create extra files and take care of them, not to mention the extra implementation required. (On the plus side, however, it could be a rescue for cases in which the number of files as arguments starts to cause issues with command line length…) At the end, let me remind you again that find + xargs (and alike) tricks will not work in my case. For description simplicity I'm showing only one argument.
But my true case looks more like this:

$ ABC_FILES=$(find -L some/dir -name \*.abc | sort)
$ XYZ_FILES=$(find -L other/dir -name \*.xyz | sort)
$ ./program --abc-files $ABC_FILES --xyz-files $XYZ_FILES

So doing an xargs from one search still leaves me with how to deal with the other one… | Use arrays. If you don't need to handle the possibility of newlines in your filenames, then you could get away with

mapfile -t ABC_FILES < <(find -L some/dir -name \*.abc | sort)
mapfile -t XYZ_FILES < <(find -L other/dir -name \*.xyz | sort)

then

./program --abc-files "${ABC_FILES[@]}" --xyz-files "${XYZ_FILES[@]}"

If you do need to handle newlines within filenames, and have bash >= 4.4, you can use -print0 and -d '' to null-terminate the names during array construction:

mapfile -td '' ABC_FILES < <(find -L some/dir -name \*.abc -print0 | sort -z)

(and similarly for the XYZ_FILES ). If you don't have the newer bash, then you could use a null-terminated read loop to append filenames to the arrays, e.g.

ABC_FILES=()
while IFS= read -rd '' f; do ABC_FILES+=( "$f" ); done < <(find -L some/dir -name \*.abc -print0 | sort -z) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101845/"
]
} |
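Putting the answer's pieces together into one runnable sketch (bash >= 4.4; ./program and the directories are the question's placeholders):

```sh
#!/usr/bin/env bash
# Collect both file sets NUL-safely, then pass each set as separate arguments.
mapfile -td '' ABC_FILES < <(find -L some/dir  -name '*.abc' -print0 | sort -z)
mapfile -td '' XYZ_FILES < <(find -L other/dir -name '*.xyz' -print0 | sort -z)
./program --abc-files "${ABC_FILES[@]}" --xyz-files "${XYZ_FILES[@]}"
```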
318,460 | In a POSIX sh, or in the Bourne shell (as in Solaris 10's /bin/sh ), is it possible to have something like:

a='some var with spaces and a special space'
printf "%s\n" $a

and, with the default IFS , get:

some
var
with
spaces
and
a
special space

That is, protect the space between special and space by some combination of quoting or escaping? The number of words in a isn't known beforehand, or I'd try something like:

a='some var with spaces and a special\ space'
printf "%s\n" "$a" | while read field1 field2 ...

The context is this bug reported in Cassandra, where the OP tried to set an environment variable specifying options for the JVM:

export JVM_EXTRA_OPTS='-XX:OnOutOfMemoryError="echo oh_no"'

In the script executing Cassandra, which has to support POSIX sh and Solaris sh:

JVM_OPTS="$JVM_OPTS $JVM_EXTRA_OPTS"
#...
exec $NUMACTL "$JAVA" $JVM_OPTS $cassandra_parms -cp "$CLASSPATH" $props "$class"

IMO the only way out here is to use a script wrapping the echo oh_no command. Is there another way? | Not really. One solution is to reserve a character as the field separator. Obviously it will not be possible to include that character, whatever it is, in an option. Tab and newline are obvious candidates, if the source language makes it easy to insert them. I would avoid multibyte characters if you want portability (e.g. dash and BusyBox don't support multibyte characters). If you rely on IFS splitting, don't forget to turn off wildcard expansion with set -f .

tab=$(printf '\t')
IFS=$tab
set -f
exec java $JVM_EXTRA_OPTS …

Another approach is to introduce a quoting syntax. A very common quoting syntax is that a backslash protects the next character. The downside of using backslashes is that so many different tools use them as quoting characters that it can sometimes be difficult to figure out how many backslashes you need.

set java
eval 'set -- "$@"' $(printf '%s\n' "$JVM_EXTRA_OPTS" | sed -e 's/[^ ]/\\&/g' -e 's/\\\\/\\/g') …
exec "$@" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70524/"
]
} |
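A tiny POSIX-sh demonstration of the tab-separator idea; the option string is adapted from the question's Cassandra example, with a literal tab separating the two options so the space inside the first one survives (the second option is an illustrative addition):

```sh
#!/bin/sh
tab=$(printf '\t')
JVM_EXTRA_OPTS="-XX:OnOutOfMemoryError=echo oh_no${tab}-Xmx512m"
IFS=$tab     # split on tabs only, so embedded spaces are preserved
set -f       # no wildcard expansion during the split
set -- $JVM_EXTRA_OPTS
for opt do printf '[%s]\n' "$opt"; done   # [-XX:...=echo oh_no] [-Xmx512m]
```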
318,484 | I have configured my Debian server to use msmtp for sending mails. A current use case is, for example, sending a daily report from logwatch to my ISP email:

echo "$body" | mutt -s "$topic" -- "[email protected]"

I have configured msmtp by means of a global msmtprc file located at /etc/msmtprc (contents shown below). The next thing I want to configure is that email for my root account (e.g., output from crontabs) is sent to my ISP email as well. I have googled around and found, for example on the Arch wiki, that I should just configure my aliases, which I have done at the bottom of the msmtp configuration file. However, after running newaliases and trying to execute

echo test | mail -s "test message" root

I get the error:

send-mail: /etc/aliases: line 2: invalid address 'postmaster'
Can't send mail: sendmail process failed with error code 78

I am unsure how I can fix this. The aliases shown below are what was already present; I only added the gmail address. I think I could just put in a new aliases file, but that might break other services that rely on this. I.e., I don't know what the proper way to fix this is.

/etc/aliases

# /etc/aliases
mailer-daemon: postmaster
postmaster: root
nobody: root
hostmaster: root
usenet: root
news: root
webmaster: root
www: root
ftp: root
abuse: root
noc: root
security: root
root: christphe, [email protected]

/etc/msmtprc

# ------------------------------------------------------------------------------
# msmtp System Wide Configuration file
# ------------------------------------------------------------------------------
# A system wide configuration is optional.
# If it exists, it usually defines a default account.
# This allows msmtp to be used like /usr/sbin/sendmail.
# ------------------------------------------------------------------------------
# Accounts
# ------------------------------------------------------------------------------
account isp
host mail.isp.net
port 587
from [email protected]
auth login
user [email protected]
password foobar
syslog LOG_MAIL
logfile /var/log/msmtp.log
# ------------------------------------------------------------------------------
# Configurations
# ------------------------------------------------------------------------------
# Construct envelope-from addresses of the form "[email protected]".
#auto_from on
#maildomain fermmy.server
# Use TLS.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Syslog logging with facility LOG_MAIL instead of the default LOG_USER.
# Must be done within "account" sub-section above
#syslog LOG_MAIL
# Set a default account
account default : isp
aliases /etc/aliases
# ------------------------------------------------------------------------------ | Update 2019-10-17: msmtp version 1.8.6 (released 2019-09-27) now has native support for chained/recursive alias expansion in /etc/aliases . See https://marlam.de/msmtp/news/msmtp-1-8-6/ .

Original Answer: So, I had the exact same issue when I migrated from ssmtp to msmtp. The issue is caused by the is_address() function in aliases.c . Basically, if the target of the alias doesn't contain '@' , msmtp thinks it's invalid and dies. You can work around this by just appending @ to all the aliases that redirect to root.
In your example, you would modify /etc/aliases as follows:

# /etc/aliases
mailer-daemon: postmaster@
postmaster: root@
nobody: root@
hostmaster: root@
usenet: root@
news: root@
webmaster: root@
www: root@
ftp: root@
abuse: root@
noc: root@
security: root@
root: christphe@, [email protected]

I plan to log a bug/issue against msmtp to get this behavior changed so it just works, and will update this answer then. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73242/"
]
} |
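A hedged one-liner to apply the workaround mechanically; it only rewrites lines whose sole target is a bare root or postmaster, so multi-target lines like the root: line still need editing by hand, and backing the file up first is assumed:

```sh
sudo cp /etc/aliases /etc/aliases.bak
sudo sed -i -E 's/^([[:alnum:]-]+:[[:space:]]*)(root|postmaster)$/\1\2@/' /etc/aliases
```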
318,498 | I want to check my file usage with programs like https://wiki.ubuntuusers.de/Festplattenbelegung/ for the files on the partition that / (root) is on. However, there are many other file systems mounted into the file system somewhere. Those I do not want to check; only my root partition. How can I exclude those? I would like to use a GUI program (so not du ). I thought that I might find a program which can do the exclusion, but I haven't found one. I thought another option might be to mount my root device ( /dev/mapper/mylvg-myrootpartition ) to another location in addition to the normal mount on / and analyse this second mount folder, but I haven't managed to do that. Ideas? | There are GUI tools, but give ncdu a try. It's a CLI tool, fast, and allows you to navigate directories whilst easily viewing the usage % of each dir.

ncdu -x /

The -x option stands for "Do not cross filesystem boundaries", i.e. only count files and directories on the same filesystem as the directory being scanned. If it really must be an X GUI tool, I found the source code of that Unix interface from Jurassic Park a while ago, was good for a laugh... will try to find... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122989/"
]
} |
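For a quick non-interactive check of the same thing, a du-based sketch (GNU du and sort assumed; -x keeps it on the root filesystem):

```sh
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15
```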
318,515 | I recently installed the brew command on my Debian machine to install tldr man pages on my system. The command looks useful for installing programs that aren't packaged by Debian; also, it does not require sudo to install packages. However, there is a limitation: only a few packages can be installed through the brew command. Is it possible to configure brew to install packages from Debian repositories? | Is it possible? Yes. Both programs are open source. Is it convenient? Not really. Why? Package managers work more or less like this:

- They track packages installed on your system (and their version)
- To do this, they specify their own format of packages (e.g. .deb), and use these packages as instructions on how to install the program and how to track it
- They also track dependencies (e.g. "this program needs openssl to work!")

This is why having a system that would use a few package managers isn't the best idea:

- Each package manager would have to be informed about the packages being installed (e.g. brew would have to know that you installed firefox , and apt would have to know that you installed tldr )
- Each package manager would have to resolve dependencies from other package managers (e.g. "Brew: This program needs ncurses , but apt already installed ncurses , so I don't need to pull them!")

You see, the problem with the second point is that package managers are abstractions over the underlying repositories. People like the Debian folks choose the packages they want users to use, and they make them available to others. However, they also select these packages so that the system is consistent; they want the least amount of packages to offer the most functionality. Why install ncurses versions 1, 2, and 3, when you can get everything to work with version 2? The first problem is also bad news. The package managers would have to inform each other about what they do, or they could collide ( brew wouldn't know that ncurses is already installed). So why is it hard?

- Package managers would need to cooperate tightly
- Package managers would have to have a strict policy about what to do when they can't agree on a package
- Package managers would have to be able to work almost interchangeably, with the only visible difference being the available programs
- Package managers would have to be able to track each other's repositories in case of updates.

This effectively means you would need a package manager that would consist of the two package managers. You would need a new program. So what can I do? First of all, I would ask myself "Why do I want to do this?". Honestly, your distribution should supply you with plenty of packages. If you aren't happy with how many packages you have, you might consider switching to another distribution that has more of the packages you need. If you are really desperate to get this brew to work, I would propose the following approach, although I'm not sure if this is fully possible:

1. Grab the sources of brew .
2. Learn the brew recipe format.
3. Write a program that automatically translates recipes to Debian packages.
4. Modify brew so that whenever you run it, it calls the program to translate recipes to .deb packages/searches for the programs in your distro's repos, then calls apt to install the package.

Making such modifications would probably take much time and isn't an easy thing. I suggest changing distro or sticking to your package manager instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
318,518 | I'm trying to understand the man page of the dd program, which mentions: "Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying."

$ dd if=/dev/zero of=/dev/null& pid=$!
$ kill -USR1 $pid; sleep 1; kill $pid

What does pid=$! mean? Is this an assignment of a variable, which gets the pid of dd , and is it eventually used in the $pid variable? Also, why do they use sleep and kill ? Is this the way to use -USR1 ? | dd if=/dev/zero of=/dev/null&

The trailing & means run the preceding command in the background. (Disclaimer: this is an oversimplified statement.) Refer to this: $! is the PID of the most recent background command. So pid=$! assigns the most recent background PID to the variable pid, which is dd 's PID.

Why do they use sleep and kill? You need kill $pid (when no signal is specified, the default signal for kill is TERM, which is process termination) to terminate the dd process after you are done testing; otherwise the dd process may just stay in the background, exhausting your CPU resources. Check the System Monitor of your platform to see. Whereas kill -USR1 $pid prints I/O statistics, it doesn't terminate the process. Without sleeping for 1 second, your dd process might get terminated by the last command statement, kill $pid , before it has the chance to write the statistics output to your terminal. The commands run sequentially, but the trap+write operation ( kill -USR1 $pid ) may be slower than the terminate operation ( kill $pid ). So sleep for 1 second to delay the start of kill $pid and ensure the statistics output is printed.

Is this the way to use -USR1? Just man dd :

Sending a USR1 signal to a running 'dd' process makes it print I/O statistics to standard error and then resume copying.

And man 7 signal :

SIGUSR1 30,10,16 Term User-defined signal 1
SIGUSR2 31,12,17 Term User-defined signal 2

Combining both statements, you should understand that USR1 is a user-defined signal which dd defines to provide a way for the user to interrupt it and print I/O statistics on the fly. It's a program-specific handler; it doesn't mean you can kill -USR1 other_program_pid and expect statistics output. You might also be interested in: Why does SIGUSR1 cause process to be terminated? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106182/"
]
} |
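A small sketch that keeps polling a long-running dd for statistics until it exits (the copy command is illustrative):

```sh
dd if=/dev/zero of=/dev/null bs=1M count=200000 & pid=$!
while kill -0 "$pid" 2>/dev/null; do    # still running?
    kill -USR1 "$pid" 2>/dev/null       # ask dd for I/O statistics
    sleep 5
done
```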
318,572 | From su 's man page:

For backward compatibility, su defaults to not change the current directory and to only set the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). It is recommended to always use the --login option (instead of its shortcut -) to avoid side effects caused by mixing environments.
...
-, -l, --login
    Start the shell as a login shell with an environment similar to a real login:

    o  clears all the environment variables except TERM
    o  initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH
    o  changes to the target user's home directory
    o  sets argv[0] of the shell to '-' in order to make the shell a login shell

It's hard to tell if there's any difference between - and --login (or supposedly just -l ). Namely, the man page says "instead of its shortcut -", but all these options are grouped together, and I don't see an explanation of the difference, if it exists at all. UPD: I checked the question which is supposed to solve my problem. That question is basically about the difference between su and su - , and I'm asking about the difference between su - and su --login . So no, it doesn't solve it at all. | Debian's manual entry seems to be more enlightening:

-, -l, --login
    Provide an environment similar to what the user would expect had the user logged in directly. When - is used, it must be specified before any username. For portability it is recommended to use it as the last option, before any username. The other forms (-l and --login) do not have this restriction. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29867/"
]
} |
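An illustration of the positional rule in the Debian text (the username is hypothetical):

```sh
su - alice          # OK: '-' comes before the username
su -l alice         # OK: -l is position-independent
su --login alice    # OK: --login is position-independent
su alice -          # not portable: '-' after the username may be rejected
```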
318,595 | I have a file prova.txt like this:

Start to grab from here: 1
fix1
fix2
fix3
fix4
random1
random2
random3
random4
extra1
extra2
bla

Start to grab from here: 2
fix1
fix2
fix3
fix4
random1546
random2561
extra2
blabla

Start to grab from here: 1
fix1
fix2
fix3
fix4
random1
random22131

and I need to grep out from "Start to grab from here" to the first blank line. The output should be like this:

Start to grab from here: 1
fix1
fix2
fix3
fix4
random1
random2
random3
random4

Start to grab from here: 2
fix1
fix2
fix3
fix4
random1546
random2561

Start to grab from here: 1
fix1
fix2
fix3
fix4
random1
random22131

As you can see, the lines after "Start to grab from here" are random, so the grep -A / -B flags don't work:

cat prova.txt | grep "Start to grab from here" -A 15 | grep -B 15 "^$" > output.txt

Can you help me find a way that catches the first line to be grabbed (the "Start to grab from here" line) and everything up to a blank line? I cannot predict how many random lines I will have after "Start to grab from here". Any Unix-compatible solution is appreciated (grep, sed, awk is better than perl or similar).

EDITED: after the brilliant response by @john1024, I would like to know if it's possible to:

1° sort the blocks (according to Start to grab from here: 1, then 1, then 2)
2° remove the 4 (alphabetically random) lines fix1, fix2, fix3, fix4 (but there are always 4)
3° eventually remove random dupes, like the sort -u command

The final output should be like this:

# fix lines removed - match 1 first time
Start to grab from here: 1
random1
random2
random3
random4
# fix lines removed - match 1 second time
Start to grab from here: 1
# random1 removed cause is a dupe
random22131
# fix lines removed - match 2 that comes after 1
Start to grab from here: 2
random1546
random2561

or

# fix lines removed - match 1 first time and the second too
Start to grab from here: 1
random1
random2
random3
random4
# random1 removed cause is a dupe
random22131
# fix lines removed - match 2 that comes after 1
Start to grab from here: 2
random1546
random2561

The second output is better than the first one. Some other Unix command magic is needed. | Using awk, try:

$ awk '/Start to grab/,/^$/' prova.txt
Start to grab from here: 1
random1
random2
random3
random4

Start to grab from here: 2
random1546
random2561

Start to grab from here: 3
random45
random22131

/Start to grab/,/^$/ defines a range. It starts with any line that matches Start to grab and ends with the first empty line, ^$ , that follows. Using sed, with very similar logic:

$ sed -n '/Start to grab/,/^$/p' prova.txt
Start to grab from here: 1
random1
random2
random3
random4

Start to grab from here: 2
random1546
random2561

Start to grab from here: 3
random45
random22131

-n tells sed not to print anything unless we explicitly ask it to. /Start to grab/,/^$/p tells it to print any lines in the range defined by /Start to grab/,/^$/ . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193403/"
]
} |
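An equivalent flag-based awk variant, sketched because it makes the end condition easy to tweak; unlike the range form, this one does not print the blank line that terminates each block:

```sh
awk '/Start to grab/ {p=1} /^$/ {p=0} p' prova.txt
```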
318,597 | I am writing a shell script to extract some exceptions from a log file and do some processing. Currently I am using the below command in my script to capture the line numbers in the log file that have an exception. The exception lines will contain the ERROR keyword just after the date and timestamp.

lineNos=($(grep -n ERROR $file | grep Exception | cut -d':' -f1 | tail -3))

While testing the current script, I noticed that some log entries contain ERROR and Exception in the same row but are not actually the kind of ERROR that I am looking for (example: line 5). I would like to modify my script in such a way that it returns only line number 3 in the below example log.

2016-10-21 15:25:37,231 INFO Processing current row
2016-10-21 15:25:37,231 INFO com.test.main com.test.controller.CrashException:
2016-10-21 15:25:37,231 ERROR com.test.main com.test.controller.CrashException:
2016-10-21 15:25:37,231 DEBUG com.test.main com.test.controller.CrashException:
2016-10-21 15:25:37,231 DEBUG Processing row with ERROR and Exception
2016-10-21 15:25:37,231 DEBUG processed row 1: | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194709/"
]
} |
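A hedged sketch of what the question describes: treat ERROR as a match only when it is the level field right after the date and timestamp, then keep the last three matching line numbers:

```sh
lineNos=($(awk '$3 == "ERROR" && /Exception/ {print NR}' "$file" | tail -3))
```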
318,625 | How do I grant a specific user the right to change user and group ownership of files and directories inside a specific directory? I did a Google search and saw that there is such a thing as setfacl , which allows granting users specific rights to change permissions for files and directories. From what I read, though, this command does not allow granting chown permissions. So, say a file has:

user1 user1 theFile1
user1 user1 theDirectory1

Issuing the following command would fail:

[user1@THEcomputer]$ chown user2 theFile

I do have root access on the computer. Is there a way to grant a user the right to issue chown commands inside a directory?

UPDATE: How to add a user to a group. Here is the procedure from the article that I used to add datamover to the hts group:

[root@Venus ~]# usermod -a -G datamover hts
[root@Venus ~]# exit
logout
[hts@Venus Receive]$ groups
hts wireshark datamover
[hts@Venus Receive]$

UPDATE (address comment by RuiFRibeiro): Changing the ownership of the file as the directory's owner does not work; see the transcript:

[datamover@Venus root]$ ls -la
total 311514624
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 .
drwxr-xr-x 4 root root 4096 Aug 20 16:52 ..
-rwxrwxrwx. 1 datamover datamover 674 Aug 31 16:47 create_files.zip
drwxrwxrwx 2 datamover datamover 4096 Oct 17 17:07 dudi
-rwxrwxrwx. 1 datamover datamover 318724299315 Oct 13 15:47 Jmr400.mov
-rwxrwxrwx. 1 datamover datamover 182693854 Aug 31 16:47 Jmr_Commercial_WithSubtitles.mov
-rwxrwxrwx. 1 datamover datamover 80607864 Aug 31 16:47 Jmr_DataMover_Final.mov
drwxrwxrwx. 2 datamover datamover 122880 Aug 23 11:54 ManyFiles
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 Receive
drwxrwxrwx 2 datamover datamover 4096 Oct 14 13:40 sarah
-rwxrwxrwx 1 datamover datamover 3184449 Oct 14 14:05 SourceGrid_4_40_bin.zip
[datamover@Venus root]$ cd ./Receive/
[datamover@Venus Receive]$ ls -la
total 178540
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 .
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 ..
-rwxrwxrwx 1 hts hts 182693854 Oct 25 07:18 Jmr_Commercial_WithSubtitles.mov
drwxrwxrwx 2 datamover datamover 122880 Oct 23 13:33 ManyFiles
[datamover@Venus Receive]$ chown datamover:datamover ./Jmr_Commercial_WithSubtitles.mov
chown: changing ownership of './Jmr_Commercial_WithSubtitles.mov': Operation not permitted

Here is an attempt as the owner of the file:

[hts@Venus Receive]$ chown datamover:datamover Jmr_Commercial_WithSubtitles.mov
chown: changing ownership of 'Jmr_Commercial_WithSubtitles.mov': Operation not permitted

So as you can see, neither possibility works.

UPDATE (address countermode's answer): "Group ownership may be changed by the file owner (and root). However, this is restricted to the groups the owner belongs to." Yes, one does have to log out first. Here is the result of my attempt:

[hts@Venus ~]$ groups hts
hts : hts wireshark datamover
[hts@Venus ~]$ cd /mnt/DataMover/root/Receive/
[hts@Venus Receive]$ ls -la
total 178540
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 .
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 ..
-rwxrwxrwx 1 hts hts 182693854 Oct 25 07:18 Jmr_Commercial_WithSubtitles.mov
drwxrwxrwx 2 datamover datamover 122880 Oct 23 13:33 ManyFiles
[hts@Venus Receive]$ chown hts:datamover ./Jmr_Commercial_WithSubtitles.mov
[hts@Venus Receive]$ ls -la
total 178540
drwxrwxrwx. 3 datamover datamover 4096 Oct 25 07:18 .
drwxrwxrwx. 6 datamover datamover 4096 Oct 14 14:05 ..
-rwxrwxrwx 1 hts datamover 182693854 Oct 25 07:18 Jmr_Commercial_WithSubtitles.mov
drwxrwxrwx 2 datamover datamover 122880 Oct 23 13:33 ManyFiles
[hts@Venus Receive]$ chown datamover:datamover ./Jmr_Commercial_WithSubtitles.mov
chown: changing ownership of ‘./Jmr_Commercial_WithSubtitles.mov’: Operation not permitted
[hts@Venus Receive]$

Adding hts to the datamover group does indeed allow me to change the group part of the ownership, so this is a partial answer and a validation of that statement. | Only root has the permission to change the ownership of files. Reasonably modern versions of Linux provide the CAP_CHOWN capability; a user who has this capability may also change the ownership of arbitrary files. CAP_CHOWN is global: once granted, it applies to any file in a local file system. Group ownership may be changed by the file owner (and root). However, this is restricted to the groups the owner belongs to. So if user U belongs to groups A, B, and C but not to D, then U may change the group of any file that U owns to A, B, or C, but not to D. If you seek arbitrary changes, then CAP_CHOWN is the way to go.

CAUTION: CAP_CHOWN has severe security implications; a user with a shell that has capability CAP_CHOWN could get root privileges. (For instance, chown libc to yourself, patch in your Trojan horses, chown it back and wait for a root process to pick it up.)

Since you want to restrict the ability to change ownership to certain directories, none of the readily available tools will aid you. Instead you may write your own variant of chown that takes care of the intended restrictions. This program needs to have the capability CAP_CHOWN, e.g.

setcap cap_chown+ep /usr/local/bin/my_chown

CAUTION: Your program will probably mimic the genuine chown , e.g. my_chown user:group filename(s) . Do perform your input validation very carefully. Check that each file satisfies the intended restrictions; particularly, watch out for soft links that point out of bounds.

If you want to restrict access to your program to certain users, you may either create a special group, set the group ownership of my_chown to this group, set permissions to 0750, and add all permitted users to this group. Alternatively you may use sudo with suitable rules (in this case you also don't need capability magic). If you need even more flexibility, then you need to code the rules you have in mind into my_chown . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91119/"
]
} |
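A minimal sketch of such a restricted wrapper, done as a shell script invoked through sudo instead of file capabilities; the allowed tree and the sudo wiring are assumptions based on the question's setup:

```sh
#!/bin/sh
# my_chown OWNER[:GROUP] FILE - expose via a sudoers rule such as:
#   %datamover ALL=(root) NOPASSWD: /usr/local/bin/my_chown
case $2 in
    /mnt/DataMover/root/Receive/*) ;;   # only this tree is allowed
    *) echo "my_chown: $2 is out of bounds" >&2; exit 1 ;;
esac
[ -L "$2" ] && { echo "my_chown: refusing symbolic link" >&2; exit 1; }
exec chown -- "$1" "$2"
```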
318,633 | Could someone please explain how the exit command works in a Unix terminal? A search of man exit and which exit was not helpful, and I have come across the following issue. After installing add-on packages for Anaconda and PyCharm on my new Red Hat system, I noticed that whenever I call exit to end a terminal session, I get a series of errors, and then the terminal quits as expected. The errors seem to suggest that my call to exit is triggering calls to rm ~/anaconda3/.../ and rm ~/PyCharm/.... , causing an error. All of the directories also appear to be the locations of packages I downloaded for these programs (i.e. numpy), see below.

$ exit
rm: cannot remove ‘~/anaconda3/lib/python3.5/site-packages/numpy/core’: Is a directory
...
...

Resolved: In my ~/.bash_logout file, there was the line

find ~ -xdev \( -name '*~' -o -name '.*~' -o -name core \) -exec \rm '{}' \;

Commenting this line out stopped the error messages. The line searches for and deletes all temporary files. But it also attempts to find anything named core (such as the numpy/core directory) and delete it as well. This was a preset in the system. | Well, usually you would only see execution upon exiting a shell if you've manually configured this. But maybe one of the packages you've installed came with a bash exit shell script... Check ~/.bash_logout ; maybe you'll find a script call in there. It's an odd one... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181697/"
]
} |
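If the cleanup behaviour is still wanted, a hedged safer variant restricts the match to regular files, so directories named core (like numpy/core) survive:

```sh
find ~ -xdev -type f \( -name '*~' -o -name '.*~' -o -name core \) -exec rm -- '{}' +
```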
318,724 | Hello, I have a text file containing the following info:

#[Tue Oct 25 00:00:02 2016] --- START OUTPUT
#CMD: XXX
END-->0<--
#[Tue Oct 25 00:00:57 2016] --- END
#RETURN: 1
#ELAPSED TIME (in seconds): 55
#[Tue Oct 25 00:05:01 2016] --- START OUTPUT
#CMD: XXX
END-->0<--
#[Tue Oct 25 00:05:33 2016] --- END
#RETURN: 0
#ELAPSED TIME (in seconds): 32

I want to get the END line, the RETURN line and the ELAPSED line if the corresponding RETURN is > 0. So far I have just been able to grep the RETURN line:

grep "#RETURN:" -A 1 -B 1 f.log

But how do I grep only if the RETURN is > 0? Desired output:

#[Tue Oct 25 00:00:57 2016] --- END
#RETURN: 1
#ELAPSED TIME (in seconds): 55 | With awk :

awk '/END$/ {prev=$0; next}; /^#RETURN/ && $2>0 {cur=$0; pr=1; next};\
 pr {printf "%s\n%s\n%s\n", prev, cur, $0; pr=0}' file.txt

/END$/ {prev=$0; next} : if the line ends with END , save it as variable prev and go to the next line; this is the line before RETURN.
/^#RETURN/ && $2>0 {cur=$0; pr=1; next} : if the line starts with #RETURN and the second field is greater than 0, then save the line as cur , set variable pr to 1 (true), and go to the next line.
pr {printf "%s\n%s\n%s\n", prev, cur, $0; pr=0} : if pr is true, then print the output in the desired format, and finally set pr to 0 (false).

Example:

% cat file.txt
#[Tue Oct 25 00:00:02 2016] --- START OUTPUT
#CMD: XXX
END-->0<--
#[Tue Oct 25 00:00:57 2016] --- END
#RETURN: 1
#ELAPSED TIME (in seconds): 55
#[Tue Oct 25 00:05:01 2016] --- START OUTPUT
#CMD: XXX
END-->0<--
#[Tue Oct 25 00:05:33 2016] --- END
#RETURN: 0
#ELAPSED TIME (in seconds): 32
% awk '/END$/ {prev=$0; next}; /^#RETURN/ && $2>0 {cur=$0; pr=1; next}; pr {printf "%s\n%s\n%s\n", prev, cur, $0; pr=0}' file.txt
#[Tue Oct 25 00:00:57 2016] --- END
#RETURN: 1
#ELAPSED TIME (in seconds): 55 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129073/"
]
} |
318,755 | I would like to change the way I display a big (~6000 lines) log file using vim, less or whatever, in order to simplify problem checking. I'd like to highlight some lines of the log based on a pattern (i.e. error , warning , info , ...) and/or hide some others. Which tools could I use? Do I just need a shell script? It's important that after the process, I can read the output using less, vim, ... to perform search operations! Edit: a little piece of the log:

2016/10/25 12:19:24.403355 INFO <ServiceManager.cpp#2614 TID#3> Security object has NOT been parsed
2016/10/25 12:19:24.403369 INFO <ServiceManager.cpp#1263 TID#3> Service object sequence started
2016/10/25 12:19:24.403372 DBG <ServiceManager.cpp#1276 TID#3> preinvoke succeeded | I would recommend a shell script, based on awk, like Valentin B.'s solution:

$ cat colorize
awk 'function color(c,s) {
  printf("\033[%dm%s\033[0m\n",30+c,s)
}
/error/ {color(1,$0);next}
/success/ {color(2,$0);next}
/warning/ {color(3,$0);next}
/INFO/ {color(4,$0);next}
/DBG/ {color(5,$0);next}
{print}' $1

In order to be able to interactively view the colorized output, I would use less in raw mode, e.g.:

colorize mylog.txt | less -R | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196912/"
]
} |
318,761 | I have a file with the following structure:

GO:0000001 mitochondrion inheritance
GO:0000002 mitochondrial genome maintenance
GO:0000003 reproduction
alt_id: GO:0019952
alt_id: GO:0050876
GO:0000005 obsolete ribosomal chaperone activity
GO:0000006 high-affinity zinc uptake transmembrane transporter activity
GO:0000007 low-affinity zinc ion transmembrane transporter activity
GO:0000008 obsolete thioredoxin
alt_id: GO:0000013
GO:0000009 alpha-1,6-mannosyltransferase activity

Where it says alt_id , it means that it is an alternative to the previous GO: code. I'd like to add to each alt_id the definition of the previous GO: , that is, I want an output like this:

GO:0000001 mitochondrion inheritance
GO:0000002 mitochondrial genome maintenance
GO:0000003 reproduction
alt_id: GO:0019952 reproduction
alt_id: GO:0050876 reproduction
GO:0000005 obsolete ribosomal chaperone activity
GO:0000006 high-affinity zinc uptake transmembrane transporter activity
GO:0000007 low-affinity zinc ion transmembrane transporter activity
GO:0000008 obsolete thioredoxin
alt_id: GO:0000013 obsolete thioredoxin
GO:0000009 alpha-1,6-mannosyltransferase activity

How can I copy the content of the previous row into the following one? I work with Cygwin in a Windows-based environment. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122520/"
]
} |
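A hedged awk sketch for the task as described: remember the definition from each GO: line and append it to any alt_id lines that follow (gawk, as shipped with Cygwin, is assumed):

```sh
awk '/^alt_id:/ { print $0, desc; next }
     { desc = $0; sub(/^GO:[0-9]+[[:space:]]+/, "", desc); print }' file
```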
318,824 | Upgraded here a few VM servers to Debian 9. Now when using ssh , we cannot copy and paste between remote terminals. The cursor seems to be doing the movements, and marking the text, albeit in a funnier/different way than usual, but nothing gets copied to the clipboard when doing command-C / command-V or copy and paste in the respective menu. We also tried doing the mouse movements with Shift and other keyboard combinations, without positive results. This is happening in OS X, namely Sierra and El Capitan, and in Windows, using MobaXterm terminals too. The situation is due to vim's awareness of having a mouse. Following other questions in Stack Overflow, I created /etc/vim/vimrc.local with set mouse="r" and set mouse="v" ; it did not work out well. Finally I set up set mouse="" in the same file, with some moderate success. However, it also does not work well 100% of the time. What else can be done? | Solution: change mouse=a to mouse=r in your local .vimrc . The problem with setting this in /usr/share/vim/vim80/defaults.vim , as the accepted answer says, is that it will be overwritten on every update. I searched for a long time and ended up on this one: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864074

LOCAL SOLUTION (flawed): The first solution is to use local .vimrc files and set it there. So you could create a local .vimrc ( ~/.vimrc ) for every user and set your options there. Or create one in /etc/skel so it will be automatically created for every new user you create. But when you use local .vimrc files, you have to set all options there, because if there is a local .vimrc , the defaults.vim doesn't get loaded at all! And if there is no local .vimrc , all your settings are being overwritten by defaults.vim .

GLOBAL SOLUTION (preferable): I wanted a global configuration for all users, which loads the default options and then adds to or overwrites the defaults with my personal settings. Luckily there is an option for that in Debian: /etc/vim/vimrc.local will be loaded after /etc/vim/vimrc . So you can create this file, load the defaults, prevent them from being loaded again (at the end), and then add your personal options. Please create the following file: /etc/vim/vimrc.local

" This file loads the default vim options at the beginning and prevents
" that they are being loaded again later. All other options that will be set,
" are added, or overwrite the default settings. Add as many options as you
" wish at the end of this file.

" Load the defaults
source $VIMRUNTIME/defaults.vim

" Prevent the defaults from being loaded again later, if the user doesn't
" have a local vimrc (~/.vimrc)
let skip_defaults_vim = 1

" Set more options (overwrites settings from /usr/share/vim/vim80/defaults.vim)
" Add as many options as you wish

" Set the mouse mode to 'r'
if has('mouse')
  set mouse=r
endif

(Note that $VIMRUNTIME in the above snippet has a value like /usr/share/vim/vim80 , so $VIMRUNTIME/defaults.vim resolves to /usr/share/vim/vim80/defaults.vim .) If you also want to enable the "old copy/paste behavior", add the following lines at the end of that file as well:

" Toggle paste/nopaste automatically when copy/paste with right click in insert mode:
let &t_SI .= "\<Esc>[?2004h"
let &t_EI .= "\<Esc>[?2004l"
inoremap <special> <expr> <Esc>[200~ XTermPasteBegin()

function! XTermPasteBegin()
  set pastetoggle=<Esc>[201~
  set paste
  return ""
endfunction | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/318824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138261/"
]
} |
318,839 | I am looking for an awk regular expression that matches all strings not containing a particular word. Using /^((?!word \+).)*/ (negative lookahead) works in Java but does not work in awk; I get a "compilation failed" error. Escaping the brackets fixes the compilation error, but then the regular expression matching is not correct. It would be great if anyone could help with an awk regular expression. I cannot use string !~ /regex/ ; I need to use string ~ /regex/ . The regex should pass for all strings except a specific one. Strings containing domain should be filtered out. Input:

This is domain test
This is do test
This is test

Output:

This is do test
This is test

This needs to be done with a regular expression only. I cannot change the awk code; in the awk it's like string ~ /regex/ , so I can only pass a regex to achieve this. | While Thomas Dickey's answer is clever, there is a right way to do this:

awk '!/domain/ {print}' <<EOF
This is domain test
This is do test
This is test
EOF
This is do test
This is test | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196970/"
]
} |
318,846 | I am trying to filter out the lines containing a particular word. The regex is command-line input to the script.

$0 ~ regex {
    //Do something.
}

Sample input is:

**String** **number**
domain 1
domain 2
bla 3

So, from the above input, the user can say: filter out the rows which have the word "domain". What I've tried: regex = "\?\\!domain" (negative lookahead). But this regex is filtering out every row, not just the rows with the word "domain". | For a given input file input containing the following:

domain
demesne

To filter for lines containing domain :

$ awk '/domain/ { print }' input
domain

To filter for lines not containing domain :

$ awk '!/domain/ {print }' input
demesne

For filtering based on a field rather than the entire line, we can try the following for the new given input file:

example www.example.com
exemplar www.example.net

To filter out lines where the first field contains example :

$ awk '$1 !~ /example/ { print }' input
exemplar www.example.net

In your question, you used $0 , which is the entire line, rather than the first field. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158782/"
]
} |
318,859 | I usually use the watch Linux utility to watch the output of a command repeatedly every n seconds, as in watch df -h /some_volume/ . But I seem not to be able to use watch with a piped series of commands like:

$ watch ls -ltr|tail -n 1

If I do that, watch is really watching ls -ltr and the output is being passed to tail -n 1 , which doesn't output anything. If I try this:

$ watch (ls -ltr|tail -n 1)

I get:

watch: syntax error near unexpected token `ls'

And any of the following fails for some reason or another:

$ watch <(ls -ltr|tail -n 1)
$ watch < <(ls -ltr|tail -n 1)
$ watch $(ls -ltr|tail -n 1)
$ watch `ls -ltr|tail -n 1`

And finally if I do this:

$ watch echo $(ls -ltr|tail -n 1)

I see no change in the output at the given interval, because the command inside $() is run just once and the resulting output string is always printed ("watched") as a literal. So, how do I make the watch command work with a piped chain of commands (other than putting them inside a script)? | watch 'command | othertool | yet-another-tool' | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/318859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40195/"
]
} |
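Applied to the question's own pipeline:

```sh
watch 'ls -ltr | tail -n 1'
```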
318,955 | Sometimes we just need to type a slightly different name when using mv/cp/convert. For example:

convert IMG-long-number.jpg IMG-long-number.png

How can I repeat IMG-long-number.jpg before typing IMG-long-number.png, so I only need to make a small adjustment? This is similar to "How to repeat currently typed in parameter on bash console?" but for zsh/zle. | !#$<Tab> works for me. Given:

$ echo a 

Typing !#$ then pressing Tab expands !#$ to a . Tab completion also lists other options if you try an operation with : :

$ echo a !#$:
& -- repeat substitution
A -- absolute path resolving symbolic links
Q -- strip quotes
a -- absolute path
c -- PATH search for command
e -- leave only extension
g -- globally apply s or &
h -- head - strip trailing path element
l -- lower case all words
q -- quote to escape further substitutions
r -- root - strip suffix
s -- substitute string
t -- tail - strip directories
u -- upper case all words | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38242/"
]
} |
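Applied to the question's rename use case:

```sh
$ convert IMG-long-number.jpg !#$     # press Tab here
$ convert IMG-long-number.jpg IMG-long-number.jpg   # then edit .jpg to .png
```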
319,019 | I am not aware of a KDE-native tool to mount ISO images in Plasma 5, and I keep using the GNOME tool gnome-disk-utility , as indicated here. Is there a KDE version of this tool? It is able to do more than just add a context menu entry to mount ISO files, like setting mounting options and backing up partitions, but I am mostly interested in the 'mount iso' option. | If what is wanted is a separate Qt-based GUI, AcetoneISO qualifies: it is written in Qt and can mount and unmount images; it installs fuseiso and two Qt packages. But in spite of its name, gnome-disk-utility only comes by itself, with no packages foreign to KDE. It includes the gnome-disk-image-mounter tool, which does the actual job. To install:

sudo apt install gnome-disk-utility

To run "Disks":

gnome-disks

To mount an image file:

gnome-disk-image-mounter

or execute the file /usr/share/applications/gnome-disk-image-mounter.desktop . This executable is not found by the launcher as long as it contains the line NoDisplay=true ; change that to false to launch it as any other program. (Then select the ISO.) To add it to the context menu:

kate ~/.local/share/kservices5/ServiceMenus/mount_gnomedisks.desktop

(be sure you have the folder ~/.local/share/kservices5/ServiceMenus ), then paste these lines:

[Desktop Entry]
Actions=mount
Icon=dialog-information
MimeType=application/x-cd-image;application/x-raw-disk-image;model/x.stl-binary
ServiceTypes=KonqPopupMenu/Plugin
Type=Service
X-KDE-Priority=TopLevel

[Desktop Action mount]
Exec=gnome-disk-image-mounter %U
Icon=drive-removable-media
Name=Mount image

and save. And there are also the Dolphin services; most of those do not work, as they are outdated, and the newest ones are not the best rated. Luckily, there are exceptions, like KDE-Services ( https://store.kde.org/p/998464 ). It seems it cannot be installed from the Dolphin-Services button; instead, it can be downloaded as a tar.bz2 archive and unpacked, and, by opening a terminal in the resulting folder, it can be installed by running the command sudo make install . This is a collection of services: desktop files installed at system level in /usr/share/kservices5/ServiceMenus/ and scripts installed in /usr/share/applications . To have a simplified service menu based on this, see this answer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
319,044 | nmap -p1-65535 localhost gives me:

PORT     STATE SERVICE
8080/tcp open  http-proxy

What is the process that is using this port? From /etc/services :

http-alt 8080/tcp webcache # WWW caching service | You can use lsof to find out the list of processes running on 8080:

lsof -i :8080

You can get more detail about the process through:

ps -ef | grep put_the_PID_here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
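On systems without lsof , ss (from iproute2) can answer the same question directly (a sketch): ss -ltnp 'sport = :8080' lists the listening TCP socket on port 8080 together with the owning process name and PID (run as root to see other users' processes); fuser 8080/tcp is another common alternative.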
319,052 | I am trying to run an OpenGL 2.1+ application over SSH. [my computer] --- ssh connection --- [remote machine] (application) I use X forwarding to run this application and with that in mind I think there are a couple of ways for this application to do 3D graphics: Using LIBGL_ALWAYS_INDIRECT, the graphics hardware on my computer can be used. According to this post this is generally limited to OpenGL version 1.4. Using Mesa software rendering on the remote machine. This supports higher versions of OpenGL, but uses the CPU. However, in my case the remote machine has a decent graphics card. So rather than software rendering, I was wondering if it could do hardware rendering remotely instead. Also, if there is another way to use my machine's graphics card that would be great too. | The choice is not necessarily between indirect rendering and software rendering, but more precisely between direct and indirect rendering. Direct rendering is done on the X client (the remote machine), and the rendering results are then transferred to the X server for display. Indirect rendering transmits the GL commands to the X server, where those commands are rendered using the server's hardware. Since you want to use the 3D hardware on the remote machine, you should go with direct rendering (and accept the overhead of transmitting the rendered raster image over the network). If your application cannot live with OpenGL 1.4, direct rendering is your only option. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197123/"
]
} |
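To check which mode is actually in effect, glxinfo (from the mesa-utils package on many distributions) can be run through the forwarded connection (a sketch): ssh -X remote-host glxinfo | grep 'direct rendering' — a result of Yes means rendering happens on the remote side's GPU, while setting LIBGL_ALWAYS_INDIRECT=1 in the remote environment flips it to indirect.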
319,257 | I have (well, I had ) a directory: /media/admin/my_data It was approximately 49GB in size and had tens of thousands of files in it.The directory is the mount point of an active LUKS partition. I wanted to rename the directory to: /media/admin/my_data_on_60GB_partition I didn't realise at the time, but I issued the command from home directory so I ended up doing: ~% sudo mv /media/admin/my_data my_data_on_60GB_partition So then the mv program started to move /media/admin/my_data and its contents to a new directory ~/my_data_on_60GB_partition . I used Ctrl + C to cancel the command part way through, so now I have a whole bunch of files split across directories: ~/my_data_on_60GB_partition <--- about 2GB worth files in here and /media/admin/my_data <---- about 47GB of orig files in here The new directory ~/my_data_on_60GB_partition and some of its subdirectories are owned by root. I'm assuming the mv program must have copied the files as root initially and then after the transfer chown 'ed them back to my user account. I have a somewhat old backup of the directory/partition. My question is, is it possible to reliably restore the bunch of files that were moved? That is, can I just run: sudo mv ~/my_data_on_60GB_partition/* /media/admin/my_data or should I give up trying to recover, as the files are possibly corrupted and partially complete, etc.? OS - Ubuntu 16.04 mv --version mv (GNU coreutils) 8.25 | When moving files between filesystems, mv doesn't delete a file before it's finished copying it, and it processes files sequentially (I initially said it copies then deletes each file in turn, but that's not guaranteed — at least GNU mv copies then deletes each command-line argument in turn, and POSIX specifies this behaviour ). So you should have at most one incomplete file in the target directory, and the original will still be in the source directory. To move things back, add the -i flag so mv doesn't overwrite anything: sudo mv -i ~/my_data_on_60GB_partition/* /media/admin/my_data/ (assuming you don't have any hidden files to restore from ~/my_data_on_60GB_partition/ ), or better yet (given that, as you discovered, you could have many files waiting to be deleted), add the -n flag so mv doesn't overwrite anything but doesn't ask you about it: sudo mv -n ~/my_data_on_60GB_partition/* /media/admin/my_data/ You could also add the -v flag to see what's being done. With any POSIX-compliant mv , the original directory structure should still be intact, so alternatively you could check that — and simply delete /media/admin/my_data ... (In the general case though, I think the mv -n variant is the safe approach — it handles all forms of mv , including e.g. mv /media/admin/my_data/* my_data_on_60GB_partition/ .) You'll probably need to restore some permissions; you can do that en masse using chown and chmod , or restore them from backups using getfacl and setfacl (thanks to Sato Katsura for the reminder ). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/319257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
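If a pre-mishap backup of the tree is available, owners, groups and permission bits can be captured from it and replayed onto the restored files in one pass (a sketch; /backup/my_data is a hypothetical backup path, and the relative paths must match): cd /backup && getfacl -R my_data > /tmp/perms.acl , then cd /media/admin && setfacl --restore=/tmp/perms.acl .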
319,267 | Background I am writing a simple script running in Raspbian on a Raspberry Pi 2, to simply turn a LED on to indicate I am ready to connect with SSH from my desktop computer. The script is not important here, only to mention that, as I use frequency control, the script runs an infinite loop to turn the LED on and off frequently. So this is an example of a simple service. However, at least the accepted answer of this question advised me to set the type to idle. So my service file looks like:
[Unit]
Description=Turn on LED after SSH is ready
[Service]
Type=idle
ExecStart=/usr/bin/sshready.py
[Install]
Wants=network-online.target
After=network-online.target
Effect The service runs as expected. However, I noticed that when I start putty on my desktop computer right after the LED turns on, the login prompt does not appear immediately. So I checked with $ systemd-analyze plot > output.svg The result shows my service starting before network-online.target is reached. Question It looks like my service does not start after network-online.target , what is wrong here and how can I fix it? | When there is a question about a systemd directive, you can use man systemd.directives to find where it's documented. In this case it shows that After= is documented in man systemd.unit . In that file, it shows that the After= directive is listed in the "[UNIT] SECTION OPTIONS", indicating that it belongs in the [Unit] section of the file. The same documentation also documents the [INSTALL] section options, and After= is not listed there. In short, your After= directive was in the wrong location in the unit file so it had no effect until you moved it to the correct location. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197316/"
]
} |
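A corrected version of the unit from the question would look like this (a sketch; the WantedBy= line is an assumption about how the service is meant to be enabled, since the original [Install] section held only the misplaced directives):
[Unit]
Description=Turn on LED after SSH is ready
Wants=network-online.target
After=network-online.target
[Service]
Type=idle
ExecStart=/usr/bin/sshready.py
[Install]
WantedBy=multi-user.target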
319,301 | Let's say I have a CSV file:
"col1","col2","col3"
"col4","col5,subtext","col6
The problem I have is as follows: cut -d, -f1,2 test.txt produces
"col1","col2"
"col4","col5
The desired output is:
"col1","col2"
"col4","col5,subtext" | The ParseWords module, which ships with Perl, covers this quite elegantly. Example below.
$ perl -MText::ParseWords -nE '@a=quotewords ",",1,$_;say $a[0],",",$a[1]' <test.txt
"col1","col2"
"col4","col5,subtext"
$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102456/"
]
} |
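If installing a CSV-aware tool is an option, csvkit's csvcut does the same job with cut-like syntax (a sketch): csvcut -c 1,2 test.txt — unlike cut , it honours the quoting, so the embedded comma in "col5,subtext" is preserved; note that its output re-quotes fields only where needed.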
319,315 | How could I use something like sed to split a file into two, so that the file containing
eric    shwartz
david   snyder
(where the 4 spaces between entries are actually tabs) becomes two files such as file1 :
eric
david
and file2 :
shwartz
snyder
So it puts everything after the tab on each line into another file. | A solution could be: awk '{ print $1 > "file1"; print $2 > "file2"}' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175563/"
]
} |
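Since the question specifies tab separators, it may be safer to pin the field separator explicitly so that entries containing spaces don't get split (a sketch): awk -F'\t' '{ print $1 > "file1"; print $2 > "file2" }' file . The default behaviour shown above splits on any run of whitespace, which happens to work for the sample data.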
319,327 | #! /bin/bash
for i in {A,B,C,D,E,F,G,H,J} ; do
echo "$i $i $i $i $i $i $i $i"
cat > ~/Desktop/$i.txt
done
I want to make 9 text files, each showing me one letter repeated 8 times. (e.g. A.txt should have the letter A 8 times in a column) If I run the script without the cat , it indeed shows me 8 times the A letter, then 8 times the B, C, etc. When inserting the cat statement, it doesn't work. What am I doing wrong? | cat does not read the output of the echo above it; the two commands are unrelated, so cat > ~/Desktop/$i.txt sits waiting for input from the terminal while the echo output has already gone to the screen. Redirect the command that actually produces the text, and use printf to get one letter per line instead of all eight on one line: for i in {A..H} J ; do printf '%s\n' "$i" "$i" "$i" "$i" "$i" "$i" "$i" "$i" > ~/Desktop/"$i".txt ; done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197020/"
]
} |
319,363 | I have a problem with the evince PDF document viewer. I have a printer that is well configured with cups, and I can print PDFs from other PDF viewers such as Okular, but not with Evince. There are simply no printers listed when I want to print with Evince, only "print to a file", or "print with lpr". I can use lpr to print with evince, but I have to type the command with the options I want, which is not very practical. I'm running Debian Testing (Stretch) with Evince 3.22.1. I tried to delete the files ~/.cups/lpoptions and ~/.config/evince/print-settings but it did not solve the problem. | I had the same issue and I couldn't print any images either with most GTK+ applications. The latest GTK3 (3.22) requires the package gtk3-print-backends for printers to be listed in GTK3 print dialogs. Installing that package did the trick for me. I'm running Arch Linux. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/319363",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159872/"
]
} |
319,423 | Let's say i create a filename with this: xb@dnxb:/tmp/test$ touch '"i'"'"'m noob.mp4"'xb@dnxb:/tmp/test$ ls -1"i'm noob.mp4"xb@dnxb:/tmp/test$ Then vim . to go inside Netrw directory listing. " ============================================================================" Netrw Directory Listing (netrw v156)" /tmp/test" Sorted by name" Sort sequence: [\/]$,\<core\%(\.\d\+\)\=\>,\.h$,\.c$,\.cpp$,\~\=\*$,*,\.o$,\.obj$,\.info$,\.swp$,\.bak$,\~$" Quick Help: <F1>:help -:go up dir D:delete R:rename s:sort-by x:special" ==============================================================================.././"i'm noob.mp4" Then press Enter to view the file. Type: :!ls -l % It will shows error: xb@dnxb:/tmp/test$ vim .ls: cannot access '/tmp/test/i'\''m noob.mp4': No such file or directoryshell returned 2Press ENTER or type command to continue I also tried: [1] :!ls -l '%' : Press ENTER or type command to continue/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'/bin/bash: -c: line 1: syntax error: unexpected end of fileshell returned 1Press ENTER or type command to continue [2] :!ls -l "%" : Press ENTER or type command to continue/bin/bash: -c: line 0: unexpected EOF while looking for matching `''/bin/bash: -c: line 1: syntax error: unexpected end of fileshell returned 1Press ENTER or type command to continue [3] :!ls -l expand("%") : /bin/bash: -c: line 0: syntax error near unexpected token `('/bin/bash: -c: line 0: `ls -l expand(""i'm noob.mp4"")'shell returned 1Press ENTER or type command to continue [4] !ls -l shellescape("%") : /bin/bash: -c: line 0: syntax error near unexpected token `('/bin/bash: -c: line 0: `ls -l shellescape("/tmp/test/"i'm noob.mp4"")'shell returned 1Press ENTER or type command to continue [5] !ls -l shellescape(expand("%")) : /bin/bash: -c: line 0: syntax error near unexpected token `('/bin/bash: -c: line 0: `ls -l shellescape(expand("/tmp/test/"i'm noob.mp4""))'shell returned 1Press ENTER or type command to continue My ultimate goal is perform rsync by Ctrl + c , e.g: nnoremap <C-c> :!eval `ssh-agent -s`; ssh-add; rsync -azvb --no-t % [email protected]:/home/xiaobai/storage/ My platform is Kali Linux's vim.gtk3 , bash. Fedora's vim and gvim also have the same problem. What's the correct syntax to escape filename containing single and double quotes in vim ? [UPDATE] exec '!ls -l' shellescape(expand('%')) can work, but stil i can't figure out how to make rsync above work. I have no idea where should i put quotes for this more complex command rsync . | From :help filename-modifiers : The file name modifiers can be used after "%", "#", "#n", "<cfile>", "<sfile>","<afile>" or "<abuf>". ...... :s?pat?sub? Substitute the first occurrence of "pat" with "sub". This works like the |:s| command. "pat" is a regular expression. Any character can be used for '?', but it must not occur in "pat" or "sub". After this, the previous modifiers can be used again. For example ":p", to make a full path after the substitution. :gs?pat?sub? Substitute all occurrences of "path" with "sub". Otherwise this works like ":s". So rather than just handling double quotes or single quotes, let's just backslash escape everything unusual: :!ls -l %:gs/[^0-9a-zA-Z_-]/\\&/ Works perfectly with the test filename you provided. 
To use an absolute path, which you may want for rsync , you can add :p at the end: :!ls -l %:gs/[^0-9a-zA-Z_-]/\\&/:p Actually, it also works just fine if you backslash-escape literally every character, and it's shorter to type: :!ls -l %:gs/./\\&/:p So, in your rsync command, instead of % , use %:gs/./\\&/:p . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64403/"
]
} |
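Putting the answer's final advice into the question's mapping (the host, path and ssh-agent preamble are the question's own placeholders): nnoremap <C-c> :!eval `ssh-agent -s`; ssh-add; rsync -azvb --no-t %:gs/./\\&/:p [email protected]:/home/xiaobai/storage/<CR> — the only change from the original mapping is replacing % with %:gs/./\\&/:p so the filename is backslash-escaped and expanded to an absolute path.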
319,512 | This is actually not my question, someone asked about this in a facebook group but nobody was able to answer it, so there are some variables:
x="abcde12345"
y="s'ldfsd[opsk12345"
z="1234sdfsdfafa23456"
He wants to show the numbers at the end of those variables, so the end result should be like this:
12345
123456
The number of digits varies, it can be 1 to 10 or even 1 to 100000, and it's not always 12345 , it's random. What is the best way to accomplish this? I've tried grep -P "[0-9].*[0-9]$" but it also shows letters between numbers too. | In bash, you can remove the longest leading substring ending with a non-digit from a variable $var using the parameter substitution ${var##*[^0-9]} or (POSIXly) ${var##*[!0-9]} e.g.
$ echo "$x --> ${x##*[^0-9]}"
abcde12345 --> 12345
$ echo "$y --> ${y##*[^0-9]}"
s'ldfsd[opsk12345 --> 12345
$ echo "$z --> ${z##*[^0-9]}"
1234sdfsdfafa23456 --> 23456
See for example Parameter Expansion | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197532/"
]
} |
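An external-tool alternative, useful when the values come from a stream rather than shell variables, is GNU grep with -o and an anchored pattern (a sketch): grep -o '[0-9]*$' matches only the run of digits at the end of each line, so printf '%s\n' "$x" "$y" "$z" | grep -o '[0-9]*$' prints 12345, 12345 and 23456.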
319,655 | Double quotes are required in bash for command substitution:
$ echo "$(date)"
Fri Oct 28 19:16:40 EDT 2016
Whereas single quotes do not do command substitution:
$ echo '$(date)'
$(date)
… why then do I see the following behavior with alias that seems to suggest that command substitution happened with single quotes?
alias d='$(date)'
$ d
No command 'Fri' found, did you mean: .... | Single-quote vs double-quote versions Let's define the alias using single-quotes:
$ alias d='$(date)'
Now, let's retrieve the definition of the alias:
$ alias d
alias d='$(date)'
Observe that no command substitution was yet performed. Let's do the same, but this time with double-quotes:
$ alias d="$(date)"
$ alias d
alias d='Fri Oct 28 17:01:12 PDT 2016'
Because double-quotes are used, command substitution was performed before the alias was defined. Single-quote version Let's try executing the single-quote version:
$ alias d='$(date)'
$ d
bash: Fri: command not found
The single-quote version is equivalent to running:
$ $(date)
bash: Fri: command not found
In both cases, the command substitution is performed when the command is executed. A variation Let's consider this alias which uses command substitution and is defined using single-quotes:
$ alias e='echo $(date)'
$ e
Fri Oct 28 17:05:29 PDT 2016
$ e
Fri Oct 28 17:05:35 PDT 2016
Every time that we run this command, date is evaluated again. With single-quotes, the command substitution is performed when the alias is executed, not when it is defined. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24044/"
]
} |
319,667 | If I type command on my terminal, I don't get "command not found", and the exit code is 0. I assume that this means command actually does something in bash. Also, command -h returns:
bash: command: -h: invalid option
command: usage: command [-pVv] command [arg ...]
What is it used for? | From help command :
$ help command
command: command [-pVv] command [arg ...]
Execute a simple command or display information about commands. Runs COMMAND with ARGS suppressing shell function lookup, or display information about the specified COMMANDs. Can be used to invoke commands on disk when a function with the same name exists.
Options:
-p use a default value for PATH that is guaranteed to find all of the standard utilities
-v print a description of COMMAND similar to the `type' builtin
-V print a more verbose description of each COMMAND
Exit Status: Returns exit status of COMMAND, or failure if COMMAND is not found.
As a more general note, rather than just using -h when you don't know what a command does, you should try: type -a command Which would in this case have told you it is a shell builtin. help command is good for shell builtins. For other commands (and also for shell builtins, actually), try man somecommand Also, -h is not necessarily the "help" option. If you don't know what a command does, that may not be a safe assumption to make. Safer is --help . somecommand --help (Common commands where -h is a valid option but does not mean "help" are ls , free , df , du . All of these are informational only, but the assumption that -h will only ever mean "help" is a dangerous assumption.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141689/"
]
} |
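The "suppressing shell function lookup" part is the most common practical use: it lets a wrapper function call the real command without recursing (a sketch): ls() { echo "wrapped"; command ls "$@"; } — inside the function, command ls runs the external ls binary instead of calling the function again.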
319,735 | How to show every installed command-line shell, (i.e. bash , zsh , etc. ), with no duplicates, and nothing else, (i.e. no programs that aren't shells)? This code almost works on my Lubuntu system, (which has dash , ksh , zsh , csh and yash ), but prints whiptail and fails to print yash : apropos shell | grep sh | \ sed 's/ .*//;s/.*/which &/e;/^\/bin\//!d;s/.*/realpath &/e;/^\/bin\//!d' | \ sort -u | xargs whatisbash (1) - GNU Bourne-Again SHellbsd-csh (1) - a shell (command interpreter) with C-like syntaxdash (1) - command interpreter (shell)ksh93 (1) - KornShell, a command and programming languagelksh (1) - Legacy Korn shell built on mkshmksh (1) - MirBSD Korn shellwhiptail (1) - display dialog boxes from shell scriptszsh5 (1) - the Z shell | On FreeBSD, TrueOS/PC-BSD, DragonFly BSD, et al. The list of approved shells, i.e. shells that the administrator permits users to change their login shell to with the chsh command, is in the conventional /etc/shells file. A simple cat /etc/shells gives one the list of approved shells. However, this is not quite the list of installed shells. Although many third party shells (the operating system itself coming with the Almquist and TENEX C shells) install themselves into /etc/shells when installed from packages or ports, this isn't guaranteed and of course the administrator may have changed /etc/shells so that there is a shell that was installed but that is not approved . The list of installed shells is not hard to come by, though. As aforementioned, the Almquist and TENEX C shells come with the operating system, as /bin/sh and /bin/tcsh (a.k.a. /bin/csh ) respectively. To them one adds the list of shells that are installed from packages. In the FreeBSD package system, all shells are in the shells/ area of the package hierarchy, so one simply uses the pkg tool to query the installed package database: pkg query "%o %n-%v %c" | awk '/^shells\// {$1="";print $0;}' This will catch fish, rc, v7sh, heirloom-sh, and suchlike if one has them installed but will also yield a handful of false positives for packages that are in the shells/ hierarchy but that aren't per se shells, such as bash-completion. Further reading shells/ . FreeBSD ports tree . freebsd.org. pkg-query . FreeBSD manual . 2015. freebsd.org. On OpenBSD OpenBSD is like FreeBSD, TrueOS et al. with some differences. One still runs cat /etc/shells to see the list of approved shells, and there is still the difference between approved and installed shells. OpenBSD has an older package manager, though, and a different set of shells that come in the operating system itself. On OpenBSD, the operating system itself comes with the Korn shell (pdksh, specifically) and the C shell (not TENEX C shell) as /bin/sh (a.k.a. /bin/ksh ) and /bin/csh (not /bin/tcsh ) respectively. Again, third party shells that one adds to that list are in the shells/ area of the package hierarchy, and the command to find the installed ones is thus pkg_info -P -A | grep '^shells/' If you have the sqlports package installed, you can also use sqlite3 to make SQL queries against the /usr/local/share/sqlports database to find installed shell packages. Further reading shells/ . OpenBSD ports tree . ports.su. pkg_info . OpenBSD manual . 2016. openbsd.org. On Debian, Ubuntu, et al. Again, the list of approved shells is obtainable with cat /etc/shells and again this is not the same as the list of installed shells. On Debian and Ubuntu, every shell is managed by the package manager. 
There are no shells that "come with the operating system". Again, all shell packages are handily marked. APT (the Advanced Packaging Tool) has the notion of "sections" rather than a hierarchy as the BSD ports/packages worlds have, and shell packages are in the Shells section. There are several tools that can query the package manager's database. I choose aptitude here. One runs aptitude search '~i~sshells' which searches for installed ( ~i ) packages in the section ( ~s ) named shells . This is aptitude 's "shorthand" search syntax. The "true" search syntax would be '?installed ?section(shells)' which is somewhat more to type. Furthermore: you can get aptitude to print out more information about each package with its -F command-line option. Consider aptitude search -F '%p %v %t %d' '~i~sshells' for example. Further reading Shells . packages.debian.org. Daniel Burrows and Manuel A. Fernandez Montecelo (2016). aptitude users' manual . Debian. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165517/"
]
} |
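On Debian and Ubuntu, the same section information is available without aptitude through dpkg-query (a sketch): dpkg-query -W -f='${Section} ${Package} ${Version}\n' | awk '$1 == "shells" {print $2, $3}' lists the installed packages filed in the shells section; as with the BSD hierarchies, a few of them may not themselves be shells.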
319,740 | I am trying to set the eth0 interface to use dhcp to get an ipv4 address, using the command line. I can manually change the ip address using sudo ifconfig eth0 x.x.x.x netmask x.x.x.x Is there a similar command to use to set eth0 to get an address using dhcp? I tried typing: sudo dhclient eth0 however the ip address doesn't change when I type this. The /etc/network/interfaces file was set to iface eth0 inet manual which I then changed to:
auto eth0
iface eth0 inet dhcp
however this doesn't change the eth0 ip address even if the system is rebooted. | If your DHCP server is properly configured to give you an IP address, the command: dhclient eth0 -v should work. The -v option enables verbose log messages, which can be useful. If your eth0 is already up, deconfigure it before asking for a new IP address. To configure the network interfaces based on the interface definitions in the file /etc/network/interfaces you can use the ifup and ifdown commands. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/319740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
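A typical sequence when eth0 already holds a stale address looks like this (a sketch): sudo dhclient -r eth0 releases the current lease, sudo ip addr flush dev eth0 clears any manually assigned address, and sudo dhclient -v eth0 then requests a fresh lease with verbose output.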
319,845 | I hav an older Debian 7 VM for testing. I'm trying to reduce VM footprint size because I am about out of space. I wanted to remove Iceweasel since I don't really use it, and I can usually get by with wget . When I ran Apt it told me it was removing GNOME, too: $ sudo apt-get remove iceweasel*...The following packages were automatically installed and are no longer required: hyphen-en-us libfs6 task-desktop x11-apps x11-session-utils x11-xfs-utils xinit xorgUse 'apt-get autoremove' to remove them.The following extra packages will be installed: icedove iceowl-extensionSuggested packages: apparmor calendar-google-providerThe following packages will be REMOVED: gnome gnome-core iceweasel task-gnome-desktopThe following NEW packages will be installed: icedove iceowl-extension0 upgraded, 2 newly installed, 4 to remove and 0 not upgraded.Need to get 44.7 MB of archives.After this operation, 100 MB of additional disk space will be used.... Why does removing Iceweasel nuke GNOME? After removing Iceweasel and then making an autoclean and autoremove pass, this was presented. I'm fairly certain this VM has been rendered useless. The following packages will be REMOVED: aisleriot ant ant-optional argyll at-spi2-core baobab browser-plugin-gnash ca-certificates-java caribou caribou-antler cheese dconf-tools default-jre default-jre-headless empathy empathy-common espeak-data file-roller finger fonts-cantarell fonts-opensymbol fonts-sil-gentium fonts-sil-gentium-basic gcalctool gdebi gdm3 gedit gedit-common gedit-plugins gir1.2-atspi-2.0 gir1.2-gdata-0.0 gir1.2-gnomekeyring-1.0 gir1.2-goa-1.0 gir1.2-gtop-2.0 gir1.2-gucharmap-2.90 gir1.2-javascriptcoregtk-3.0 gir1.2-rb-3.0 gir1.2-tracker-0.14 gir1.2-webkit-3.0 gir1.2-wnck-3.0 glchess glines gnash gnash-common gnect gnibbles gnobots2 gnome-backgrounds gnome-color-manager gnome-dictionary gnome-disk-utility gnome-documents gnome-font-viewer gnome-games gnome-games-data gnome-games-extra-data gnome-icon-theme-extras gnome-mag gnome-nettool gnome-orca gnome-packagekit gnome-packagekit-data gnome-screenshot gnome-shell-extensions gnome-sudoku gnome-system-log gnome-tweak-tool gnome-video-effects gnomine gnotravex gnotski gnuchess gnuchess-book grilo-plugins-0.1 gtali gucharmap guile-2.0-libs hamster-applet hyphen-en-us iagno icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common inkscape iputils-tracepath java-common libapache-pom-java libatk-adaptor libatk-adaptor-data libatk-bridge2.0-0 libatk-wrapper-java libatk-wrapper-java-jni libatspi1.0-0 libatspi2.0-0 libavahi-gobject0 libavahi-ui-gtk3-0 libblas3gf libboost-program-options1.49.0 libboost-thread1.49.0 libcaribou-gtk-module libcaribou-gtk3-module libcmis-0.2-0 libcolamd2.7.1 libcolorblind0 libcommons-beanutils-java libcommons-collections3-java libcommons-compress-java libcommons-digester-java libcommons-logging-java libcommons-parent-java libdb-java libdb-je-java libdb5.1-java libdb5.1-java-jni libdee-1.0-4 libdiscid0 libdmapsharing-3.0-2 libdotconf1.0 libespeak1 libexttextcat-data libexttextcat0 libfs6 libgail-common libgdict-1.0-6 libgdict-common libgdu-gtk0 libgeocode-glib0 libgexiv2-1 libgnome-mag2 libgpod-common libgpod4 libgraphite2-2.0.0 libgrilo-0.1-0 libgtk-vnc-2.0-0 libgupnp-av-1.0-2 libgupnp-dlna-1.0-2 libgvnc-1.0-0 libhsqldb-java libhyphen0 libicc2 libicu4j-java libimdi0 libjaxp1.3-java libjline-java libjtidy-java liblinear-tools liblinear1 liblouis-data liblouis2 liblucene2-java libmagick++5 libminiupnpc5 libmtp-common libmtp-runtime libmtp9 libmythes-1.2-0 libnatpmp1 
libplot2c2 libpstoedit0c2a libraw5 libregexp-java libreoffice libreoffice-base libreoffice-base-core libreoffice-calc libreoffice-common libreoffice-core libreoffice-draw libreoffice-emailmerge libreoffice-evolution libreoffice-filter-binfilter libreoffice-filter-mobiledev libreoffice-gnome libreoffice-gtk libreoffice-help-en-us libreoffice-impress libreoffice-java-common libreoffice-math libreoffice-report-builder-bin libreoffice-style-galaxy libreoffice-style-tango libreoffice-writer librhythmbox-core6 libsctp1 libservlet2.5-java libsofia-sip-ua-glib3 libsofia-sip-ua0 libsonic0 libspeechd2 libstlport4.6ldbl libsvm-tools libtelepathy-farstream2 libunique-3.0-0 libvisio-0.0-0 libwnck-common libwnck22 libwpd-0.9-9 libwpg-0.2-2 libwps-0.2-2 libxalan2-java libxerces2-java libxml-commons-external-java libxml-commons-resolver1.1-java libxss1 libxz-java lightsoff lksctp-tools lp-solve mahjongg media-player-info minissdpd mobile-broadband-provider-info mythes-en-us network-manager-gnome nmap openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openjdk-7-jre openjdk-7-jre-headless p7zip-full perlmagick pstoedit python-brlapi python-louis python-mako python-markupsafe python-pyatspi python-pyatspi2 python-speechd python-uno python-wnck python-zeitgeist quadrapassel rdesktop rhythmbox rhythmbox-data rhythmbox-plugin-cdrecorder rhythmbox-plugins rygel rygel-playbin rygel-preferences rygel-tracker seahorse shotwell shotwell-common simple-scan sound-juicer sound-theme-freedesktop speech-dispatcher swell-foop task-desktop telepathy-gabble telepathy-idle telepathy-logger telepathy-rakia telepathy-salut transmission-common transmission-gtk ttf-liberation ttf-sil-gentium-basic tzdata-java uno-libs3 unoconv ure vinagre vino x11-apps x11-session-utils x11-xfs-utils xbrlapi xdg-user-dirs-gtk xfonts-mathml xinit xorg xul-ext-adblock-plus zeitgeist-core0 upgraded, 0 newly installed, 278 to remove and 0 not upgraded. | As others have explained, the desktop meta-packages — such as task-desktop or gnome-core — install a web browser nowadays (well, for quite a long time actually). You might expect gnome-core to install Epiphany, or at least allow it as an alternative to Iceweasel, but it doesn't for security reasons . The gnome-core description mentions the browser dependency: These are the core components of the GNOME Desktop environment, an intuitive and attractive desktop. This meta-package depends on a basic set of programs, including a file manager, an image viewer, a web browser, a video player and other tools. It contains the official “core” modules of the GNOME desktop. So the reasons it depends on Iceweasel are two-fold: it's defined as depending on a web browser; the only sensible browser to depend on for the GNOME desktop is Iceweasel, because Epiphany doesn't have enough security support, and Chromium doesn't integrate into the desktop properly. There used to be an alternative dependency on gnome-www-browser , but that was removed in 2011 (without explanation as far as I can tell). It may be worth asking the maintainers to re-introduce it, but it wouldn't help you install gnome-core without a browser. The mechanisms which lead to GNOME being removed if you remove Iceweasel are relatively straightfoward. When you ask apt-get to do something, it tries really hard to do it — so removing a package removes anything which depends on it (after asking you). gnome-core depends on iceweasel , and gnome depends on gnome-core , so apt-get remove iceweasel also removes gnome-core and gnome . 
Removing these meta-packages causes all the packages they depend on to become candidates for removal using autoremove , since the packaging system now considers them to be unnecessary (no package marked as not automatically installed depends on them). The packaging system considers that the user really wants those packages which are marked as explicitly installed, and anything else is only installed to support those packages. So if anything removes gnome or gnome-core , the next time you run apt-get autoremove , it will consider that many of the installed packages are unnecessary... There are a couple of workarounds: if you want to keep gnome-core installed without Iceweasel, use equivs or apt-holepunch (the latter is much easier to use in this case, thanks Joshua !) to build a fake iceweasel package and install that along with gnome-core ; go through all the packages that gnome and gnome-core depend on, decide which of them you want to use and/or need ( e.g. gdm3 , gnome-session , nautilus ...), and mark them using apt-mark manual ... or using aptitude 's GUI (which will be a lot easier). In any case you can't break your VM by removing packages unless you start removing essential packages (and apt-get will loudly complain before letting you do so), or the kernel. You might end up having to log in to a text console, but you can fix things from there just as well as from an X terminal emulator. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
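For the second workaround, the marking step can be done in one command once the wanted packages are identified (a sketch; the package names are examples taken from the answer, not a complete list): sudo apt-mark manual gdm3 gnome-session nautilus — after this, apt-get autoremove no longer treats those packages as automatically installed leftovers.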
319,852 | In dash, functions and variables appear to live in separate namespaces:
fn(){
fn="hello world"
}
fn; echo "The value is $fn!" #prints: The value is hello world!
fn; echo "The value is $fn!" #prints: The value is hello world!
#the fn variable doesn't conflict with the fn function
Is this a dash-specific feature or a POSIX guarantee? | A guarantee : 2.9.5 Function Definition Command A function is a user-defined name that is used as a simple command to call a compound command with new positional parameters. A function is defined with a "function definition command". [...] The function is named fname; the application shall ensure that it is a name (see XBD Name) and that it is not the name of a special built-in utility. An implementation may allow other characters in a function name as an extension. The implementation shall maintain separate name spaces for functions and variables. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/319852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
319,860 | I can't figure out how is implemented programs such as Vim (or top for example) that are executed inside the terminal and has a GUI. It is assumed that the terminal can only display characters, and Vim can not only show multiple windows, also can handle the cursor moving it in all directions. Another example is the linux top utility that shows information in real time that is updated, how is possible that this program can update the information instead of making a scroll down and showing new printed characters?. | vim and gvim may be separate executables, linked with different libraries. It is possible to have one executable doing either interface (elvis and emacs do this for example). vim 4.0 in 1996 added a -g option for telling it to use the GUI version (which in this case would be part of the same executable). elvis - a clone of the ex/vi text editor , uses -G gui option emacs normally uses the X display, but will start in the terminal using the -nw option. What are the differences between the different vim packages available in Ubuntu? I did not find a copy of the announcement for 4.0 (which might have given some clues regarding the motivation for the -g option (vim's announcements mailing list started in 1997 ), but see it mentioned in an old FAQ by Laurent Duperval: 7.3 How can I make Vim faster on a Unix station? The GUI support in Vim 4.0 can slow down the startup time noticeably. Until Vim supports dynamic loading, you can speed up the startup time by compiling two different versions of Vim: one with the GUI and one without the GUI and install both. Make sure you remove the link from $bindir/gvim to $bindir/vim when installing the GUI version, though. If screen updating is your problem, you can run Vim in screen. screen is an ascii terminal multiplexer. The latest version can be found at <URL:ftp://ftp.uni-erlangen.de:/pub/utilities/screen>. My recollection is that for quite a while, there were two executables (when that changed would require quite a lot of research into the actual packages used). But the capability was there starting in 1996. Given either type of interface, there are ways to update the display. For gvim, that uses the X libraries, while terminal applications such as top (or vim ) use escape sequences. Depending on the system, both of these are termcap applications , obtaining their repertoire of escape sequences using the termcap interface of ncurses, etc. (some versions of top actually use ncurses for display, e.g., htop ). vim augments that repertoire using builtin-tables (which often are redundant). Interestingly, the procps version of top in Debian is (a relative rarity) a terminfo application as can be seen by inspecting its source-code . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79363/"
]
} |
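As a minimal illustration of the escape-sequence mechanism described above (assuming a VT100-compatible terminal), a crude top-like display can be produced from the shell: while true; do printf '\033[2J\033[H'; date; uptime; sleep 1; done — \033[2J clears the screen and \033[H homes the cursor, so every iteration repaints in place rather than scrolling, which is essentially what full-screen terminal programs do through their termcap/terminfo layer.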
319,869 | I would like to know every file which is +100Mb and hasn't been accessed in the last month, and I have written successfully: find / -size +100M -atime +30 And now I want to move those files to a folder called /big-not-used changing its name as: file_nameYYYYMMDD where file_name is the original file's name and YYYYMMDD is today's date, in year month day. For example film.mkv goes to /big-not-used/film.mkv20161031 My sentence would be: find / -size +100M -atime +30 -exec mv {} /big-not-used/... \; But I don't know how to append today's date to the file's name. I have found that date +%Y-%m-%d outputs: 2016-10-31 which is useful. Now the doubt is how to get this file's name? Following: https://stackoverflow.com/questions/5456120/how-to-only-get-file-name-with-linux-find ... -exec basename {} \; Maybe?: find / -size +100M -atime +30 -exec mv {} /big-not-used/$(basename {})$(date +%Y-%m-%d) \; But it gives an error because basename is replying with the file's full path instead of its name which I would use: ${var##/*/} to get the file's name, but the question is how do I insert what basename {} replies into the var in the previous expression!? Maybe? $(${$(basename {})##/*/}) But it says syntax error near the unexpected '}'... | The problem with your attempt is that $(basename {}) and $(date ...) are expanded by the shell once, before find even starts, so {} is never substituted inside them. Run a small shell per file instead, so the expansions happen for each match: find / -size +100M -atime +30 -exec sh -c 'mv "$1" "/big-not-used/${1##*/}$(date +%Y%m%d)"' sh {} \; Here find passes each pathname to the inline script as $1 , ${1##*/} strips the leading directories (what you were trying to do with basename ), and $(date +%Y%m%d) appends today's date in the YYYYMMDD format you asked for. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197677/"
]
} |
319,877 | I created a thinpool LV using the following command: lvcreate --type thin-pool -l 100%VG -n lv-thinpool vg-test Now /dev/mapper has the following entries:
vg--test-lv--thinpool
vg--test-lv--thinpool_tdata
vg--test-lv--thinpool_tmeta
Why do the double hyphens appear, and how can I prevent them? | If either the volume group or the logical volume name contains a hyphen, then LVM doubles the hyphen when a device path file is created. Use underscores ( _ ) in lieu of hyphens in VG and LV names to avoid double hyphens in the composite device path name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
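Concretely, recreating the pool with underscore names (a sketch based on the question's own command): lvcreate --type thin-pool -l 100%VG -n lv_thinpool vg_test yields /dev/mapper/vg_test-lv_thinpool , where the single hyphen unambiguously marks the boundary between the VG and LV names, which is exactly why LVM escapes literal hyphens by doubling them.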
319,924 | I have just installed Debian on a HP Envy laptop, with GNOME.
root@Cavalier:/home/jon# cat /etc/issue
Debian GNU/Linux 8 \n \l
root@Cavalier:/home/jon# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.6 (jessie)
Release: 8.6
Codename: jessie
I am connected to the internet using a cable. I would like to connect via wifi. I have a broadcom wifi adapter installed:
root@Cavalier:/home/jon# lspci | grep Wireless
08:00.0 Network controller: Broadcom Corporation BCM4352 802.11ac Wireless Network Adapter (rev 03)
There is an icon in the top right that tells me that the "Wired" status is "Connected". But I can't see anything similar for wireless. I have tried following the instructions here , but after logging out or rebooting, I don't see anything related to wireless in the top right. I have tried running nm-applet, but get an error: root@Cavalier:/home/jon# nm-applet (nm-applet:2663): nm-applet-WARNING **: Failed to initialize D-Bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. What else can I try to get the wifi working? | The BCM4352 needs Broadcom's proprietary wl driver, which the broadcom-sta-dkms package builds against your running kernel:
apt-get install linux-image-$(uname -r|sed 's,[^-]*-[^-]*-,,') linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') broadcom-sta-dkms
modprobe -r b44 b43 b43legacy ssb brcmsmac bcma
modprobe wl
The modprobe -r line unloads the conflicting in-kernel Broadcom drivers before loading wl . See https://wiki.debian.org/wl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/319924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172833/"
]
} |
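Once wl is loaded, the wireless interface should appear (a sketch; the SSID and password are placeholders): ip link should now list a wlan0 (or similar) device, and a connection can be tested from the command line with nmcli device wifi connect MySSID password 'secret' .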
319,979 | Say I create a bridge interface on linux ( br0 ) and add to it some interfaces ( eth0 , tap0 , etc.). My understanding is that this interface act like a virtual switch with all its interfaces/ports that I add to it. What is the meaning of assigning a MAC and an IP address to that interface? Does the interface act as an additional port on the switch/bridge which allows other ports to access the host machine? I have seen some pages talk about assigning an IP address to a bridge. Is the MAC assignation implied (or automatic)? | Because a bridge is an ethernet device it needs a MAC address. A linux bridge can originate things like spanning-tree protocol frames, and traffic like that needs an origin MAC address. A bridge does not require an ip address. There are many situations in which you won't have one. However, in many cases you may have one, such as: When the bridge is acting as the default gateway for a group of containers or virtual machines (or even physical interfaces). In this case it needs an ip address (because routing happens at the IP layer). When your "primary" NIC is a member of the bridge, such that the bridge is your connectivity to the outside world. In this case, rather than assigning an ip address to (for example) eth0 , you would assign it to the bridge device instead. If the bridge is not required for ip routing, then it doesn't need an ip address. Examples of this situation include: When the bridge is being used to create a private network of devices with no external connectivity, or with external connectivity provided through a device other than the bridge. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/319979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36146/"
]
} |
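A minimal iproute2 sketch (interface names and the address are illustrative): ip link add name br0 type bridge ; ip link set eth0 master br0 ; ip link set tap0 master br0 ; ip link set br0 up . Only if the bridge is meant to take part in IP routing do you then add an address: ip addr add 192.168.1.10/24 dev br0 . No MAC has to be assigned by hand: the kernel gives the bridge one automatically when it is created, and it can be overridden with ip link set dev br0 address ... if needed.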
320,033 | I have been trying to cross compile util-linux for arm but I keep ending up with dynamically linked executable files and I don't know why this is. My objective is static. I have cross-compiled different tools before using similar steps and it has always worked, so I don't know what I am doing wrong this time. I am using Ubuntu 16.04. Here are the commands I am running:
export CC=arm-linux-gnueabi-gcc
export ac_cs_linux_vers=4
export CFLAGS=-static
export CPPFLAGS=-static
export LDFLAGS=-static
./configure --host=arm-linux LDFLAGS=-static --disable-shared --without-tinfo --without-ncurses --disable-ipv6 --disable-pylibmount --enable-static-programs=fdisk,sfdisk,whereis --prefix=/opt/util-linux/arm --bindir=/opt/util-linux/arm/bin --sbindir=/opt/util-linux/arm/sbin
As you can see, I specified static at every place I could think of, even repeating stuff "just to make sure it understands me", and after I run the configure script, here is the output:
util-linux 2.28.2
prefix: /opt/util-linux/arm
exec prefix: ${prefix}
localstatedir: ${prefix}/var
bindir: /opt/util-linux/arm/bin
sbindir: /opt/util-linux/arm/sbin
libdir: ${exec_prefix}/lib
includedir: ${prefix}/include
usrbin_execdir: ${exec_prefix}/bin
usrsbin_execdir: ${exec_prefix}/sbin
usrlib_execdir: ${exec_prefix}/lib
compiler: arm-linux-gnueabi-gcc
cflags: -static
suid cflags:
ldflags: -static
suid ldflags:
Python: /usr/bin/python
Python version: 2.7
Python libs: ${exec_prefix}/lib/python2.7/site-packages
Bash completions: /usr/share/bash-completion/completions
Systemd support: no
Btrfs support: yes
warnings:
Then I do: make fdisk or make whereis and once the compilation is done, I do: file fdisk ( fdisk being the file that just got created) and:
fdisk: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=369363ef8f8173a3a1c2edc178eb77255a2dc415, not stripped
As you can see it says "dynamically linked". I have been searching all over the Internet but I failed to find an answer. I also do:
./configure --host=arm-linux LDFLAGS=-static --disable-shared --without-tinfo --without-ncurses --disable-ipv6 --disable-pylibmount --prefix=/opt/util-linux/arm --bindir=/opt/util-linux/arm/bin --sbindir=/opt/util-linux/arm/sbin
Which is exactly the same configure command as the one before except for the missing "--enable-static-programs" parameter, which "should" by default compile everything as static, except that it does not. Am I doing something wrong or is this a Makefile error? | I just figured out why the original commands posted in my question weren't producing static files! I had to run make LDFLAGS="--static" . After I did this, everything linked statically! To repeat, I ran:
export CC=arm-linux-gnueabi-gcc
export ac_cs_linux_vers=4
export CFLAGS=-static
export SUID_CFLAGS=-static
export SUID_LDFLAGS=-static
export CPPFLAGS=-static
export LDFLAGS=-static
then
./configure --host=arm-linux-gnueabi --disable-shared --without-tinfo --without-ncurses --disable-ipv6 --disable-pylibmount --prefix=/opt/util-linux/arm --bindir=/opt/util-linux/arm/bin --sbindir=/opt/util-linux/arm/sbin
and then
make LDFLAGS="--static"
and everything linked statically! No more need for the object-file collection demonstrated in my previous answer, but that too can be used as an alternative.
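To confirm the result, the same file check from the question can be repeated: file fdisk — a statically linked build reports statically linked instead of dynamically linked, interpreter /lib/ld-linux.so.3 .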
Also for your info, here is my version info, as some of you will probably care:
$ arm-linux-gnueabi-gcc --version
arm-linux-gnueabi-gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.1) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ arm-linux-androideabi-ld --version
GNU gold (GNU Binutils 2.25.90.20151125) 1.11
Copyright (C) 2015 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) a later version.
This program has absolutely no warranty. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171432/"
]
} |
320,036 | I was running mc and tried to install curl and postfix in the shell: apt-get install curl postfix . A configuration dialog for postfix appeared and after pressing Ctrl-o (Toggle show/hide MC) two times I couldn't read the configuration message anymore so I quit MC with F10. The apt-get process still existed and I killed it with sudo kill xxxx .Now apt-get doesn't work. I've tried apt-get -f install X@X ~ % LC_ALL=C sudo apt-get -f install Reading package lists... DoneBuilding dependency tree Reading state information... DoneCorrecting dependencies... DoneThe following extra packages will be installed: exim4-config lsb-invalid-mtaThe following packages will be REMOVED: postfixThe following NEW packages will be installed: exim4-config lsb-invalid-mta0 upgraded, 2 newly installed, 1 to remove and 3 not upgraded.1 not fully installed or removed.Need to get 0 B/523 kB of archives.After this operation, 2462 kB disk space will be freed.Do you want to continue? [Y/n] Preconfiguring packages ...dpkg: postfix: dependency problems, but removing anyway as you requested: lsb-core depends on lsb-invalid-mta (>= 4.1+Debian13+nmu1) | mail-transport-agent; however: Package lsb-invalid-mta is not installed. Package mail-transport-agent is not installed. Package postfix which provides mail-transport-agent is to be removed. Package exim4-daemon-light which provides mail-transport-agent is not installed. bsd-mailx depends on default-mta | mail-transport-agent; however: Package default-mta is not installed. Package exim4-daemon-light which provides default-mta is not installed. Package mail-transport-agent is not installed. Package postfix which provides mail-transport-agent is to be removed. Package exim4-daemon-light which provides mail-transport-agent is not installed.(Reading database ... 421170 files and directories currently installed.)Removing postfix (2.11.3-1) ...dpkg: error processing package postfix (--remove): subprocess installed pre-removal script returned error exit status 102Errors were encountered while processing: postfixE: Sub-process /usr/bin/dpkg returned an error code (1) and other solutions presented in various questions ( Q1 , Q2 , Q3 ): apt-get purge postfix , apt-get --reinstall install postfix , dpkg --pending --configure . A really similar question (duplicate?) is Error in postfix , but it has no answer. How can I fix the package manager? My OS is Debian Jessie. EDIT 1 Bahamut's suggestions fails with error code 102: X@X ~ % LC_ALL=C sudo dpkg --install /var/cache/apt/archives/postfix_2.11.3-1_amd64.debSelecting previously unselected package postfix.(Reading database ... 421172 files and directories currently installed.)Preparing to unpack .../postfix_2.11.3-1_amd64.deb ...dpkg: warning: subprocess old pre-removal script returned error exit status 102dpkg: trying script from the new package instead ...dpkg: ... it looks like that went OKUnpacking postfix (2.11.3-1) over (2.11.3-1) ...Setting up postfix (2.11.3-1) ...insserv: script postfix is not an executable regular file, skipped!Postfix configuration was not changed. If you need to make changes, edit/etc/postfix/main.cf (and others) as needed. 
To view Postfix configuration values, see postconf(1).
After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
Running newaliases
dpkg: error processing package postfix (--install): subprocess installed post-installation script returned error exit status 102
Processing triggers for systemd (215-17+deb8u5) ...
Processing triggers for ufw (0.33-2) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for libc-bin (2.19-18+deb8u6) ...
Errors were encountered while processing: postfix | Manually removing the package's dpkg control files worked:
rm /var/lib/dpkg/info/postfix.*
apt-get purge postfix | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176445/"
]
} |
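A slightly more surgical variant of the same fix removes only the failing maintainer scripts instead of every control file (a sketch): rm /var/lib/dpkg/info/postfix.prerm /var/lib/dpkg/info/postfix.postinst followed by apt-get purge postfix — dpkg treats missing maintainer scripts as successful no-ops, so the removal can then complete.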
320,070 | I tried gpick, gcolor2, gcolor3, pick , pychrom and none of them seem to work with Wayland. I am running Arch Linux 64-bit with GNOME 3.22.1 through XWayland (default since 3.22.x). I don't want to change into an X session just to pick a colour. Anyone had success with this? | grim is a screenshot tool for Wayland that is able to do that: grim -g "$(slurp -p)" -t ppm - | convert - -format '%[pixel:p{0,0}]' txt:- After selecting a point on the screen with the mouse, it produces output like this:
# ImageMagick pixel enumeration: 1,1,65535,srgb
0,0: (40,85,119) #285577 srgb(40,85,119) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150010/"
]
} |
320,103 | Can someone please explain to me what the difference is between creating an mdadm array using partitions or the whole disks directly? Supposing I intend to use the whole drives. Imagine a RAID6 created in two ways, either: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 or: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd What is the difference, and what possible problems arise from either of the two variants? For example, I mean the reliability or manageability or recovery operations on such arrays, etc. | The most important difference is that using partitions increases your flexibility for disk replacement. It is detailed below along with a number of other recommendations. One should consider using a partition instead of the entire disk. This falls under the general recommendations for setting up an array and may certainly spare you some headaches in the future when further disk replacements become necessary. The most important argument is: disks from different manufacturers (or even different models of the "same" capacity from the same manufacturer) don't necessarily have exactly the same disk size, and even the smallest size difference will prevent you from replacing a failed disk with a newer one if the second is smaller than the first. Partitioning allows you to work around this. Side note on why to use disks from different manufacturers: disks will fail, and this is not a matter of "if" but "when". Disks of the same manufacturer and the same model have similar properties, and so higher chances of failing together under the same conditions and time of use. The suggestion is therefore to use disks from different manufacturers and different models and, in particular, ones that do not belong to the same batch (consider buying from different stores if you are buying disks of the same manufacturer and model). It is not uncommon for a second disk failure to happen during a restore after a disk replacement when disks of the same batch are used. You certainly don't want this to happen to you. So the recommendations:
1) Partition the disks that will be used with a slightly smaller capacity than the overall disk space (e.g., I have a RAID5 array of 2TB disks and I intentionally partitioned them, wasting about 100MB on each). Then, use /dev/sd?1 of each one for composing the array; this adds a safety margin in case a replacement disk has less space than the original ones used to assemble the array when it was created;
2) Use disks from different manufacturers;
3) Use disks of different models if different manufacturers are not an option for you;
4) Use disks from different batches;
5) Proactively replace disks before they fail, and not all at the same time. This may be a little paranoid and really depends on the criticality of the data you have. I keep disks that have about 6 months' difference in age from each other;
6) Make regular backups (always, regardless of whether you use an array or not). RAID doesn't serve the same purpose as backups. Arrays give you high availability; backups allow you to restore lost files (including ones that get accidentally deleted or are damaged by viruses, some examples of things that using arrays will not protect you from).
OBS: Apart from the non-negligible rationale above, there aren't many further technical differences between using /dev/sd? vs /dev/sd?#. Good luck | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/320103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
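A sketch of recommendation 1 with GNU parted (the device name and margin are illustrative): parted -s /dev/sda mklabel gpt mkpart md0 1MiB -100MiB creates a single partition that stops 100MiB short of the end of the disk; repeat for each member and then build the array from /dev/sd?1 exactly as in the question's first command.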
320,145 | Piping in some JSON, I want to be able to use a wildcard in the test used with select() : curl example.com/json | jq 'select(.[].properties.type == "dev*")' I was hoping it would print out anything with a type that starts with dev , for example development , devel , devil , but it doesn't. Is it possible to use a wildcard with select() in jq ? | You might consider the startswith() function. Using your example: curl example.com/json | jq '.[].properties | select(.type | startswith("dev"))' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320145",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197940/"
]
} |
320,154 | I have a file as the following 200.000 1.353 0.086 200.250 1.417 0.000 200.500 1.359 0.091 200.750 1.423 0.000 201.000 1.365 0.093 201.250 1.427 0.000 201.500 1.373 0.093 201.750 1.432 0.000 202.000 1.383 0.091 202.250 1.435 0.000 202.500 1.392 0.087 202.750 1.436 0.000 203.000 1.402 0.081 203.250 1.437 0.001 203.500 1.412 0.073 204.000 1.423 0.065 204.500 1.432 0.055 205.000 1.441 0.045 I would like to grep only the rows that have in the first column the decimal .000 and .500 only so the output would be like this 200.000 1.353 0.086 200.500 1.359 0.091 201.000 1.365 0.093 201.500 1.373 0.093 202.000 1.383 0.091 202.500 1.392 0.087 203.000 1.402 0.081 203.500 1.412 0.073 204.000 1.423 0.065 204.500 1.432 0.055 205.000 1.441 0.045 | Don't use grep for this; use awk : awk '$1 ~ /\.[05]00$/' file The $ anchors the pattern to the end of the first field, so only rows whose first column ends in .000 or .500 are printed.
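If you really do want grep, an equivalent for data shaped like this is: grep -E '^[0-9]+\.[05]00 ' file (the trailing space in the pattern stops the match at the end of the first column). | {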
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112129/"
]
} |
320,201 | I'm trying to extend partition /dev/sda5 which is logical partition under extended partition /dev/sda2. I want to use fdisk . Procedure should be to delete both partitions and then to recreate them with exact same starting sectors (1001470 & 1001472). It goes well until creating logical partition where minimum starting sector is bigger ( 1003518 ) than it needs to be. $ sudo fdisk /dev/sdaCommand (m for help): pDisk /dev/sda: 9.8 GiB, 10485760000 bytes, 20480000 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x0cd7105fDevice Boot Start End Sectors Size Id Type/dev/sda1 * 2048 999423 997376 487M 83 Linux/dev/sda2 1001470 16775167 15773698 7.5G 5 Extended/dev/sda5 1001472 16775167 15773696 7.5G 83 LinuxPartition 5 has been deleted.Partition 2 has been deleted.Command (m for help): nPartition type p primary (1 primary, 0 extended, 3 free) e extended (container for logical partitions)Select (default p): e Partition number (2-4, default 2):First sector (999424-20479999, default 999424): 1001470 Last sector, +sectors or +size{K,M,G,T,P} (1001470-20479999, default 20479999 ):Created a new partition 2 of type 'Extended' and of size 9.3 GiB.Command (m for help): nAll space for primary partitions is in use.Adding logical partition 5First sector (1003518-20479999, default 1003520 ): 1001472 Value out of range. I have done it with parted , but it should be possible with fdisk somehow. $ fdisk -Vfdisk from util-linux 2.27.1 | In the normal interface, Linux's fdisk applies alignment constraints to partitions. Which constraints depends on the version of fdisk. Older versions defaulted to cylinder alignment, for compatibility with older operating systems that were incompatible with LBA . When LBA was a little over two decades old, fdisk stopped catering for such ancient systems by default, and instead switched to 1MB alignment, which gives better performance on modern storage media. In current versions of fdisk, to create partitions with any sector (512B) alignment, you need to first create the partition with the desired end point, then go to the expert menu ( x ) and use the command b to adjust the beginning of the partition (this changes the partition size, not where it ends). It does seem rather clumsy. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32129/"
]
} |
320,216 | I have to merge hundreds of .txt files side by side. I've been trying to use some of the already answered questions in the forums but while the files do merge, the second, and third (so on) files shift one line down every time. I would like for them to stay aligned, all of the files have the same number of rows (if not characters in each row). My files are separated by commas, and my ultimate goal is to get them all to excel for data processing. my files are 591.txt CT Analyser, Version: 1.9.3.2 Date and time,25.07.2014 09:56 Operator identity,svy557 Computer name,UT156805 Computation time,00:08:24Dataset,591_right__rec_tra_voi Location,D:\Pam Mandible Copy\591\Right\Region1\ 583.txt CT Analyser, Version: 1.9.3.2Date and time,31.07.2014 15:14Operator identity,svy557Computer name,UT156805Computation time,00:10:04Dataset,583_left__rec_traLocation,D:\Pam Mandible Copy\583 Left\Reoriented\ I have tried something like the following: paste 591.txt 593.txt | column -s $'\t' -t it merges like this (the second file one line below, instead of lines next to each other): CT Analyser, Version: 1.9.3.2 CT Analyser, Version: 1.9.3.2Date and time,25.07.2014 09:56 Date and time,25.07.2014 09:55Operator identity,svy557 Operator identity,svy557Computer name,UT156805 Computer name,UT156805Computation time,00:08:24 Computation time,00:08:13Dataset,591_right__rec_tra_voi Dataset,583_right__rec_tra_voiLocation,D:\Pam Mandible Copy\591 Right\Region1\ Location,D:\Pam Mandible Copy\583 Right\Region1\ This has been driving me crazy for a few days and any help would be greatly appreciated, I'm pretty new with UNIX so I'm trying to learn enough to do this and then a few other projects that require similar skills. The actual files have about 50 rows and all of them look like that, if I try to do more than one file with something like this: paste -d '\n' *.txt > new.txt The results become unpredictable CT Analyser, Version: 1.9.3.2CT Analyser, Version: 1.9.3.2CT Analyser, Version: 1.9.3.2CT Analyser, Version: 1.9.3.2CT Analyser, Version: 1.9.3.2 CT Analyser, Version: 1.9.3.2Date and time,25.07.2014 09:55Date and time,25.07.2014 09:55Date and time,25.07.2014 09:56Date and time,25.07.2014 09:56Date and time,25.07.2014 09:56 Date and time,25.07.2014 09:55Operator identity,svy557Operator identity,svy557Operator identity,svy557Operator identity,svy557Operator identity,svy557 Operator identity,svy557Computer name,UT156805Computer name,UT156805Computer name,UT156805Computer name,UT156805Computer name,UT156805 Computer name,UT156805Computation time,00:08:13Computation time,00:08:13Computation time,00:08:24Computation time,00:08:24Computation time,00:08:24 Computation time,00:08:13Dataset,583_right__rec_tra_voiDataset,583_right__rec_tra_voiDataset,591_right__rec_tra_voiDataset,591_right__rec_tra_voiDataset,591_right__rec_tra_voi Dataset,583_right__rec_tra_voiLocation,D:\Pam Mandible Copy\583 Right\Region1\Location,D:\Pam Mandible Copy\583 Right\Region1\Location,D:\Pam Mandible Copy\591 Right\Region1\Location,D:\Pam Mandible Copy\591 Right\Region1\Location,D:\Pam Mandible Copy\591 Right\Region1\ Location,D:\Pam Mandible Copy\583 Right\Region1\ Thanks again for all the help | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197980/"
]
} |
320,217 | I have a payroll file Can you help me to calculate the value for each row using awk command, In each value on line -10 ? I can only calculate the first line with this command : awk '{sum += $3*7} END {print sum}' RS= payroll.txt | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197776/"
]
} |
320,233 | Is it possible to give the functionality of the directory that is typically assigned to /home (in distributions like Debian and Ubuntu) to another directory instead (entirely)? For example, if you could do this and you changed it to /xyz then all your new human-owned user directories would be installed under /xyz (e.g. so Sally's Desktop path would be /xyz/Sally/Desktop instead of /home/Sally/Desktop and /home wouldn't need to exist). I'm working on a portable program that saves paths and loads them. If it's used on a different computer with a home directory that isn't at /home (and consequently there is no /home , but rather another path with its functionality), then I'll want it to adjust the path to have the proper home directory location in it for the new computer when the path is loaded instead. | Home directories do not need to be placed in /home and your program is erroneous if it is hardwiring any such assumptions as that all home directories share a common parent or that that parent is named /home . /home is not even a universal convention. /home was an idea conceived a fair while after Unix was invented. In early Unices other directories were used. This can still be seen today on operating systems like FreeBSD (and its derivatives) where /home is a symbolic link and user directories actually live under /usr/home . Solaris likewise places "real" home directories in /export/home so that /home can be full of automatic NFS mounts and suchlike. /home is not the parent of many common home directories. There are plenty of home directories that don't live in /home . The most obvious one is /root , the home directory for the superuser, moved from its older location at / so that root's personal and "dot" files do not clutter the root directory, but kept on the root volume so that the superuser can log in even when mounting other disc volumes is failing. Various dæmon softwares have home directories in other places, for the dedicated accounts that those dæmons run as. qmail's various dæmon accounts use /var/qmail for example, or /var/qmail/alias . The latter is even commonly addressed as ~alias and is designed to be a home directory, with ~alias/.qmail files as in other (real) users' home directories. Various HTTP(S) and FTP(s) server softwares have (official or unofficial) conventions. For example: home directories for virtual hosts that have dedicated system accounts can be /var/www or /var/www/$VHOST . Other softwares can be found on various operating systems using home directories for non-personal user accounts such as /var/unbound , /var/db/mysql , and /var/db/tor . Various conventional non-personal user accounts have home directories such as /sbin , /var/adm , /var/spool/lpd , /var/spool/mail , /var/spool/news , /var/spool/uucp , and so forth. On OpenBSD the system operator account has the home directory /operator and various non-personal user accounts have /var/empty as their home directories. Home directories do not have to remain in /home . Home directories can be moved after account creation by using the -d ( --home ) and -m ( --move-home ) options to the usermod command on Linux operating systems. OpenBSD's usermod has the same options. (Don't do the same with the pw usermod command on FreeBSD, TrueOS/PC-BSD, et al.. The -m -d combination there has a subtly different meaning.) Home directories do not have to be created in /home . Even the conventional parent directory used when creating accounts can be changed, and isn't necessarily /home . 
On Linux operating systems and OpenBSD the useradd command's -b ( --base-dir ) option specifies the parent in which home directories are created if not explicitly named with -d ( --home ). The default base directory is the base_dir variable in /etc/usermgmt.conf on OpenBSD, and the HOME variable in /etc/default/useradd on many Linuxes. A system administrator can change this at whim. On FreeBSD, TrueOS/PC-BSD, et al. there's a similar -b option to the pw useradd command and a default for that modifiable via the home variable in /etc/pw.conf . Coping with this Your program should not hardwire any expectation at all about the locations of home directories or their parents. If you want to know the currently logged-in user's home directory, use the HOME environment variable. It's set up by programs such as login , userenv or systemd when the logged-in account is switched to. If there is no HOME environment variable, it's a valid design choice to just abort, on the grounds that the login session environment variables need to be present for your program to run. Otherwise you can fall back on obtaining the process' effective/real (as appropriate) UID and querying the password database. If you want to know a specific user's home directory, query the password database with the getpwnam() / getpwnam_r() or getpwuid() / getpwuid_r() library functions and pull out the pw_dir field. (Note that this field can be NULL or can point to a zero-length string .) If you want to symbolically denote the home directory for a user in a way that is independent of its actual location, you can adopt the convention of a shell-like tilde expansion: ~JdeBP . Many programs do this, from vim to mailx . Further reading Difference between "/export/home" and "/home" Jonathan de Boyne Pollard. userenv . nosh toolset manual pages. Jonathan de Boyne Pollard (2016). " False statements about dæmon environments ". Errata for systemd doco . Frequently Given Answers.
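For example, on systems with getent you can pull a given account's home directory out of the password database from a shell script ( alice being a hypothetical account name): getent passwd alice | cut -d: -f6 and this works no matter where that directory actually lives. | {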
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82698/"
]
} |
320,240 | I'm using an Ubuntu 14.04.4 LTS on my server. Is there a way to zip everything on the server in a single archive, preferably with a single command? I've tried using zip -r backup.zip but that doesn't work, since I'm not providing it with what I would like to zip. I have also tried using zip -r backup.zip *.* but that only zips the files, and not the directories. I am aware that I could go with zip -r var.zip varzip -r root.zip rootzip -r media.zip mediaetc Since that would be time consuming, I'm looking for an easier solution, if it exists. EDIT Use of 3rd party software, like Clonezilla, is not allowed. I have to find a command line solution. | You'd be better off creating a compressed tarball archive. A typical command for doing this for an entire system is... tar -cvpzf /backup.tar.gz --exclude=/backup.tar.gz --one-file-system / Here -c creates the archive, -v is verbose, -p preserves permissions, -z compresses with gzip and -f names the output file; --exclude stops the archive from trying to include itself, and --one-file-system keeps other mounted filesystems (such as /proc ) out of the backup. See https://help.ubuntu.com/community/BackupYourSystem/TAR
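To restore later, a sketch (assuming you boot a rescue system and mount the target root at /mnt/target ) would be: tar -xvpzf backup.tar.gz -C /mnt/target --numeric-owner | {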
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197995/"
]
} |
320,270 | I have a script which is always giving output where I have it redirected to a file, what I'm trying to do is to rotate the redirected file, below is where I start the script and redirect it: .somecode. su - username -c "command >> /path/to/directory/output.txt" &..code continues.. and below is the crontab I'm trying to create: cd /path/to/directory/timestamp=`date "+%Y%m%d"`mv ./output.txt ./logs/output.txt_$timestamptouch output.txtchmod 757 ./output.txtgzip ./logs/output.txt_$timestampfind ./logs/output.txt* -type f -mtime +2 | xargs rm or this one also failed to do the job: timestamp=`date "+%Y%m%d"`cp ./output.txt ./logs/output.txt_$timestampecho "" > ./output.txtgzip ./logs/output.txt_$timestamp in the first code the original script fails and is no longer working, and in the second script it doesn't clean the output.txt file. Is there a way to do it while keeping the script working ?Note; I'm running Unix Solaris. Thanks in advance. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187503/"
]
} |
320,332 | I'm trying to obtain a sequence in ternary values organised like this : 0 0 0 0 1 0 0 0 0 2 0 0 0 1 0 0 0 0 1 1 ......... ......... 2 2 2 2 2 for that I use : for i in `seq 001 242` do echo 'obase=3; '$i'' | bc | sed 's/\(.\{1\}\)/\1 /g' done but I obtain 1 2 1 0 1 1 .... 2 2 2 2 2 How can I force echo of the missing 0 in the result, as the zeros are important for use as another script's parameters ? | Use printf to format the numbers: for i in $( seq 1 242 ) ; do printf '%05d\n' $( bc <<< 'obase=3; '$i )done | sed 's/\(.\)/\1 /g' Also, no need to put an empty string after $i , and no need to quantify {1} in the regex. It might be faster to use brace expansion in zsh, ksh93, bash or yash -o braceexpand : printf '%s\n' {0..2}\ {0..2}\ {0..2}\ {0..2}\ {0..2} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198089/"
]
} |
320,373 | I would like to remap the keys on my number pad so that they behave differently depending on how long the key is pressed. Here's an example: If I hold the Numpad 9 key down for less than 300ms it will send the "previous tab" key command Ctrl + Tab If I hold the Numpad 9 key down for 300-599ms it will send the "new tab" key command Ctrl + T If I hold the Numpad 9 key down for 600-899ms it will send the "close tab/window" key command Ctrl + W If I hold the Numpad 9 key down for more than 899ms, it does nothing in case I missed the time window I wanted. On Windows I could do this with AutoHotKey and on OS X I could do this with ControllerMate, but I cannot find a tool on UNIX/Linux that allows key remapping based on how long a key is held. If you are aware of a tool that can solve my problem, please make sure to provide a script or code sample that demonstrates the conditional key hold duration behavior I described above. It doesn't need to be the full code to solve my example, but it should be enough for me to repurpose it for my example. | I just wrote this in C : #include <stdio.h>#include <curses.h>#include <time.h> //time(0)#include <sys/time.h> // gettimeofday()#include <stdlib.h>void waitFor (unsigned int secs) { //credit: http://stackoverflow.com/a/3930477/1074998 unsigned int retTime = time(0) + secs; // Get finishing time. while (time(0) < retTime); // Loop until it arrives.}intmain(void) { struct timeval t0, t1, t2, t3; double elapsedTime; clock_t elapsed_t = 0; int c = 0x35; initscr(); cbreak(); noecho(); keypad(stdscr, TRUE); halfdelay(5); //increae the number if not working //adjust below `if (elapsedTime <= 0.n)` if this changed printf("\nSTART again\n"); elapsed_t = 0; gettimeofday(&t0, NULL); float diff; int first = 1; int atleast_one = 0; while( getch() == c) { //while repeating same char, else(ffff ffff in my system) break int atleast_one = 1; if (first == 1) { gettimeofday(&t1, NULL); first = 0; } //printf("DEBUG 1 %x!\n", c); gettimeofday(&t2, NULL); elapsedTime = (t2.tv_sec - t1.tv_sec) + ((t2.tv_usec - t1.tv_usec)/1000000.0); if (elapsedTime > 1) { //hit max time printf("Hit Max, quit now. %f\n", elapsedTime); system("gnome-terminal"); //waitFor(4); int cdd; while ((cdd = getch()) != '\n' && cdd != EOF); endwin(); exit(0); } if(halfdelay(1) == ERR) { //increae the number if not working //printf("DEBUG 2\n"); //waitFor(4); break; } else { //printf("DEBUG 3\n"); } } if (atleast_one == 0) { //gettimeofday(&t1, NULL); t1 = t0; } gettimeofday(&t3, NULL); elapsedTime = (t3.tv_sec - t1.tv_sec) + ((t3.tv_usec - t1.tv_usec)/1000000.0); printf("Normal quit %f\n", elapsedTime); if (elapsedTime > 0.6) { //this number based on halfdelay above system("gedit &"); //system("xdotool key shift+left &"); //system("mplayer -vo caca -quiet 'video.mp4' &"); //waitFor(4); } else if (elapsedTime <= 0.6) { system("xdotool key ctrl+shift+t &"); //waitFor(4); } int cdd; while ( (cdd = getch() ) != '\n' && cdd != EOF); endwin(); return 0; } Use showkey -a to get the bind keycode: xb@dnxb:/tmp$ sudo showkey -aPress any keys - Ctrl-D will terminate this program^[[24~ 27 0033 0x1b #pressed F12 91 0133 0x5b 50 0062 0x32 52 0064 0x34 126 0176 0x7e5 53 0065 0x35 #pressed Numpad 5, 5 is the keycode used in `bind`^C 3 0003 0x03^D 4 0004 0x04xb@dnxb:/tmp$ Put the bind keycode 5 and its command(e.g. 
run /tmp/.a.out ) in ~/.bashrc: bind '"5":"/tmp/a.out\n"' Note that relevant keycode need to change in the source code too (the hex value can get from sudo showkey -a above too): int c = 0x35; Compile with (output to /tmp/a.out in my example): cc filename.c -lcurses Demonstration: Numpad 5, short press open new tab, medium press open gedit, and long press open gnome-terminal. This is not direct applicable in any window on gnome desktop manager, but i think it should give you some idea how (hard) to implement it. It work in Virtual Console(Ctrl+Alt+N) too, and work in some terminal emulator (e.g. konsole, gnome-terminal, xterm). p/s: I'm not a c programmer, so forgive me if this code is not optimized. [UPDATE] The previous answer only work in shell and required focus, so i think parse the /dev/input/eventX is the solution to work in entire X session. I don't want to reinvent the wheel. I play around with evtest utility and modified the bottom part of evtest.c with my own code: int onHold = 0;struct timeval t0;double elapsedTime;int hitMax = 0;while (1) { rd = read(fd, ev, sizeof(struct input_event) * 64); if (rd < (int) sizeof(struct input_event)) { perror("\nevtest: error reading"); return 1; } system("echo 'running' >/tmp/l_is_running 2>/tmp/l_isrunning_E &"); for (i = 0; i < rd / sizeof(struct input_event); i++) { //system("date >/tmp/l_date 2>/tmp/l_dateE &"); if (ev[i].type == EV_KEY) { if ( (ev[i].code == 76) ) { if (!onHold) { onHold = 1; t0 = ev[i].time; hitMax = 0; } if (!hitMax) { //to avoid hitMax still do the time checking instruction, you can remove hitMax checking if you think it's overkill, but still hitMax itself is necessary to avoid every (max) 2 seconds will repeatly system(); elapsedTime = (ev[i].time.tv_sec - t0.tv_sec) + ((ev[i].time.tv_usec - t0.tv_usec)/1000000.0); printf("elapsedTime: %f\n", elapsedTime); if (elapsedTime > 2) { hitMax = 1; printf("perform max time action\n"); system("su - xiaobai -c 'export DISPLAY=:0; gedit &'"); } } if (ev[i].value == 0) { printf("reseted ...... %d\n", ev[i].value); onHold = 0; if (!hitMax) { if (elapsedTime > 1) { //just ensure lower than max 2 seconds system("su - xiaobai -c 'export DISPLAY=:0; gnome-terminal &'"); } else if (elapsedTime > 0.5) { system("su - xiaobai -c \"export DISPLAY=:0; vlc '/home/xiaobai/Downloads/videos/test/Pokémon Red_Blue_Yellow Gym Leader Battle Theme Remix-CbJTkx7QUJU.mp4' &\""); } else if (elapsedTime > 0.2) { system("su - xiaobai -c 'export DISPLAY=:0; nautilus &'"); } } else { //else's max system() already perform hitMax = 0; } } } } }} Note that you should change the username ( xiaobai is my username) part. And also the if ( (ev[i].code == 76) ) { is my Numpad 5 keycode, you might need to manually print the ev[i].code to double confirm. And of course you should change the video path too :) Compile and test it directly with (the `` part is in order to get the correct /dev/input/eventN ): $ gcc /home/put_your_path/my_long_press.c -o /home/put_your_path/my_long_press; sudo /home/put_your_path/my_long_press `ls -la /dev/input/by-path/* | grep kbd | echo "/dev/input/""$(awk -F'/' '{print $NF}')" ` & Note that /by-id/ doesn't work in Fedora 24, so i change it to /by-path/. Kali no such problem. 
My desktop manager is gdm3: $ cat /etc/X11/default-display-manager /usr/sbin/gdm3 So, i put this line in /etc/gdm3/PostLogin/Default to run this command as root on gdm startup ( /etc/X11/Xsession.d/* doesn't work): /home/put_your_path/my_long_press `ls -la /dev/input/by-id/* | grep kbd | echo "/dev/input/""$(awk -F'/' '{print $NF}')" 2>/tmp/l_gdm` 2>/tmp/l_gdmE & For unknown reason / etc/gdm/PostLogin/Default doesn't work on Fedora 24' gdm which give me " Permission denied " when check /tmp/l_gdmE log. Manually run no problem though. Demonstration: Numpad 5, instant-press (<=0.2 second) will be ignored, short-press (0.2 to 0.5 second) open nautilus , medium-press (0.5 to 1 second) open vlc to play video, long-press (1 to 2 seconds) open gnome-terminal , and timeout-press (2 seconds) open gedit . I uploaded the full code(only one file) here . [UPDATE again] [1] Added multiple keys flow and fixed notify-send failed by define DBUS_SESSION_BUS_ADDRESS . [2] Added XDG_CURRENT_DESKTOP and GNOME_DESKTOP_SESSION_ID to ensure konsole use gnome theme gui (Change it if you're not using gnome). I updated my code here . Note that this code doesn't handle for combination keys flow, e.g. Ctrl + t . UPDATE: There's multiple device interfaces which the /dev/input/by-path/XXX-eventN entries sequence is random. So I change the command in /etc/gdm3/PostLogin/Default as below ( Chesen is my keyboard name, for your case, you should changed it to grep Razer instead): /your_path/my_long_press "$(cat /proc/bus/input/devices | grep -i Chesen -A 4 | grep -P '^(?=.*sysrq)(?=.*leds)' | tr ' ' '\n' | ls /dev/input/`grep event`)" 2>/tmp/l_gdmE & You can try the eventN extract from cat /proc/bus/input/devices | grep -i Razer -A 4 : $ cat /proc/bus/input/devices | grep -i Razer -A 4N: Name="Razer Razer Naga Chroma"P: Phys=usb-0000:00:14.0-1.3/input0S: Sysfs=/devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3:1.0/0003:1532:0053.0003/input/input6U: Uniq=H: Handlers=mouse2 event5 --N: Name="Razer Razer Naga Chroma"P: Phys=usb-0000:00:14.0-1.3/input1S: Sysfs=/devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3:1.1/0003:1532:0053.0004/input/input7U: Uniq=H: Handlers=sysrq kbd event6 --N: Name="Razer Razer Naga Chroma"P: Phys=usb-0000:00:14.0-1.3/input2S: Sysfs=/devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3:1.2/0003:1532:0053.0005/input/input8U: Uniq=H: Handlers=sysrq kbd leds event7 $ In this example above, only sudo cat /dev/input/event7 will print bizarre output when click the 12 digits on Razer mouse, which has the pattern "sysrq kbd leds event7" to use in grep -P '^(?=.*sysrq)(?=.*leds)' above (your pattern might vary). sudo cat /dev/input/event6 will print bizarre output only when click the middle up/down key. While sudo cat /dev/input/event5 will print bizarre output when move your mouse and scrolling the wheel. [Update: Support Replug keyboard cable to reload the program] The following should be self-explanation: $ lsusb #to know my keyboard is idVendor 0a81 and idProduct 0101...Bus 001 Device 003: ID 0a81:0101 Chesen Electronics Corp. 
Keyboard$ cat /etc/udev/rules.d/52-hole-keyboard.rules #add this line with your idVendor and idProduct above in custom udev rules fileACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0a81", ATTR{idProduct}=="0101", MODE="0666", GROUP="plugdev", RUN+="/bin/bash -c 'echo 1 > /tmp/chesen_plugged'"$ cat /usr/local/bin/inotifyChesenPlugged #A long run listener script to listen for modification of /tmp/chesen_plugged #Ensures `inotifywait` has been installed first.touch /tmp/chesen_pluggedwhile inotifywait -q -e modify /tmp/chesen_plugged >/dev/null; do killall -9 my_long_press /usr/local/bin/startLongPress &done$ cat /usr/local/bin/startLongPress #the executable script run the long press executable #Change with your pattern as explained above.#!/bin/bash<YOUR_DIR>/my_long_press "$(cat /proc/bus/input/devices | grep -i Chesen -A 4 | grep -P '^(?=.*sysrq)(?=.*leds)' | tr ' ' '\n' | ls /dev/input/`grep event`)" 2>/tmp/l_gdmE) & disown$ cat /etc/gdm3/PostLogin/Default #the executable startup script run listener and long press script/usr/local/bin/inotifyChesenPlugged &/usr/local/bin/startLongPress & | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198114/"
]
} |
320,387 | Is it possible to change the screen resolution after fully booting the Arch Linux installation medium? I tried to add vga=795 to the boot line , and that worked for a few lines before the resolution was changed back to an unreadable 4K. And the various suggestions in the wiki seem to assume that I have already set up networking and can install packages. With the help of a guide to the default console fonts I was able to set the largest available using setfont /usr/share/consolefonts/iso01-12x22.psfu.gz , but that is still barely readable. | In addition to changing the font size to a larger one, with setfont , you can also pass kernel parameters on the boot loader line. The ISO uses Syslinux , so you can hit Tab when the menu appears and append these parameters to the kernel line . Two that would be most useful are: nomodeset to disable KMS video=1024x768 to force a specific resolution if KMS is required. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
320,390 | Is there a command line utility that checks if a specified mailserver is on some well-known blacklist ? I know amispammer but it is only available on Debian, it seems to be unmaintained and last time I checked it was very memory hungry. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
320,400 | I have multiple systemd services that require a generated EnvironmentFile. I have a shell script which generates this Environment file, but since I need that environment file before any Exec... commands execute, I cannot use ExecStartPre=generate_env_file.sh . Therefore, I have another service (generate_env_file.service) set to run that script as a oneshot: [Service]Type=oneshotExecStartPre=/usr/bin/touch /path/to/config.iniExecStart=/path/to/generate_env_file.sh and I have multiple other service files which have: [Unit]Requires=generate_env_file.serviceAfter=generate_env_file.service How can I guarantee that two or more dependent services (which require generate_env_file.service) will not run in parallel and spawn two parallel executions of generate_env_file.service? I've looked at using RemainAfterExit=true or possibly StartLimitIntervalSec= and StartLimitBurst= to ensure that only one copy will execute at a time during some period but I'm not sure the best way to go about doing this. | RemainAfterExit=true is the way to go. In this case systemd starts the service and considers it started and active even after the process has exited. However, this doesn't cover the use case of executing systemctl restart generate_env_file.service : in that case systemd will re-execute your service. To solve this, you could create a marker file on the runtime file system ( /run ) in ExecStartPost= and add a ConditionPathExists= directive to check for the existence of that file.
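A minimal sketch of that marker approach (hypothetical paths): add ConditionPathExists=!/run/env_file_generated to the [Unit] section and ExecStartPost=/usr/bin/touch /run/env_file_generated to the [Service] section of generate_env_file.service, so the generator is skipped once the marker already exists. | {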
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
320,403 | I have ubuntu, fedora and windows installed in my laptop. I installed ubuntu after windows and then I installed fedora. How can I uninstall fedora without messing up the grub, which was reinstalled during the fedora installation? | | {
"source": [
"https://unix.stackexchange.com/questions/320403",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198137/"
]
} |
320,411 | This image from TLDP is pretty awesome.It shows that before giving user space actual read, write, open access to the filesystem the blocks get mapped onto the virtual filesystem. and Wikipedia says there are 3 versions of file systems on different layers. So, are the standard (sd nodes) referring to the physical or, after the LVM mapped, virtual filesystem? Or are they referring to just the partition?(which would mean that writing directly onto the partition would skip the filesystem driver, without which you couldn't even interact with the files themselves) If that is the case, what devices represent the filesystem drivers/ or filesystems or.... I just don't know.. could anybody just link me to something where disk usage by the kernel is explained? | tl;dr : /dev/sdaX represents a partition. I think a fundamental misconception you have is the difference between filesystems and partitions. A partition is really simple - basically it's just a section of the disk which is defined in a partition table at the beginning of the disk. A filesystem, however, is a much more advanced thing. A filesystem is essentially a data structure used to keep track of files that the kernel (specifically, a filesystem driver) is able to read and write. That data structure can technically be put anywhere on disk, but it is expected that the beginning of the fs data structure is the same as the beginning of a partition. You mentioned LVM in your question - let's forget about that for the moment since that's a more advanced topic (I'll explain LVM at the end). Say you have a single 100GB hard disk with nothing but zeros. In this case, you will have a /dev/sda file which you can read 100GB from (although e.g. du will report it as zero-length because it's a block special) and contains nothing but zeros. /dev/sda is the method by which the kernel exposes the raw device contents to userspace for reading and writing. This is why it has the same amount of data as your disk and has the same contents as your disk. If you flip the fifth bit on /dev/sda to be one instead of zero, the kernel will flip the fifth bit on the physical drive to match. In the diagram you provided, this write would go through the system call interface into the kernel, then through the IDE hard disk driver, and finally to the hard disk itself. Now let's say you want to do something useful with that drive, like store files on it. Now you need a filesystem. There are a ridiculous number of filesystems available to you in the Linux kernel. Each one of them uses a different data structure on disk to keep track of files, and they might also modify their data structures in different ways, for example to provide atomic write guarantees (i.e. writes either succeed or they don't; there can never be half-written data even if the machine crashes). This is what people mean when they talk about a "filesystem driver": a filesystem driver is a piece of code that understands how to read and write a particular filesystem's data structures on disk. Examples include ext4, btrfs, XFS, etc. So you want to store files. Let's say you pick ext4 as a filesystem. What you need to do now is format the disk so that the data structures for an empty filesystem exist on disk. To do this, you use mkfs.ext4 and tell it to write to /dev/sda . mkfs.ext4 will then write an empty ext4 filesystem starting at the beginning of /dev/sda . The kernel will then take the writes to /dev/sda and apply them to the beginning of the physical disk.
Now that the disk contains a filesystem's data structures, you can do e.g. mount /dev/sda /mnt to mount the brand-new filesystem, move files into it, etc. Any writes to files in /mnt would then go through the system call interface, then to the ext4 filesystem driver (which knows how to turn the more abstract "write this data to such-and-such a file" into the concrete changes that need to be made to the fs data structures on disk), then to the IDE hard disk driver, then finally to the drive itself. Now, the above will work, but it's not normally how people do things. Usually they use partitions on the drive. A partition is basically just a particular section of the drive. When you use partitions, you have a partition table at the beginning of the drive that says where, physically, each partition is located. Partitions are neat because they allow you to divide up a drive into multiple sections that can be used for different purposes. So let's say you want to create two filesystems on the drive, both ~50GB (i.e. half-and-half). First you'd have to partition the drive. In order to do this you'd use a tool like fdisk or gdisk , both of which create different types of partition tables, and you'd tell your tool to write to /dev/sda . When you were done partitioning, you'd have /dev/sda , /dev/sda1 , and /dev/sda2 . /dev/sda1 and /dev/sda2 are the kernel's way of representing the different partitions in the disk. If you write to the beginning of /dev/sda2 , it will write to the beginning of the second partition, which is in the middle of the disk . Another way to explain this is by talking about the contents of /dev/sda . Recall that /dev/sda is, bit-for-bit, the contents of the physical hard drive. And /dev/sda1 is, bit-for-bit, the contents of the first partition of the hard drive. This means that /dev/sda has a little bit of data - the partition header - followed by the exact contents of /dev/sda1 , then /dev/sda2 . /dev/sda1 and /dev/sda2 are mapped to specific regions on the disk, which are partitions that you've configured. From here we can use mkfs.ext4 again to create a filesystem on /dev/sda1 , which will write to the disk starting directly after the partition header. If we use mkfs.ext4 on /dev/sda2 , it writes starting at the beginning of the partition, which is in the middle of the disk (and thus in the middle of /dev/sda 's contents). Now, you can do e.g. mount /dev/sda2 /mnt . This tells the kernel to read filesystem data starting at the beginning of the second partition and expose it to you in a more useful form - i.e. files and directories at the location /mnt . Again, the kernel uses a filesystem driver to actually perform this mapping. Now let's talk about LVM, briefly. LVM is basically just an abstraction over partitions. Partitions map very, very directly to physical locations on disk. In the two-partition example above, let's say you wanted to delete the first partition and expand the second into the newly freed space. Because partitions are mapped directly to disk regions, the only way to do this is to physically move the entire 50GB of partition data to the beginning of the disk, then expand the partition to the end. LVM is designed to make this less painful. Basically, you give LVM a bunch of raw storage, and then tell it how to use that storage. LVM provides you with a virtual "disk" that can be divided like partitions, but whose underlying storage can be anywhere in the raw storage pool you've allocated for it. 
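(As an aside, a convenient way to see this disk/partition/filesystem layering on a live system is lsblk -f , which lists each block device, its partitions, and whatever filesystem, if any, each one contains.)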
To use the example above, if you gave LVM the entire disk to use, then divided it into two, you could delete the first "partition" and expand the second "partition" to fill that space instantly, because LVM is able to keep track of where data is on the disk without requiring it to be strictly "in order". For loads more details on how LVM works, see this answer: https://unix.stackexchange.com/a/106871/29146 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320411",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181999/"
]
} |
320,414 | After building apache2 http server from source ( 2.4.23 ) I don't have the a2dissite and a2ensite commands. Configure was: ./configure --with-included-apr --prefix=/usr/local/apache2 When I run: whereis apache2 I get: apache2: /etc/apache2 /usr/local/apache2 But which apache2 shows nothing, maybe there needs to be some symlinking to /usr/bin ? http://localhost is working fine. Version info for source: /usr/local/apache2/bin/apachectl -vServer version: Apache/2.4.23 (Unix)Server built: Nov 1 2016 22:52:26 Linux version: linux mint 173.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 | I'm guessing you built from the source available from the Apache Software Foundation. The a2en... scripts (and the supporting configuration) are Debian-specific; you'll find the source code in the corresponding Debian repository . Your best bet to build the httpd server from source and still be able to use a2ensite etc. is to use the Debian source package: sudo apt-get install devscripts dpkg-dev build-essentialsudo apt-get build-dep apache2dget http://httpredir.debian.org/debian/pool/main/a/apache2/apache2_2.4.23-5.dsccd apache2-2.4.23dpkg-buildpackage -us -uc The first two commands install the packages necessary to build apache2 ; then dget downloads and extracts the source package, and dpkg-buildpackage builds it and produces a series of .deb packages you can install manually using dpkg as usual. If the build-dep line doesn't work, the following is equivalent for apache2 : sudo apt-get install debhelper lsb-release libaprutil1-dev libapr1-dev libpcre3-dev zlib1g-dev libnghttp2-dev libssl-dev perl liblua5.2-dev libxml2-dev autotools-dev gawk dh-systemd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198103/"
]
} |
320,448 | #! /bin/bashfor (( l = 1 ; l <= 50; ++l )) ; do for (( k = 1 ; k <= 1000; ++k )) ; do sed -n '$l,$lp' $k.dat >> ~/Escritorio/$l.txt donedone The script is located in a folder together with 1000 dat files each one having 50 lines of text. The dat files are called 1.dat , 2.dat ,...., 1000.dat My purpose is to make files l.txt , where l.txt has the l line of 1.dat , the l line of 2.dat , etc. For that, I use the sed command to select the l file of each dat file. But when I run the above script, the txt created have nothing inside... Where is the mistake? | for LINE in {1..50}; do for FILE in {1..1000}; do sed -n "${LINE}p" "${FILE}.dat" >>"$HOME/Escritorio/${LINE}.dat" donedone In your script you are using single quotes for the sed expression; variables don't expand inside single quotes, so you need to use double quotes. Note also $HOME rather than ~ in the redirection target - a tilde inside double quotes is not expanded either. Also there is a one-liner with awk that can do the same: awk 'FNR<=50 {filename=sprintf("results/%d.dat", FNR); print >> filename; close(filename)}' *.dat Just create the results directory, or change it in the command to another one; ~ does not expand to home there either. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197020/"
]
} |
320,465 | Summary When I create a new tmux session, my prompt pulls from a default bash configuration and I have to manually run source ~/.bashrc for my customized prompt. Analysis I am using a RHEL 7 machine. I began noticing this behavior after a bash update a while back, but haven't gotten around to asking the question until now (and am not sure which update this began happening around). For example, I've customized my prompt to look like: [user@hostname ~]$ Whenever I start a new tmux session, it uses what appears to be the bash default: -sh-4.2$ A quick run of source ~/.bashrc always fixes the issue, but it's annoying that I have to do this every time I want to fix something small. Any ideas on how to get tmux to do this automatically again? If any more information is needed, I am happy to provide. tmux.conf For reference, I have my tmux.conf file below, although it is hardly what you could call custom. setw -g mode-keys vi# reload tmux.confbind r source-file ~/.tmux.conf \; display-message " ✱ ~/.tmux.conf is reloaded" | As far as I know, by default tmux runs a login shell. When bash is invoked as an interactive login shell, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile , in that order, and reads the first one that exists. So you have to put source ~/.bashrc in one of those files. Another way to solve this issue is to put the following line in your ~/.tmux.conf file: set-option -g default-shell "/bin/bash"
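For example, a minimal ~/.bash_profile that simply pulls in your existing configuration is: if [ -f ~/.bashrc ]; then . ~/.bashrc; fi | {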
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/320465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47463/"
]
} |
320,523 | I created a directory named "$pattern" and now when ever i try to remove it, it says pattern: Undefined variable. I have tried: $ rm -r $pattern$ rm -rf $pattern$ rm "$ option[value='2016']" | $ , space, ' and [ are special characters in most shells. To remove their special meaning, you have to use the quoting mechanisms of the shell. The quoting syntax varies very much with the shell . In all shells that I know, you can use single quotes to quote all characters but single quote, backslash and newline (in Bourne-like shells, it does quote the last two as well, except in backticks for \ in some). rm -r '$pattern' Should work in most common shells. rm -r \$pattern Would work (except inside backticks for Bourne-like ones) in all shells but those of the rc family. Same for: rm "\$option[value='2016']" In rc -like shells, you'd use: rm '$option[value=''2016'']' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198235/"
]
} |
320,537 | If I execute dig google.com +trace , then I can see that one of the root DNS servers, which holds data for gTLDs, knows that authoritative name servers for google.com are ns2.google.com. , ns1.google.com. , ns3.google.com. and ns4.google.com. : google.com. 172800 IN NS ns2.google.com.google.com. 172800 IN NS ns1.google.com.google.com. 172800 IN NS ns3.google.com.google.com. 172800 IN NS ns4.google.com.CK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN NSEC3 1 1 0 - CK0Q1GIN43N1ARRC9OSM6QPQR81H5M9A NS SOA RRSIG DNSKEY NSEC3PARAMCK0POJMG874LJREF7EFN8430QVIT8BSM.com. 86400 IN RRSIG NSEC3 8 2 86400 20161106045005 20161030034005 6404 com. R+VQ60BPw77hJx5ItvIWyWzbgq9aw5a5rT+wLLOILHNH1TUM+dlSwfux XrAwj6X/U7aWAaa5xsMM+ccAYj+GhJDWw3RnTlc3SVA1GPcRuC/R2dG+ QmAHoKLJ66XVeUoym6c6Gdxyy27vlKuJktDHgHL1G3Kcy8ljw1uBADKI jIs=S84AE3BIT99DKIHQH27TRC0584HV5KOH.com. 86400 IN NSEC3 1 1 0 - S84CFH3A62N0FJPC5D9IJ2VJR71OGLV5 NS DS RRSIGS84AE3BIT99DKIHQH27TRC0584HV5KOH.com. 86400 IN RRSIG NSEC3 8 2 86400 20161108054927 20161101033927 6404 com. bB3EZ+7N/iu7yHzAE4S9V1b20upQRV43pU6xjxWZ5OsJqaF0hSu7gxcj ScD+VIItFkPnab17RKTB96CGM6K9kYYvX3GKJjThFg63cXSl2LE7L7Ny BqQnhcCRXr2jfx5+kCtab8bRrCfSfW1UR7OBsj+I1DX21hs4OhNZQsNY ZiM=;; Received 660 bytes from 192.48.79.30#53(j.gtld-servers.net) in 33 ms I guess that google.com got into root DNS servers thanks to registrar. How do domain registrars update root DNS servers? I guess they can't somehow directly edit those? Or they send their updates and root DNS servers administrators somehow verify those changes? Most likely those are naive and stupid guesses, but I would like to understand how a domain registrar updates root DNS servers. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/320537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
320,552 | I am currently working on a deployment tool that will be configuring environment variables across CentOS7 machines. As it stands, my tool is aware of what variables need to be configured, but is not aware of what services will be using them as those services might not even be installed yet. So the challenges are: At the point my tool is running, I am unaware of what services are needing what environment variables (and therefore we don't know what .service file to put them into) Services run as non-interactive non-login as far as I can tell, so the other options of getting a sort of 'global' environment variable don't seem like a solution to this (i.e. profile.d and that sort of thing) Is there another way to persist these variables without needing to know what service will be referencing them? | In /etc/systemd/system.conf , you can use DefaultEnvironment= to set environment variables passed to all services. Note that the file is read when the manager starts, so changes take effect after systemctl daemon-reexec or a reboot. You can read about the details in man systemd-system.conf .
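For instance, a line such as DefaultEnvironment=DEPLOY_ENV=production PROXY=http://proxy:3128 (the variable names here are made up) would expose those variables to every service systemd starts; assignments are space-separated, with quoting needed only when a value itself contains spaces. | {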
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198153/"
]
} |
320,682 | I want to have a shortcut key for duplicating the currently selected line in gedit. Many other editors use Ctrl + D or Ctrl + Shift + D for that, but gedit is different. Here the default behaviour: Ctrl + D : removes a line Ctrl + Shift + D : opens GTK inspector I am fine with both current behaviours as long as the other hotkey would do the thing I actually want to do. So I saw this answer where it is shown you can actually patch the gedit binary. However I don't want to do this as patching binaries is probably the worst kind of workaround you can do (think of updates and binary changes). Additionally, in that question only the "delete line" shortcut was removed and the "duplicate line" shortcut was added with a plugin, which does not exist anymore. So how can I get the "duplicate this line" behaviour into gedit? | The plugin mentioned in the comments and the other answer has been recently updated and after install you should be able to use Ctrl+Shift+D to duplicate either a line or a selection. I've tested it in gedit 3.18.3 on Ubuntu 16.04 but it should work in any version >=3.14.0 even though that's a bit questionable because the gedit devs are not shy to introduce breaking changes in minor versions (or they follow something else than semantic versioning ) and there seems to be no up-to-date documentation for plugin development. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
320,701 | #!/bin/bash q=$(bc <<< "scale=2;$p*100")head -n$q numbers.txt > secondcoordinate.txt That's just part of the script, but I think it's enough to clarify my intentions. p is a variable with just two decimals, so q should be an integer... Nevertheless, bc shows, for example, 10.00 instead of 10 . How can I solve this? | You can't do this with the obvious scale=0 because of the way that the scale is determined. The documentation indirectly explains that dividing by one is sufficient to reset the output to match the value of scale , which defaults to zero: expr1 / expr2 The result of the expression is the quotient of the two expressions. The scale of the result is the value of the variable scale. p=12.34; echo "($p*100)" | bc1234.00p=12.34; echo "($p*100)/1" | bc1234 If your version of bc does not handle this, pipe it through sed instead: p=12.34; echo "($p*100)" | bc | sed -E -e 's!(\.[0-9]*[1-9])0*$!\1!' -e 's!(\.0*)$!!'1234 This pair of REs will strip trailing zeros from the decimal part of a number. So 3.00 will reduce to 3, and 3.10 will reduce to 3.1, but 300 will remain unchanged. Alternatively, use perl and dispense with bc in the first place: p=12.34; perl -e '$p = shift; print $p * 100, "\n"' "$p" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197020/"
]
} |
320,783 | So if I've got a variable VAR='10 20 30 40 50 60 70 80 90 100' and echo it out echo "$VAR"10 20 30 40 50 60 70 80 90 100 However, further down the script I need to reverse the order of this variable so it shows as something like echo "$VAR" | <code to reverse it>100 90 80 70 60 50 40 30 20 10 I tried using rev and it literally reversed everything so it came out as echo "$VAR" | rev001 09 08 07 06 05 04 03 02 01 | On GNU systems, the reverse of cat is tac: $ tac -s" " <<< "$VAR " # Please note the added final space.100 90 80 70 60 50 40 30 20 10 tac treats the separator as terminating each record, which is why a trailing space is appended to $VAR before feeding it in.
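On non-GNU systems, one portable alternative is a small awk loop: echo "$VAR" | awk '{for (i = NF; i > 1; i--) printf "%s ", $i; print $1}' which prints the fields in reverse order. | {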
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/320783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198441/"
]
} |
320,920 | I'm working with a logfile with the following format: Oct 12 01:28:26 server program: 192.168.1.105 text for 1.105 Oct 12 01:30:00 server program: 192.168.1.104 text for 1.104 Oct 12 01:30:23 server program: 192.168.1.103 text for 1.103Oct 12 01:32:39 server program: 192.168.1.101 text for 1.101 Oct 12 02:28:26 server program: 192.168.1.105 text for 1.105 Oct 12 02:30:00 server program: 192.168.1.104 text for 1.104Oct 12 02:30:23 server program: 192.168.1.103 text for 1.103 Oct 12 02:32:39 server program: 192.168.1.101 text for 1.101 I need to achieve this: Oct 12 02:28:26 server program: 192.168.1.105 text for 1.105 Oct 12 02:30:00 server program: 192.168.1.104 text for 1.104Oct 12 02:30:23 server program: 192.168.1.103 text for 1.103Oct 12 02:32:39 server program: 192.168.1.101 text for 1.101 How can I send the new output to a file? I have tried this: awk '!_[$6]++ {a=$6} END{print a}' logfile But it does not give me the results expected. How can I use awk or sed to give me only the unique lines with last time the string match was seen or based on date/time? | If you're going to do a second pass (which you pretty well have to), you may as well only store line numbers rather than full records. It makes the logic easier. awk 'NR == FNR {if (z[$6]) y[z[$6]]; z[$6] = FNR; next} !(FNR in y)' logfile logfile Proof of correctness: At the end of processing each line, every line number processed so far is either a value in z , or an index (not value) in y , but never both. The lines represented by values in z are, at the end of each iteration, exactly and only the latest records so far seen for each IP address. The indices of y are, therefore, the exact lines which we wish not to print. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81926/"
]
} |
320,957 | I recently upgraded my disk from a 128GB SSD to 512GB SSD. The / partition is encrypted with LUKS. I'm looking for help extending the partition to use all the free space on the new disk. I've already dd'd the old drive onto the new one: [root@localhost ~]# fdisk -l /dev/sdaDisk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: dosDisk identifier: 0x00009f33Device Boot Start End Sectors Size Id Type/dev/sda1 * 2048 1026047 1024000 500M 83 Linux/dev/sda2 1026048 250064895 249038848 118.8G 83 Linux There's about 380GB of unused space after sda2. More relevant info: [root@localhost ~]# vgs VG #PV #LV #SN Attr VSize VFree fedora_chocbar 1 3 0 wz--n- 118.75g 4.00m[root@localhost ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert home fedora_chocbar -wi-a----- 85.55g root fedora_chocbar -wi-a----- 29.30g swap fedora_chocbar -wi-a----- 3.89g[root@localhost ~]# pvs PV VG Fmt Attr PSize PFree /dev/mapper/encrypted fedora_chocbar lvm2 a-- 118.75g 4.00m There seems to be a lot of info regarding how to do this, but very little explanation. I appreciate any help on this. | OK! The definitive answer finally. My steps to expand a LUKS encrypted volume... cryptsetup luksOpen /dev/sda2 crypt-volume to open the encrypted volume. parted /dev/sda to extend the partition. resizepart NUMBER END . vgchange -a n fedora_chocbar . Stop using the VG so you can do the next step. cryptsetup luksClose crypt-volume . Close the encrypted volume for the next steps. cryptsetup luksOpen /dev/sda2 crypt-volume . Open it again. cryptsetup resize crypt-volume . Will automatically resize the LUKS volume to the available space. vgchange -a y fedora_chocbar . Activate the VG. pvresize /dev/mapper/crypt-volume . Resize the PV. lvresize -l+100%FREE /dev/fedora_chocbar/home . Resize the LV for /home to 100% of the free space. e2fsck -f /dev/mapper/fedora_chocbar-home . Throw some fsck magic at the resized fs. resize2fs /dev/mapper/fedora_chocbar-home . Resize the filesystem in /home (automatically uses 100% free space) I hope someone else finds this useful. I now have 300+GB for my test VMs on my laptop! | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/320957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198571/"
]
} |
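On reasonably recent LVM2, the last three steps can usually be collapsed into one: lvresize and lvextend accept -r/--resizefs, which drives the filesystem check and resize (via fsadm) for you. A hedged sketch, reusing the VG/LV names from this answer:

# Grow the LV and its ext4 filesystem in one step; -r runs
# e2fsck/resize2fs internally through fsadm.
lvextend -r -l +100%FREE /dev/fedora_chocbar/home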
320,964 | I have a shell script that looks for a file, /tmp/bbsnode1, and if that file exists, it deletes it. What I'd like to do is: if multiple files exist (/tmp/bbsnode2, /tmp/bbsnode3, and /tmp/bbsnode4), delete all of them. But only delete them if all of them exist. Here's what I have so far:

if [ -f /tmp/bbsnode1 ]
then
    /usr/bin/rm /tmp/bbsnode1
fi | I would use a shell function for this, rather than a script:

rm-all-or-none() {
    for f; do
        [ -f "$f" ] || {
            printf '%s is not an existing file, no files removed\n' "$f" >&2
            return 1
        }
    done
    rm -fv -- "$@"
}

Then I would call it using brace expansion, rather than a glob. Globs only expand to files that exist, but in this case we want to specify the files and only remove them if all of them exist:

rm-all-or-none /tmp/bbsnode{1..4}

Longer equivalent version:

rm-all-or-none() {
    for f in "$@"; do
        if [ -f "$f" ]; then
            :
        else
            printf '%s is not an existing file, no files removed\n' "$f" >&2
            return 1
        fi
    done
    rm -fv -- "$@"
}

Also see: In Bash, when to alias, when to script, and when to write a function? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161706/"
]
} |
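For exactly four known paths, the same all-or-nothing check also fits in a plain && chain; a small sketch using the file names from the question:

# Remove the four files only when every one of them exists.
[ -f /tmp/bbsnode1 ] && [ -f /tmp/bbsnode2 ] &&
[ -f /tmp/bbsnode3 ] && [ -f /tmp/bbsnode4 ] &&
rm -f -- /tmp/bbsnode1 /tmp/bbsnode2 /tmp/bbsnode3 /tmp/bbsnode4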
320,965 | I've been trying for hours to figure this out but can't seem to do it. I have a directory named easy and there are 4 different subdirectories in it. What I have to do is delete everything under foo (including hard links) and reclaim all the disk space of foo. I tried removing all the files by typing rm foo/* but it does not remove the hard links that are in the other subdirectories. Then I tried to find the matching inodes with find foo -type f -ls | sort but this command doesn't show the other subdirectories, only foo/. Can someone please help me out? Thank you. | A hard link is just another directory entry pointing at the same inode, and the disk space is reclaimed only when the link count drops to zero: rm foo/* removes foo's own entries but leaves every other link (and the data) in place. Likewise, find foo ... only descends into foo, which is why you never see the other subdirectories. Search the whole tree instead. With GNU or BSD find you can ask for all entries sharing an inode with a given file, e.g. find easy -samefile foo/somefile for each file, or list every multiply-linked file with its inode number via find easy -type f -links +1 -ls and match the inode numbers in the first column of the -ls output. Once all of the extra links outside foo have been deleted as well, rm -r foo actually frees the space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/320965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194555/"
]
} |
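A minimal sketch of the inode-matching approach from the answer above; it assumes GNU find (for -samefile, -path and -delete), file names without embedded newlines, and the easy/foo layout from the question:

# For every regular file under foo, delete all other directory
# entries in the tree that point at the same inode, then remove foo.
find easy/foo -type f | while IFS= read -r f; do
    find easy ! -path 'easy/foo/*' -samefile "$f" -delete
done
rm -r easy/foo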
321,038 | I've got a VirtualBox instance of Oracle Linux 7.2 which won't start because of Failed to start Login Service. During the boot sequence the process hangs on this message and doesn't continue, so I can't even log in and execute systemctl status systemd-logind.service. The probable cause for this is that I removed zsh while all my users (including root) had zsh set as the default shell (duh!). After that the machine started and I got to the login prompt, but I couldn't log in since the shell couldn't be found. I then inserted a Live CD and went into /etc/passwd to change the default shell for users to /bin/bash. After this the login service won't start at all. Any ideas how to fix this? | I found out that after changing /etc/passwd it didn't have the right SELinux settings anymore. I don't really need SELinux on my machine, so I solved the problem by disabling SELinux altogether. This is easily done by modifying the file /etc/selinux/config and setting the option SELINUX=permissive (if you want to keep SELinux file labeling, to enable it later) or SELINUX=disabled (turning it off completely). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/321038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140458/"
]
} |
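The label can also be repaired without turning SELinux off: restorecon resets a file's security context from the loaded policy. A short sketch of that alternative (run it once you can get a root shell, e.g. from the live CD with the installed system mounted, or after temporarily booting permissive):

# Restore the default SELinux context on the file edited from the live CD.
restorecon -v /etc/passwd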
321,079 | This command will get the PID of the xterm process launched:

xterm & export APP_PID=$!

How can I get the window ID associated with that process (the xterm window ID)? I mean, the ID that xdotool selectwindow would return after clicking on the xterm window. | It's been discussed in the "other" forum: Is there a linux command to determine the window IDs associated with a given process ID? How to get an X11 Window from a Process ID? In the first, @Patrick points out that xwininfo can return information on all windows, and by using xprop for each window, you can check for the _NET_WM_PID property, matching it against your process-id. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91304/"
]
} |
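Since the question already uses xdotool, its search subcommand can query windows by _NET_WM_PID directly, which avoids walking the window tree by hand. A small sketch building on the question's own variables (the sleep is a crude allowance for the window to appear):

xterm &
APP_PID=$!
sleep 1

# Ask xdotool for windows whose _NET_WM_PID matches our process.
WIN_ID=$(xdotool search --pid "$APP_PID" | head -n 1)
echo "$WIN_ID"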
321,095 | I'd like to use OpenVPN inside Whonix Workstation, so I need to open port 1194. I was trying to add some lines to iptables and still failing. How can I do it, and then check it, for example with nmap scanning localhost? | OpenVPN listens on 1194/udp by default (TCP only if the server is configured with proto tcp), so the iptables rule has to match the right protocol; for example, iptables -A INPUT -p udp --dport 1194 -j ACCEPT, and you can inspect the result with iptables -L INPUT -n -v. Two caveats when checking: scanning localhost with nmap only exercises the loopback path, which most rulesets accept anyway, and a UDP port needs a UDP scan (nmap -sU -p 1194), where "open|filtered" is the normal report for an unanswered UDP probe. Also note that Whonix manages iptables with its own firewall scripts, so rules added by hand may be overwritten; check the Whonix firewall configuration for the supported way to add exceptions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198662/"
]
} |
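A minimal sketch of the rule-plus-check sequence described above; it assumes the default UDP transport and that Whonix's firewall scripts are not re-applying their own ruleset over it:

# Allow inbound OpenVPN traffic and verify the ruleset.
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
iptables -L INPUT -n -v | grep 1194

# UDP scan of the local port; "open" or "open|filtered" is expected.
nmap -sU -p 1194 localhost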
321,108 | I have a script that rsyncs and ends like this:

rsync -azvh /dir -e ssh [email protected]:/
RESULT="$?"
# check result of rsync db's
if [ "$RESULT" != "0" ]; then
    echo -e "rsync exit Code:" $RESULT "\nFAILED to rsync backups"
else
    echo "SUCCESSFUL rsync of backups"
fi

I have just been asked to wrap it in an API, but the API states that 0=fail and 1=success. How can I change the exit code to reflect this? Do I need to assign it a variable? | exit 1 will exit with an error code of 1 and exit 0 will exit with an error code of 0. For instance:

rsync -azvh /dir -e ssh [email protected]:/
RESULT="$?"
# check result of rsync db's
if [ "$RESULT" != "0" ]; then
    echo -e "rsync exit Code:" $RESULT "\nFAILED to rsync backups"
    exit 0
else
    echo "SUCCESSFUL rsync of backups"
    exit 1
fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106673/"
]
} |
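Because the API inverts the usual convention, the whole block can also be reduced to a direct test of rsync's exit status; a compact sketch with the same inverted semantics:

if rsync -azvh /dir -e ssh [email protected]:/; then
    echo "SUCCESSFUL rsync of backups"
    exit 1   # the wrapping API treats 1 as success
else
    echo "FAILED to rsync backups" >&2
    exit 0   # ...and 0 as failure
fi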
321,136 | I am trying to get the size of a user's folder named allysek, and I am using the command du -hLlxcs allysek. I know I don't have permissions for some of the locations. In the end, I get output as follows:

du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-mjyger/PS-NOVA/IMR90.NOMe-seq.bam’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-mfter/PS-NOVA/IMR90.NOMe-seq.bam.bai’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-iuhgi/PS-NOVA/colon.WGBS.bam’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-kh/PS-NOVA/colon.WGBS.bam.bai’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-h/PS-NOVA/dbNOVA_135.hg19.sort.vcf’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-master/PS-NOVA/hg19_rCRSchrm.fa’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-master/PS-plot/DKO1.NOMe-seq.bam’
du: cannot access ‘/export/mite-09/bc/users/allysek/charlet/PS-tools-master/PS-plot/DKO1.NOMe-seq.bam.bai’
896M /export/mite-09/bc/users/allysek
896M total

So my question is: does the 896M total include the sizes of items which I wasn't able to access as well? | Simply, no. Look at this example:

du -shc *
4,0K AUDIO_TS
4,4G VIDEO_TS
4,4G total

chmod 000 *   # don't use this in the wrong dir!

du -shc *
du: cannot read directory 'VIDEO_TS': Permission denied
du: cannot read directory 'AUDIO_TS': Permission denied
4,0K AUDIO_TS
4,0K VIDEO_TS
8,0K total | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/321136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198697/"
]
} |
321,186 | I don't know how to use variables for further execution in a script. I tried the following:

#!/bin/bash
NUM = 0
echo Number $NUM > text.txt

but I get the following error: num.sh: 3: num.sh: NUM: not found | There must not be any whitespace around = in a shell variable assignment. Remove the whitespace:

NUM=0

Also, unless you have a good reason, don't use all-uppercase names for user-defined shell variables, since they could conflict with environment variables. Better:

number=0 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/321186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112187/"
]
} |
321,216 | I have a fresh version of Ubuntu 16.04.1 installed and have tried to create a new user account through root. I have changed the SHELL line in /etc/default/useradd to read SHELL=/bin/bash (it previously read /bin/sh). Executing useradd -D provides the following output:

GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=no

I then try to create a user as follows: useradd -m -G sudo -c "David Buckley" david Yet the default shell is still /bin/sh. More specifically, the /etc/passwd file reads as follows:

david:x:1000:1000:David Buckley:/home/david:

On a slightly, potentially related note, the new user does not receive sudo access. It is given the groups david sudo, and the /etc/sudoers file includes the lines (uncommented):

# Allow members of group sudo to execute any command
sudo ALL=(ALL:ALL) ALL

What might I be doing wrong to cause this? | Oddly enough, this happened to me too yesterday on a server running Ubuntu 16.04 LTS. I have no concrete answer as to why this happens, but here is a quick solution that worked for me: don't use useradd, use adduser instead! DESCRIPTION: adduser and addgroup add users and groups to the system according to command line options and configuration information in /etc/adduser.conf. They are friendlier front ends to the low level tools like the useradd, groupadd and usermod programs, by default choosing Debian policy conformant UID and GID values, creating a home directory with skeletal configuration, running a custom script, and other features. As for sudo, you have to log out that user — and then log back in — for the new group settings to have an effect. Here's a good link on useradd vs adduser. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198735/"
]
} |
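A hedged sketch of the adduser route for the exact account in the question (Debian/Ubuntu syntax; --gecos pre-fills the comment field so adduser doesn't prompt for it):

adduser --gecos "David Buckley" david   # creates the home dir; shell comes from adduser.conf (DSHELL)
adduser david sudo                      # the two-argument form adds an existing user to a group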
321,219 | Suppose I have a directory on a local machine, behind a firewall:

local:/home/meee/workdir/

And a directory on a remote machine, on the other side of the firewall:

remote:/a1/a2/.../aN/one/two/
remote:/a1/a2/.../aN/one/dont-copy-me{1,2,3,...}/

...such that N >= 0. My local machine has a script that uses rsync. I want this script to copy only one/two/ from the remote machine for a variable-but-known N, such that I end up with:

local:/home/meee/workdir/one/two/

If I use rsync remote:/a1/a2/.../aN/one/two/ ~/workdir/ , I end up with: local:/home/meee/workdir/two/
If I use rsync --relative remote:/a1/a2/.../aN/one/two/ ~/workdir/ , I end up with: local:/home/meee/workdir/a1/a2/.../aN/one/two/

Neither one of these is what I want. Are there rsync flags which can achieve the desired result? If not, can anyone think of a straightforward solution? | For --relative you have to insert a dot into the source directory path:

rsync -av --relative remote:/a1/a2/.../aN/./one/two ~/workdir/

See the manual: -R, --relative [...] It is also possible to limit the amount of path information that is sent as implied directories for each path you specify. With a modern rsync on the sending side (beginning with 2.6.7), you can insert a dot and a slash into the source path, like this: rsync -avR /foo/./bar/baz.c remote:/tmp/ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/321219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108009/"
]
} |
321,282 | These are the files in the directory:

Ford-Mustang-001.jpg
Ford-Mustang-002.jpg
Ford-Mustang-003.jpg
Chevy-Impala-001.jpg
Chevy-Impala-002.jpg
Chevy-Impala-003.jpg

I would like to sort these into subfolders:

/Mustang
/Impala | Using prename (the Perl renamer):

prename 'if(/(.+?)-(.+?)-(.*)/){mkdir $2; $_="$2/$_"}' *.jpg | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198790/"
]
} |
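If prename isn't installed, plain shell parameter expansion does the same job; a small sketch assuming all names follow the Make-Model-Number.jpg pattern from the question:

# Strip "Make-" from the front, then everything from the next "-",
# leaving the model name to use as the target directory.
for f in *.jpg; do
    model=${f#*-}        # e.g. Mustang-001.jpg
    model=${model%%-*}   # e.g. Mustang
    mkdir -p "$model"
    mv -- "$f" "$model/"
done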
321,364 | I have installed Jenkins and I have jenkins as a user in my /etc/passwd:

jenkins:x:995:993:Jenkins Continuous Integration Server:/var/lib/jenkins:/bin/bash
nginx:x:994:992:Nginx web server:/var/lib/nginx:/sbin/nologin
setroubleshoot:x:993:990::/var/lib/setroubleshoot:/sbin/nologin

I have tried to su - jenkins while root and I get this response:

[root@li1078-244 ~]# su - jenkins
Last login: Sun Nov 6 02:50:18 UTC 2016 on pts/0
su: failed to execute /bin/bash : No such file or directory

I want to su - jenkins into bash so I can continue some configurations. I thought I would log in as jenkins, but I can't; I get this:

ldco2016@DCortes-MacBook-Pro-3 ~ $ ssh jenkins@localhost [ruby-2.3.1]
jenkins@localhost's password:
Permission denied, please try again. | On many installations, the login shell for the Jenkins user is set to false or nologin:

$ grep jenkins /etc/passwd
jenkins:x:495:441:Jenkins Continuous Integration Server:/var/lib/jenkins:/bin/false

So if you try to log in as, or switch to, the Jenkins user, the system will not allow it. The best way to work around this is to start a shell as the Jenkins user:

$ sudo su - jenkins -s /bin/bash
-bash-4.1$ whoami
jenkins
-bash-4.1$ echo $HOME
/var/lib/jenkins
-bash-4.1$ cd .ssh
-bash-4.1$ pwd
/var/lib/jenkins/.ssh
-bash-4.1$

I use this method to install SSH keys that I want my jenkins server to have access to at the CLI level. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198841/"
]
} |
321,406 | I have a display that I want to write to. This is possible over the serial port. When I use a USB-to-RS-232 converter, that thing works like a charm. I even tried using only the RX, TX and GND wires of the serial converter, and it still works. Now I want to use this display in a small case paired with a Raspberry Pi, so I don't have any space left for the big USB-RS-232 converter. I have tried using the internal serial port of the Raspberry. It is set to 9600 baud using $ sudo stty -F /dev/ttyAMA0 9600 . But when I connect it to the display, it only shows garbage, and normal control commands (that were working using the RS-232 converter) don't work either. Using $ sudo minicom -b 9600 -o -D /dev/ttyAMA0 and looping the GPIO's TX to RX, it shows the right characters in the minicom console. Now, looping the GPIO serial port to the USB-RS-232 converter's RX and TX pins, connecting ground, and opening both ports in minicom with baud set to 9600 only sometimes shows some output on the other terminal, but when it shows any output, it is also just garbage. | I'm quite confident the problem is that the Pi does not have an RS232 interface, while the display has. The Pi has an (LV-)UART interface: its TX pin outputs 0V for a logical 0 and 3.3V for a logical 1. This is quite easy to implement, since 3.3V is already available on the Pi. But this only works for communications on a single PCB or within a single device. For communication between devices over longer distances, a system less prone to interfering signals, like RS232, is used. While the logical structure of the waveform (bitrate, timing, start-, stop-, parity- and data-bits) is the same as for UART, the voltage levels are -15V...-3V for a logical 1 and +15V...+3V for a logical 0. This means there are not only higher (and negative) voltages; their meaning is also inverted. So, if the display expects RS232 levels and gets the 3.3V levels from the Pi, it mostly doesn't recognize the data, and if it does, it's often just garbage. And of course, if you connect RX and TX of the same interface, you get what you expect. But: if the RS232 TX output is not current limited, it could even damage your Pi! There are UART-to-RS232 converter boards out there, but if you like to solder, the boards just contain a MAX3232 (plus four capacitors). This IC also generates the higher (and negative) voltage levels from the 3.3V supply voltage of the Pi. The more common chip is the MAX232 (guess why it's called so), but it is for 5V, not 3.3V, operation. Finally, because the UART and RS232 use the same logical structure, they are often not distinguished, especially by software (programmers). They are often also just called "serial interface", though there are other interfaces like I²C and SPI, which are a type of serial interface but never considered to be "the" serial interface. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197714/"
]
} |
321,422 | I have a little open source project that, for various reasons, I've tried to write in reasonably portable shell script. Its automated integration tests check that hostile characters in path expressions are treated properly, among other things. Users with /bin/sh provided by bash are seeing a failure in a test that I've simplified down to the following:

echo "A bug\\'s life"
echo "A bug\\\\'s life"

On bash, it produces this expected result:

A bug\'s life
A bug\\'s life

With dash, which I've developed against, it does this:

A bug\'s life
A bug\'s life

I'd like to think that I haven't found a bug in dash, that I might be missing something instead. Is there a rational explanation for this? | In

echo "A bug\\'s life"

because those are double quotes, and \ is special inside double quotes, the first \ is understood by the shell as escaping/quoting the second \. So an A bug\'s life argument is being passed to echo.

echo "A bug\'s life"

would have achieved exactly the same. ' not being special inside double quotes, the \ is not removed, so it's the exact same argument that is passed to echo. As explained at Why is printf better than echo?, there's a lot of variation between echo implementations. In UNIX-conformant implementations like dash's echo builtin¹, \ is used to introduce escape sequences: \n for newline, \b for backspace, \0123 for octal sequences... and \\ for backslash itself. Some (non-POSIX) ones require a -e option for that, or do it only when in conformance mode (like bash's when built with the right options, as for the sh of OS/X, or when called with SHELLOPTS=xpg_echo in the environment). So in standard (Unix standard only; POSIX leaves the behaviour unspecified) echo implementations,

echo '\\'

(same as echo "\\\\") outputs one backslash, while in bash when not in conformance mode, echo '\\' will output two backslashes. Best is to avoid echo and use printf instead:

$ printf '%s\n' "A bug\'s life"
A bug\'s life

which works the same in this instance in all printf implementations.

¹ dash's echo is compliant in that regard, but not in that echo -n outputs nothing, while the UNIX specification (POSIX + XSI) requires it to output -n<newline>. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/321422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107455/"
]
} |
321,427 | The problem is that I really don't know if I am confused about PermitRootLogin or if it is not working well. I put it in the sshd_config, and when I connect via ssh, I am able to do su - in order to get root permissions. So shouldn't PermitRootLogin no prevent that? | PermitRootLogin only configures whether root can log in directly via ssh (e.g. ssh [email protected]). When you log in using a different user account, whatever you do in your shell is not influenced by sshd's config. From man sshd_config: PermitRootLogin — Specifies whether root can log in using ssh(1). The argument must be “yes”, “without-password”, “forced-commands-only”, or “no”. The default is “yes”. […] If this option is set to “no”, root is not allowed to log in. You can however use your login.defs or PAM config to limit which users can use the su command: Server Fault: Disable su on machine | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/321427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194664/"
]
} |
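To actually restrict su itself, PAM's wheel module is the usual lever on many distributions; a hedged sketch of the relevant line, which often ships commented out in /etc/pam.d/su (the group name and exact policy vary by distro):

# /etc/pam.d/su: only members of the wheel group may use su
auth required pam_wheel.so use_uid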
321,440 | I have the pid and I just stopped a program using kill -stop PID. Now I want to continue it by doing kill -cont PID, but only if it's already stopped. How would I check to see if it's stopped or running? | You can check whether the process is in the stopped state, which shows up as T in ps output:

[ "$(ps -o state= -p PID)" = T ] && kill -CONT PID

This tests whether the output of ps -o state= -p PID is T and, if so, sends SIGCONT to the process. Replace PID with the actual process ID of the process. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103531/"
]
} |
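A slightly more forgiving variant, in case ps pads the state letter with whitespace on some systems (a sketch; substitute the real PID as above):

# grep -q succeeds if a T appears anywhere in the state field.
ps -o state= -p PID | grep -q T && kill -CONT PID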
321,492 | I am using wget to download a static html page. The W3C Validator tells me the page is encoded in UTF-8. Yet when I cat the file after download, I get a bunch of binary nonsense. I'm on Ubuntu, and I thought the default encoding was UTF-8? That's what my locale file seems to say. Why is this happening and how can I correct it? Also, it looks like Content-Encoding: gzip. Perhaps this makes a diff? This is the simple request:

wget https://www.example.com/page.html

I also tried this:

wget https://www.example.com/page.html -q -O - | iconv -f utf-16 -t utf-8 > output.html

which returned: iconv: illegal input sequence at position 40. cat'ing the file returns binary that looks like this: l�?חu�`�q"�:)s��dġ__��~i��6n)T�$H�#���QJ

Result of xxd output.html | head -20 :

00000000: 1f8b 0800 0000 0000 0003 bd56 518f db44 ...........VQ..D
00000010: 107e a6bf 62d4 8a1e 48b9 d8be 4268 9303 .~..b...H...Bh..
00000020: 8956 082a 155e 7a02 21dd cbd8 3bb6 97ae .V.*.^z.!...;...
00000030: 77cd ee38 39f7 a1bf 9d19 3bb9 0bbd 9c40 w..89.....;....@
00000040: 2088 12c5 de9d 9df9 be99 6f67 f751 9699  .........og.Q..
00000050: 500d 1d79 5eee a265 faec 7151 e4ab 6205 P..y^..e..qQ..b.
00000060: 4dd3 0014 1790 e7d0 77c0 ef2f cbf8 cde3 M.......w../....
00000070: cf1f 7d6c 7d69 ec16 d0d9 c67f 7d7d 56c9 ..}l}i......}}V.
00000080: 04c5 eb33 35fc e49e 2563 e908 ca10 0d45 ...35...%c.....E
00000090: 31ce afcf a022 e77a 34c6 fa46 46be d88f 1....".z4..FF...
000000a0: a41e ab79 446d 76d6 702b cf45 9e7f ba77 ...yDmv.p+.E...w
000000b0: 7dc2 779c 274e cc18 483c 3a12 0f75 f07c }.w.'N..H<:..u.|
000000c0: 5e63 67dd b886 ab48 e550 b5c4 f0e3 db0d ^cg....H.P......
000000d0: 54c1 85b8 8627 2ff3 2ff3 17f9 0626 d31d T....'/./....&..
000000e0: d9a6 e5b5 4076 663f 94ec 7b5a 17cf 7ade ....@vf?..{Z..z.
000000f0: 00d3 0d9f 4fcc d733 ef8d a0bb 0a06 c7eb ....O..3........
00000100: b304 6fb1 b1cc 18ed 90e0 8710 43aa 424f ..o.........C.BO
00000110: 50c7 d0c1 2bac 09be 4d1c 2566 335e 666c P...+...M.%f3^fl
00000120: 1e20 951d 58fd 6774 f3e9 f317 749f 7fc4 . ..X.gt....t...
00000130: d651 cdca f5a7 b0a5 aea4 08ab 055c e4c5 .Q...........\..

Also, strangely, the output file seems to open properly in TextWrangler! | This is a gzip compressed file. You can find this out by running the file command, which figures out the file format from magic numbers in the data (this is how programs such as Text Wrangler figure out that the file is compressed as well):

file output.html
wget -O - … | file -

The server (I guessed it from the content you showed) is sending gzipped data and correctly setting the header Content-Encoding: gzip, but wget doesn't support that. In recent versions, wget sends Accept-encoding: identity to tell the server not to compress or otherwise encode the data. In older versions, you can send the header manually:

wget --header 'Accept-encoding: identity' …

However, this particular server appears to be broken: it sends compressed data even when told not to encode the data in any way. So you'll have to decompress the data manually:

wget -O output.html.gz … && gunzip output.html.gz | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/321492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198965/"
]
} |
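If switching tools is an option, curl can negotiate compression and transparently decode the gzipped body; a hedged sketch using the placeholder URL from the question:

# --compressed sends Accept-Encoding and decompresses the response.
curl --compressed -o output.html https://www.example.com/page.html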
321,559 | I have a sample as shown here:

input.txt

USERS           position   ref   rslt
usr1                       X     B
usr2            2980             C
usr3            3323       P
usr4                             A
usr5            5251       U
usr6            9990             A
usr7            10345      T

I need to print the "rslt" column and the corresponding "USERS"; the output file should be like this:

output.txt

USERS           rslt
usr1            B
usr2            C
usr4            A
usr6            A

I tried to use the awk command but it didn't work. Note that all blank positions of the table are filled with spaces (the number of spaces is different in each row). | In this case, one possible solution is to provide the widths of the fields in the beginning section:

awk 'BEGIN {FIELDWIDTHS = "16 11 6 7"} $4 ~ /[^ ]/ {print $1 $4}'

Fieldwidths may be counted by hand, but for complex headers I like to start with

head -1 f | grep -Po '.*? (?=\S|$)' | awk '{print length}'

UPDATE: ... or, in order to deal with initial and final spaces in the header:

head -1 f | grep -Po '(^ *|\S).*?( (?=\S)|$)' | awk '{print length}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321559",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184985/"
]
} |
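FIELDWIDTHS is a GNU awk extension, so the answer above needs gawk. On a POSIX awk the same slicing can be done with substr; a sketch using the widths from the answer (16/11/6/7, so the rslt column starts at character 34):

# Print the USERS and rslt slices whenever rslt is non-blank.
awk '{
    users = substr($0, 1, 16)
    rslt  = substr($0, 34, 7)
    if (rslt ~ /[^ ]/) print users rslt
}' input.txt > output.txt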
321,561 | I am longing for a tool for Linux that would give me a general idea of what is happening on a machine. Example information I am looking for:

What daemons are installed (www, db, others)
Config locations for those services
What kernel and distro the server is running, and how old the distro is (would love if the tool would also tell whether the distro is still supported)
RAM, CPU, and disk space information
Mounted drives/paths
Information about my account: am I a sudoer, other useful information
General network information (blocked, or rather open, ports)

Edit:
Installed interpreters/compilers and their versions (C, Python, etc.)
X version and any DE installed | You can use inxi; it can be installed on the best-known Linux distros.

Debian users: sudo apt-get install inxi
RHEL/CentOS/Fedora users: sudo yum install inxi
Arch users: sudo pacman -S inxi

Check inxi -h to get the list of options. Here is an example to display the system info:

inxi -v7 -c 0

As for "What daemons are installed (www, db, others)": you can check the installed daemons through the systemd features to analyse the system state. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195527/"
]
} |
321,579 | I've been given a variable which holds a number: daysAgo=1. I would like to expand this variable in a date expression, like this: $(date +%d -d '$daysAgo days ago') What do I need to do so that the $daysAgo variable gets expanded? I tried this without success:

daysAgo=1
exp="'${daysAgo} days ago'"
$(date +%d -d $exp)

| Variables are not expanded inside single quotes; use double quotes instead:

date +%d -d "$daysAgo days ago"

Your second attempt fails because the single quotes embedded in exp become literal characters of the argument rather than shell quoting, so after word splitting date sees '1 as part of its -d operand. If you want the result in a variable, capture the command's output:

day=$(date +%d -d "$daysAgo days ago") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192072/"
]
} |
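A runnable sketch of the quoting fix described above (it relies on GNU date's -d, as in the question):

daysAgo=1

# Double quotes let the shell expand $daysAgo before date sees the string.
day=$(date +%d -d "$daysAgo days ago")
echo "$day"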
321,643 | I am having difficulty finding out how rsync "chooses" the extension for the temporary file created while copying a file when I don't use the --inplace option. Example: I want to copy sourceDirectory/myFile.txt into targetDirectory/ with rsync. While copying myFile.txt into targetDirectory/, rsync will create a file named .myFile.txt.W4zvLi in targetDirectory/. Then rsync will rename .myFile.txt.W4zvLi to myFile.txt. The question is: why does rsync use the W4zvLi extension, and why does it seem to change each time I execute the rsync program? | rsync uses the mktemp(3) POSIX function to generate a unique temporary file name. You pass a template string to the mktemp function, and it will return a file name with any X characters in the template replaced by a random character. In particular, rsync passes .XXXXXX to mktemp. If you want to try it out from the command line, you can use the mktemp(1) binary like so: mktemp -u "/tmp/foo.XXXXXX" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/321643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199077/"
]
} |
321,679 | Is it possible to use gawk's -i inplace option and also print things to stdout? For example, if I wanted to update a file and, if there are any changes, print the name of the file and the changed lines to stderr, I could do something like

find -type f -name 'myfiles' -exec gawk -i inplace '{if(gsub(/pat/, "repl")) { print FILENAME > "/proc/self/fd/2" ; print > "/proc/self/fd/2"; } print;}' {} +

but is there a way to use stdout instead, or a cleaner way to print that block to the alternate stream? | You should use /dev/stderr or /dev/fd/2 instead of /proc/self/fd/2. gawk handles /dev/fd/x and /dev/stderr by itself (regardless of whether the system has those files or not). When you do a:

print "x" > "/dev/fd/2"

gawk does a write(2, "x\n"), while when you do:

print "x" > "/proc/self/fd/2"

since it doesn't treat /proc/self/fd/x specially, it does a:

fd = open("/proc/self/fd/2", O_WRONLY|O_CREAT|O_TRUNC);
write(fd, "x\n");

First, /proc/self/fd is Linux specific, and on Linux they are problematic: the two versions above are not equivalent when stderr is to a regular or other seekable file, or to a socket (for which the latter would fail), not to mention that it wastes a file descriptor. That being said, if you need to write to the original stdout, you need to save it away in another fd, like:

gawk -i inplace '{
  print "goes into the-file"
  print "to stdout" > "/dev/fd/3"
}' the-file 3>&1

gawk does redirect stdout with in-place to the file. It's needed because, for instance, you'd want:

awk -i inplace '{system("uname")}' file

to store the uname output into the file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/321679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109842/"
]
} |
321,687 | I need to add a route that won't be deleted after reboot. I read these two ways of doing it: add ip route add -net 172.X.X.0/24 gw 172.X.X.X dev ethX to the file /etc/network/interfaces, or create the file /etc/network/if-up.d/route with:

#!/bin/sh
route add -net 172.X.X.0/24 gw 172.X.X.X dev ethX

and make it executable: chmod +x /etc/network/if-up.d/route So I'm confused. What is the best way of doing it? | You mentioned /etc/network/interfaces, so it's a Debian system... Create a named routing table. As an example, I have used the name "mgmt" below.

echo '200 mgmt' >> /etc/iproute2/rt_tables

Above, the kernel supports many routing tables and refers to these by unique integers numbered 0-255. A name, mgmt, is also defined for the table. Below, a look at a default /etc/iproute2/rt_tables follows, showing that some numbers are reserved. The choice in this answer of 200 is arbitrary; one might use any number that is not already in use, 1-252.

#
# reserved values
#
255 local
254 main
253 default
0 unspec
#
# local
#

Below, a Debian 7/8 interfaces file defines eth0 and eth1. eth1 is the 172 network. eth0 could use DHCP as well. 172.16.100.10 is the IP address to assign to eth1. 172.16.100.1 is the IP address of the router.

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The production network interface
auto eth0
allow-hotplug eth0
# iface eth0 inet dhcp  # Remove the stanzas below if using DHCP.
iface eth0 inet static
    address 10.10.10.140
    netmask 255.255.255.0
    gateway 10.10.10.1

# The management network interface
auto eth1
allow-hotplug eth1
iface eth1 inet static
    address 172.16.100.10
    netmask 255.255.255.0
    post-up ip route add 172.16.100.0/24 dev eth1 src 172.16.100.10 table mgmt
    post-up ip route add default via 172.16.100.1 dev eth1 table mgmt
    post-up ip rule add from 172.16.100.10/32 table mgmt
    post-up ip rule add to 172.16.100.10/32 table mgmt

Reboot or restart networking.

Update - Expounding on EL

I noticed in a comment that you were "wondering for RHEL as well." In Enterprise Linux ("EL" - RHEL/CentOS/et al), create a named routing table as mentioned above. The EL /etc/sysconfig/network file:

NETWORKING=yes
HOSTNAME=host.sld.tld
GATEWAY=10.10.10.1

The EL /etc/sysconfig/network-scripts/ifcfg-eth0 file, using a static configuration (without NetworkManager, and not specifying "HWADDR" and "UUID" for the example below), follows.

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=10.10.10.140
NETMASK=255.255.255.0
NETWORK=10.10.10.0
BROADCAST=10.10.10.255

The EL /etc/sysconfig/network-scripts/ifcfg-eth1 file (without NetworkManager, and not specifying "HWADDR" and "UUID" for the example below) follows.

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=172.16.100.10
NETMASK=255.255.255.0
NETWORK=172.16.100.0
BROADCAST=172.16.100.255

The EL /etc/sysconfig/network-scripts/route-eth1 file:

172.16.100.0/24 dev eth1 table mgmt
default via 172.16.100.1 dev eth1 table mgmt

The EL /etc/sysconfig/network-scripts/rule-eth1 file:

from 172.16.100.0/24 lookup mgmt

Update for RHEL8

The method described above works with RHEL 6 & RHEL 7 as well as the derivatives, but for RHEL 8 and derivatives, one must first install network-scripts to use the method described above.

dnf install network-scripts

The installation produces a warning that network-scripts will be removed in one of the next major releases of RHEL and that NetworkManager provides ifup/ifdown scripts as well. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/321687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103808/"
]
} |
321,697 | This question is inspired by Why is using a shell loop to process text considered bad practice ? I see these constructs for file in `find . -type f -name ...`; do smth with ${file}; done and for dir in $(find . -type d -name ...); do smth with ${dir}; done being used here almost on a daily basis even if some people take the time to comment on those posts explaining why this kind of stuff should be avoided... Seeing the number of such posts (and the fact that sometimes those comments are simply ignored) I thought I might as well ask a question: Why is looping over find 's output bad practice and what's the proper way to run one or more commands for each file name/path returned by find ? | Why is looping over find 's output bad practice? The simple answer is: Because filenames can contain any character. Therefore, there is no printable character you can reliably use to delimit filenames. Newlines are often used (incorrectly) to delimit filenames, because it is unusual to include newline characters in filenames. However, if you build your software around arbitrary assumptions, you at best simply fail to handle unusual cases, and at worst open yourself up to malicious exploits that give away control of your system. So it's a question of robustness and safety. If you can write software in two different ways, and one of them handles edge cases (unusual inputs) correctly, but the other one is easier to read, you might argue that there is a tradeoff. (I wouldn't. I prefer correct code.) However, if the correct, robust version of the code is also easy to read, there is no excuse for writing code that fails on edge cases. This is the case with find and the need to run a command on each file found. Let's be more specific: On a UNIX or Linux system, filenames may contain any character except for a / (which is used as a path component separator), and they may not contain a null byte. A null byte is therefore the only correct way to delimit filenames. Since GNU find includes a -print0 primary which will use a null byte to delimit the filenames it prints, GNU find can safely be used with GNU xargs and its -0 flag (and -r flag) to handle the output of find : find ... -print0 | xargs -r0 ... However, there is no good reason to use this form, because: It adds a dependency on GNU findutils which doesn't need to be there, and find is designed to be able to run commands on the files it finds. Also, GNU xargs requires -0 and -r , whereas FreeBSD xargs only requires -0 (and has no -r option), and some xargs don't support -0 at all. So it's best to just stick to POSIX features of find (see next section) and skip xargs . As for point 2— find 's ability to run commands on the files it finds—I think Mike Loukides said it best: find 's business is evaluating expressions -- not locating files. Yes, find certainly locates files; but that's really just a side effect. --Unix Power Tools POSIX specified uses of find What's the proper way to run one or more commands for each of find 's results? To run a single command for each file found, use: find dirname ... -exec somecommand {} \; To run multiple commands in sequence for each file found, where the second command should only be run if the first command succeeds, use: find dirname ... -exec somecommand {} \; -exec someothercommand {} \; To run a single command on multiple files at once: find dirname ... 
-exec somecommand {} +

find in combination with sh

If you need to use shell features in the command, such as redirecting the output or stripping an extension off the filename or something similar, you can make use of the sh -c construct. You should know a few things about this:

Never embed {} directly in the sh code. This allows for arbitrary code execution from maliciously crafted filenames. Also, it's actually not even specified by POSIX that it will work at all. (See next point.)

Don't use {} multiple times, or use it as part of a longer argument. This isn't portable. For example, don't do this: find ... -exec cp {} somedir/{}.bak \; To quote the POSIX specifications for find: If a utility_name or argument string contains the two characters "{}", but not just the two characters "{}", it is implementation-defined whether find replaces those two characters or uses the string without change. ... If more than one argument containing the two characters "{}" is present, the behavior is unspecified.

The arguments following the shell command string passed to the -c option are set to the shell's positional parameters, starting with $0. Not starting with $1. For this reason, it's good to include a "dummy" $0 value, such as find-sh, which will be used for error reporting from within the spawned shell. Also, this allows use of constructs such as "$@" when passing multiple files to the shell, whereas omitting a value for $0 would mean the first file passed would be set to $0 and thus not included in "$@".

To run a single shell command per file, use: find dirname ... -exec sh -c 'somecommand "$1"' find-sh {} \; However, it will usually give better performance to handle the files in a shell loop, so that you don't spawn a shell for every single file found: find dirname ... -exec sh -c 'for f do somecommand "$f"; done' find-sh {} + (Note that for f do is equivalent to for f in "$@"; do and handles each of the positional parameters in turn—in other words, it uses each of the files found by find, regardless of any special characters in their names.)

Further examples of correct find usage: (Note: feel free to extend this list.)
Filter files generated by `find` by parsed output of `file` command
substring removal in find -exec
How to Do this List Comparison with Find?
Using literal empty curly braces {} inside sed command from find -exec
How do I delete file by filename that are set as dates?
bash: Deleting directories not containing given strings
Grep word within a file then copy the file
Remove certain types of files except in a folder | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/321697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22142/"
]
} |