source_id (int64, 1 to 74.7M) | question (string, 0 to 40.2k) | response (string, 0 to 111k) | metadata (dict)
---|---|---|---|
366,553 | Ok, so I am in my anaconda environment and I ran which python. I get /home/comp/anaconda3/envs/env1/bin/python Now if I start tmux, then run source activate env1, then which python, I get /home/comp/anaconda3/bin/python even though I do have my environment activated. How can I make anaconda see the same path inside tmux ? | The solution seems to be to deactivate the conda environment, then start tmux, then reactivate the environment inside tmux. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232497/"
]
} |
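A minimal sketch of the sequence described in the answer above. The environment name `env1` comes from the question; the tmux session name and the newer `conda activate`/`conda deactivate` spellings are assumptions.

```bash
# Workaround from the answer: leave the environment before starting tmux,
# then re-activate it inside the tmux session.
source deactivate          # `conda deactivate` on newer conda releases
tmux new -s work           # start tmux with no environment active

# inside the tmux session:
source activate env1       # `conda activate env1` on newer conda releases
which python               # should now print .../envs/env1/bin/python
```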
366,557 | I updated my /etc/spamassassin/local.cf spamassassin file to update some score rules. However, even after restarting spamassassin (via service), the new score are not shown in spam emails. In fact, nothing in that file seem to influence how spamassassin work. I use exim as my MTA but that should not matter. All packages were installed via apt-get and are at the latest version for 14.04. For example, I have this: score HTML_MESSAGE 2.0 define in /etc/spamassassin/local.cf . I restarted both exim and spamassassin. spamassassing --lint shows that there are no errors int hat file. Then got yet another spam with this: 0.0 HTML_MESSAGE BODY: HTML included in message In the X-Spam-Report field. I ran spamassassin -D < spam and the order of loading of cfg files seems to be wrong: Jun 8 13:34:07.300 [21668] dbg: config: read file /etc/spamassassin/local.cf...Jun 8 13:34:07.600 [21668] dbg: config: read file /var/lib/spamassassin/3.004000/updates_spamassassin_org/50_scores.cf...Jun 8 13:34:07.787 [21668] dbg: config: read file /var/lib/spamassassin/3.004000/updates_spamassassin_org/73_sandbox_manual_scores.cfJun 8 13:34:07.788 [21668] dbg: config: fixed relative path: /var/lib/spamassassin/3.004000/updates_spamassassin_org/local.cf... What is going on? Based on a comment from Centimane : I tried strace -f -e trace=file spamassassin -D < spam with the same result: Spamassassin is reading system files after the local.cf file. Thus, trashing any score changes. From comments, here is the local.cf file, which is more or less the vanilla one. # This is the right place to customize your installation of SpamAssassin.## See 'perldoc Mail::SpamAssassin::Conf' for details of what can be# tweaked.## Only a small subset of options are listed below############################################################################# Add *****SPAM***** to the Subject header of spam e-mails#rewrite_header Subject *****SPAM*****add_header spam Flag _YESNOCAPS_add_header all Checker-Version SpamAssassin _VERSION_ (_SUBVERSION_) on _HOSTNAME_add_header all Status _YESNO_, score=_SCORE_ required=_REQD_ tests=_TESTS_ autolearn=_AUTOLEARN_ bayes=_BAYES_add_header all Report _SUMMARY_# Save spam messages as a message/rfc822 MIME attachment instead of# modifying the original message (0: off, 2: use text/plain instead)## report_safe 1# Set which networks or hosts are considered 'trusted' by your mail# server (i.e. 
not spammers)## trusted_networks 212.17.35.# Set file-locking method (flock is not safe over NFS, but is faster)## lock_method flock# Set the threshold at which a message is considered spam (default: 5.0)#required_score 5.0# Use Bayesian classifier (default: 1)#use_bayes 1bayes_path /var/lib/spamassassin/bayes/bayesbayes_file_mode 0777# Bayesian classifier auto-learning (default: 1)#bayes_auto_learn 1# Set headers which may provide inappropriate cues to the Bayesian# classifier#bayes_ignore_header X-Bogositybayes_ignore_header X-Spam-Flagbayes_ignore_header X-Spam-Status# Some shortcircuiting, if the plugin is enabled# ifplugin Mail::SpamAssassin::Plugin::Shortcircuit## default: strongly-whitelisted mails are *really* whitelisted now, if the# shortcircuiting plugin is active, causing early exit to save CPU load.# Uncomment to turn this on#shortcircuit USER_IN_WHITELIST onshortcircuit USER_IN_DEF_WHITELIST onshortcircuit USER_IN_ALL_SPAM_TO onshortcircuit SUBJECT_IN_WHITELIST on# the opposite; blacklisted mails can also save CPU#shortcircuit USER_IN_BLACKLIST onshortcircuit USER_IN_BLACKLIST_TO onshortcircuit SUBJECT_IN_BLACKLIST on# if you have taken the time to correctly specify your "trusted_networks",# this is another good way to save CPU## shortcircuit ALL_TRUSTED on# and a well-trained bayes DB can save running rules, too#shortcircuit BAYES_99 spamshortcircuit BAYES_00 hamblacklist_from wokfrance.comblacklist_from brother-mailer.comblacklist_from *.sd-soft.netblacklist_from woifrance.comblacklist_from adimacocl.netblacklist_from bletspuranawyat.netblacklist_from sd-soft.netblacklist_from m1web-track.comblacklist_from winntoniecline.netblacklist_from kafod.orgblacklist_from *.kafod.orgblacklist_from [email protected]_from *.bhlive.co.ukblacklist_from *.regionasm.netblacklist_from regionasm.net## Tweaks.score AC_BR_BONANZA 1.0score ADMITS_SPAM 10.0score A_HREF_TO_REMOVE 2.0score DEAR_FRIEND 4.0score FREEMAIL_FORGED_FROMDOMAIN 4.0score FREEMAIL_FROM 1.0score FROM_LOCAL_HEX 9.0score HTML_MESSAGE 2.0score RCVD_IN_MSPIKE_BL 2.0score RCVD_IN_SORBS_WEB 2.0score RCVD_IN_XBL 3.0score RDNS_NONE 2.0score SCVD_IN_DNSWL_BLOCKED 3.0score T_DKIM_INVALID 1.0score T_FREEMAIL_DOC_PDF 3.0score T_REMOTE_IMAGE 3.0score URIBL_BLOCKED 3.0score URIBL_DBL_SPAM 3.0score URIBL_JP_SURBL 3.0score URIBL_WS_SURBL 3.0endif # Mail::SpamAssassin::Plugin::Shortcircuit And the whole output of spamassassin -D is too big for this. However, the relevant lines are above. If you want more information, tell me what to look for and I will add it. | The problem is that you are setting the HTML_MESSAGE score inside the Shortcircuit plugin . But that plugin comes disabled by default. Try to set the score in the last line of the file, after the Shortcircuit endif instruction: # Some shortcircuiting, if the plugin is enabled# ifplugin Mail::SpamAssassin::Plugin::Shortcircuit# [...]endif # Mail::SpamAssassin::Plugin::Shortcircuitscore HTML_MESSAGE 2.0 If you prefer to enable the Shortcuit plugin, you need to uncomment it from the /etc/spamassassin/v320.pre file: # Shortcircuit - stop evaluation early if high-accuracy rules fire# loadplugin Mail::SpamAssassin::Plugin::Shortcircuit | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10526/"
]
} |
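A hedged sketch of applying the fix from the answer above: keep score overrides outside the `ifplugin ... endif` block (appending to the end of `local.cf` achieves that), then lint and restart. Paths and the service name follow the Debian/Ubuntu layout used in the question.

```bash
# Put the override after the Shortcircuit endif; appending places it last.
# (The copy inside the ifplugin block can stay: it is ignored while the
#  plugin is disabled, and the later line wins anyway.)
printf '\nscore HTML_MESSAGE 2.0\n' | sudo tee -a /etc/spamassassin/local.cf

sudo spamassassin --lint              # prints nothing if the config parses cleanly
sudo service spamassassin restart     # pick up the new score
```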
366,572 | I try to run a series of commands as a whole inside the main shell, but the way I was teached only works inside the subshell: echo $BASHPID18884 (echo "hello $BASHPID";sleep 5;echo "hello again $BASHPID")hello 22268hello again 22268 I also tried: . (echo "hello $BASHPID";sleep 5;echo "hello again $BASHPID")source (echo "hello $BASHPID";sleep 5;echo "hello again $BASHPID") to use the source command, because I learned it forces the script to run inside the main shell. I guess, it would work, if I put the commands inside a file and run it with the source command, but I would like to know if there is a way beyound a script file. | Instead of ( something ) , which launches something in a subshell, use { something ; } , which launches something in the current shell You need spaces after the { , and should also have a ; (or a newline) before the } . Ex: $ { echo "hello $BASHPID";sleep 5;echo "hello again $BASHPID" ; }hello 3536hello again 3536 Please note however that if you launch some complex commands (or piped commands), those will be in a subshell most of the time anyway. And the "portable" way to get your current shell's pid is $$ . So I'd instead write your test as: { echo "hello $$"; sleep 5 ; echo "hello again $$" ; } (the sleep is not really useful anyway here) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
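A small follow-up example for the caveat mentioned in the answer: a `{ ...; }` group stays in the current shell, but the elements of a pipeline normally still run in subshells.

```bash
echo "main shell:       $$"
{ echo "brace group:      $BASHPID"; }               # same PID as the main shell
( echo "subshell:         $BASHPID" )                # a different PID
echo hi | { echo "pipeline element: $BASHPID"; }     # usually a different PID too
```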
366,581 | Is there a way to print an entire array ([key]=value) without looping over all elements? Assume I have created an array with some elements: declare -A arrayarray=([a1]=1 [a2]=2 ... [b1]=bbb ... [f500]=abcdef) I can print back the entire array with for i in "${!array[@]}"doecho "${i}=${array[$i]}"done However, it seems bash already knows how to get all array elements in one "go" - both keys ${!array[@]} and values ${array[@]} . Is there a way to make bash print this info without the loop? Edit: typeset -p array does that! However I can't remove both prefix and suffix in a single substitution: a="$(typeset -p array)"b="${a##*(}"c="${b%% )*}" Is there a cleaner way to get/print only the key=value portion of the output? | I think you're asking two different things there. Is there a way to make bash print this info without the loop? Yes, but they are not as good as just using the loop. Is there a cleaner way to get/print only the key=value portion of the output? Yes, the for loop. It has the advantages that it doesn't require external programs, is straightforward, and makes it rather easy to control the exact output format without surprises. Any solution that tries to handle the output of declare -p ( typeset -p )has to deal with a) the possibility of the variables themselves containing parenthesis or brackets, b) the quoting that declare -p has to add to make it's output valid input for the shell. For example, your expansion b="${a##*(}" eats some of the values, if any key/value contains an opening parenthesis. This is because you used ## , which removes the longest prefix. Same for c="${b%% )*}" . Though you could of course match the boilerplate printed by declare more exactly, you'd still have a hard time if you didn't want all the quoting it does. This doesn't look very nice unless you need it. $ declare -A array=([abc]="'foobar'" [def]='"foo bar"')$ declare -p arraydeclare -A array='([def]="\"foo bar\"" [abc]="'\''foobar'\''" )' With the for loop, it's easier to choose the output format as you like: # without quoting$ for x in "${!array[@]}"; do printf "[%s]=%s\n" "$x" "${array[$x]}" ; done[def]="foo bar"[abc]='foobar'# with quoting$ for x in "${!array[@]}"; do printf "[%q]=%q\n" "$x" "${array[$x]}" ; done[def]=\"foo\ bar\"[abc]=\'foobar\' From there, it's also simple to change the output format otherwise (remove the brackets around the key, put all key/value pairs on a single line...). If you need quoting for something other than the shell itself, you'll still need to do it by yourself, but at least you have the raw data to work on. (If you have newlines in the keys or values, you are probably going to need some quoting.) With a current Bash (4.4, I think), you could also use printf "[%s]=%s" "${x@Q}" "${array[$x]@Q}" instead of printf "%q=%q" . It produces a somewhat nicer quoted format, but is of course a bit more work to remember to write. (And it quotes the corner case of @ as array key, which %q doesn't quote.) If the for loop seems too weary to write, save it a function somewhere (without quoting here): printarr() { declare -n __p="$1"; for k in "${!__p[@]}"; do printf "%s=%s\n" "$k" "${__p[$k]}" ; done ; } And then just use that: $ declare -A a=([a]=123 [b]="foo bar" [c]="(blah)")$ printarr aa=123b=foo barc=(blah) Works with indexed arrays, too: $ b=(abba acdc)$ printarr b0=abba1=acdc | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/366581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77319/"
]
} |
366,583 | I'm making a minor change to a very large file image file (just a few pixels difference) which takes a long time to transfer over the network. Is there a way for rsync to identify the difference in the file and only send the small diff over the network? | rsync delta-transfer algorithm does this by default. Quoting rsync manpage : DESCRIPTION Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. It offers a large number of options that control every aspect of its behavior and permit very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination . Rsync is widely used for backups and mirroring and as an improved copy command for everyday use. If you want to disable it, you will have to use the -W or --whole-file option. -W, --whole-file This option disables rsync's delta-transfer algorithm, which causes all transferred files to be sent whole. The transfer may be faster if this option is used when the bandwidth between the source and destination machines is higher than the bandwidth to disk (especially when the "disk" is actually a networked filesystem). This is the default when both the source and destination are specified as local paths, but only if no batch-writing option is in effect. If you really know how much your file have changed doing, you could even optimize this delta transfer behavior by tunning your delta block size: -B, --block-size=BLOCKSIZE This forces the block size used in rsync's delta-transfer algorithm to a fixed value. It is normally selected based on the size of each file being updated. See the technical report for details. And if you want more information about the algorithm itself, you can find it here: The Rsync algorithm | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9035/"
]
} |
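A few illustrative invocations of the options quoted above; host names and paths are placeholders, not taken from the question.

```bash
# Over the network the delta-transfer algorithm is already on by default:
rsync -av --progress bigimage.img user@remotehost:/backups/

# Disable it explicitly (useful when both ends are fast local disks):
rsync -av --whole-file bigimage.img /mnt/backup/

# Pin the delta block size if you know roughly how the file changes:
rsync -av --block-size=131072 bigimage.img user@remotehost:/backups/
```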
366,613 | Suddenly all the available disk space on / has disappeared. If I make room in the disk (by deleting ~50GB of stuff, for example), after a few minutes I am back to 0 available disk space (according to df ). Clearly, some process is eating up disk space at a rapid rate, but I can't figure out what it is. One thing is certain, though: whatever it is, it must be creating many small files, because there are no files bigger than 10GB on the disk, and all the ones bigger than 1GB are much older than today. How can I find what's eating up disk space? FWIW, only df sees the problem, not du . For example, below I show several "snapshots" from du and df taken 60s. apart. (I did this after I had made some room in the disk.) Notice how du 's output remains steady (at 495G ), but df shows a steadily shrinking amount of available space. (I've followed the recommendation given here . IOW, /mnt/root is pointing to / .) # while true; do du -sh /mnt/root && df -h /mnt/root; sleep 60; done495G /mnt/rootFilesystem Size Used Avail Use% Mounted on/dev/sdb1 880G 824G 12G 99% /mnt/root495G /mnt/rootFilesystem Size Used Avail Use% Mounted on/dev/sdb1 880G 825G 11G 99% /mnt/root495G /mnt/rootFilesystem Size Used Avail Use% Mounted on/dev/sdb1 880G 827G 8.9G 99% /mnt/root495G /mnt/rootFilesystem Size Used Avail Use% Mounted on/dev/sdb1 880G 827G 8.1G 100% /mnt/root495G /mnt/rootFilesystem Size Used Avail Use% Mounted on/dev/sdb1 880G 828G 7.5G 100% /mnt/root | You are dealing with deleted files, that is why du does not register used space, but df does. Deleted files only disappear after the owner process is stopped; they remain in use while that does not happen. So to find the culprit process, I recommend you doing: sudo lsof -nP | grep '(deleted)' Then for killing the process. sudo kill -9 $(lsof | grep deleted | cut -d " " -f4) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
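A slightly more defensive variant of the commands in the answer above (a sketch, not a drop-in): `lsof +L1` lists only open files whose link count is below 1, i.e. deleted-but-still-open files, and the PIDs come from column 2 of that output. GNU `xargs -r` is assumed.

```bash
# list deleted-but-open files on the filesystem in question
sudo lsof -nP +L1 /mnt/root

# review the list first, then terminate the owners (try plain TERM before -9)
sudo lsof -nP +L1 /mnt/root | awk 'NR > 1 {print $2}' | sort -u | xargs -r sudo kill
```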
366,618 | I am using my ubuntu system as an internet gateway. I am searching for a Network monitoring tool (Web based or commandline based) with which I can see which computers from my network are communicating which domains and IP address on the internet Also, if I can find out the top domains or IP to/from which data is sent or received from. The thing is some system from my network is throwing bruteforce attacks and spamming outwards. I want to know exactly which system is sending out data and is causing problems to me. All help, advices would be appreciated !Thanks | You are dealing with deleted files, that is why du does not register used space, but df does. Deleted files only disappear after the owner process is stopped; they remain in use while that does not happen. So to find the culprit process, I recommend you doing: sudo lsof -nP | grep '(deleted)' Then for killing the process. sudo kill -9 $(lsof | grep deleted | cut -d " " -f4) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213274/"
]
} |
366,640 | I am reading about bootstrapping and am confused because the term is used so much in tech (specially bootstrap itself as the CSS framework). But as far as I know bootstrapping in terms of Linux machines is this: http://www.tldp.org/LDP/LG/issue70/ghosh.html - Describes a way to start up a computer. Is this correct? If so, then bootstrapping is boot loading? | In the general sense, "bootstrapping" is a process through which a complex system is set up using a much simpler system. A bootstrap system (the simpler system) is in itself inherently incomplete. Bootstrapping an OS ("booting it") includes getting the computer's firmware (BIOS, or equivalent) to run a simple program which is sometimes located on a fixed location on disk, which in turn starts more complex initialisation routines (see first and second stage bootloaders ). Bootstrapping a compiler is done by compiling a simple compiler that can handle a subset of a language in which the full compiler is written, possibly in several successive steps. The term is also used in business and in other fields to describe the use of intermediate stages of investment/development needed to initiate later stages of increasing complexity and/or size. From the Wikipedia article on Bootstrapping : Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pulling the boots on. The saying "to pull oneself up by one's bootstraps" was already in use during the 19th century as an example of an impossible task. Related question: Which man page describes the process of a computer turning on? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232401/"
]
} |
366,667 | All the solutions I've found on Google are for different distro or older version of Debian and Shift+Numlock didn't resolve the issue, Also I can't find the Preferences->Keyboard->Mouse Keys. I'm using Gnome. There's no way to change Region Language and keyboard format, I only have US / Imperial and I'm using AZERTY format. I'm so confused. My numlock works somehow because when I press 4, the mouse cursor goes to the left direction (8 to the upper, 6 to the right, 2 to the down etc....) I can also use the keys to page up or down etc... So physically the numlock works (at least the directions and page up/down, position 1 and end, print etc.) I do not know why the number functionalilites are not activated. Do I have to configure something? I really appreciate any kind of help. | In the general sense, "bootstrapping" is a process through which a complex system is set up using a much simpler system. A bootstrap system (the simpler system) is in itself inherently incomplete. Bootstrapping an OS ("booting it") includes getting the computer's firmware (BIOS, or equivalent) to run a simple program which is sometimes located on a fixed location on disk, which in turn starts more complex initialisation routines (see first and second stage bootloaders ). Bootstrapping a compiler is done by compiling a simple compiler that can handle a subset of a language in which the full compiler is written, possibly in several successive steps. The term is also used in business and in other fields to describe the use of intermediate stages of investment/development needed to initiate later stages of increasing complexity and/or size. From the Wikipedia article on Bootstrapping : Tall boots may have a tab, loop or handle at the top known as a bootstrap, allowing one to use fingers or a boot hook tool to help pulling the boots on. The saying "to pull oneself up by one's bootstraps" was already in use during the 19th century as an example of an impossible task. Related question: Which man page describes the process of a computer turning on? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232571/"
]
} |
366,746 | When I select multiple files within ranger (using <Space> or V ), how do I move these selected files to another directory? I've tried to use dd and pp , but this only moves the file that's currently highlighted. | With the files / directories marked, press dd , then navigate to the directory you want to paste them in and press p . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/366746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70343/"
]
} |
366,780 | I'm using the Perl rename command line tool to search recursively through a directory to rename any directories as well as files it finds. The issue I'm running into is the rename command will rename a sub-directory of a file then attempt to rename the parent directory of the same file. This will fail because the sub-directory has been renamed resulting in a "No such file or directory" Command: rename -f 's/foo/bar/' **rename -f 's/Foo/Bar/' ** For example, here is an original file that I would like to replace 'foo' with 'bar' File: /test/foo/com/test/foo/FooMain.java Failure: Can't rename /test/foo/com/test/foo/FooMain.java /test/bar/com/test/foo/FooMain.java: No such file or directory Preferred File: /test/bar/com/test/bar/BarMain.java You can see from the error message that it's attempting to rename the parent directory but at that point the subdirectory has already been changed resulting in the file not found error. Is there parameters for the rename command that will fix this or do I need to go about this in a different way? | I would go about this in a different way - specifically, using a depth-first search in place of the shell globstar ** For example, using GNU find , given: $ tree.βββ dir βββ foo βΒ Β βββ baz βΒ Β βββ MainFoo.c βββ Foo βββ baz βββ MainFoo.c5 directories, 2 files then find . -depth -iname '*foo*' -execdir rename -- 's/Foo/Bar/;s/foo/bar/' {} + results in $ tree.βββ dir βββ bar βΒ Β βββ baz βΒ Β βββ MainBar.c βββ Bar βββ baz βββ MainBar.c5 directories, 2 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232663/"
]
} |
366,790 | I'm not sure if this is the best place to ask this - please point me in the right direction if there's a better place. Let's say, hypothetically, that I have two machines - A is a development machine, and B is a production machine. A has software like a compiler that can be used to build software from source, while B does not. On A, I can easily build software from source by following the usual routine: ./configuremake Then, I can install the built software on A by running sudo make install . However, what I'd really like to do is install the software that I just built on B. What is the best way to do that? There are a few options that I have considered: Use a package manager to install software on B: this isn't an option for me because the software available in the package manager is very out of date. Install the compiler and other build tools on B: I'd rather not install build tools on the production machine due to various constraints. Manually copy the binaries from A to B: this is error-prone, and I'd like to make sure that the binaries are installed in a consistent manner across production machines. Install only make on B, transfer the source directory, and run sudo make install on B: this is the best solution I've found so far, but for some reason (perhaps clock offsets), make will attempt to re-build the software that should have already been built, which fails since the build tools aren't installed on B. Since my machines also happen to have terrible I/O speeds, transferring the source directory takes a very long time. What would be really nice is if there were a way to make some kind of package containing the built binaries that can be transferred and executed to install the binaries and configuration files. Does any such tool exist? | I would go about this in a different way - specifically, using a depth-first search in place of the shell globstar ** For example, using GNU find , given: $ tree.βββ dir βββ foo βΒ Β βββ baz βΒ Β βββ MainFoo.c βββ Foo βββ baz βββ MainFoo.c5 directories, 2 files then find . -depth -iname '*foo*' -execdir rename -- 's/Foo/Bar/;s/foo/bar/' {} + results in $ tree.βββ dir βββ bar βΒ Β βββ baz βΒ Β βββ MainBar.c βββ Bar βββ baz βββ MainBar.c5 directories, 2 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70299/"
]
} |
366,898 | From below command able to generate Base64 pin for only first depth certificate. But need to generate pin for all depth of certificate. openssl s_client -servername example.com -connect example.com:443 -showcerts | openssl x509 -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64 Gives only one key instead of three, cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs= So, how can we generate all three level of pins? | Although I mostly concur with Romeo that you should have the cert files on the server already, if you do need to process the multiple certs from one s_client you can do something like: openssl s_client ..... -showcerts \ | awk '/-----BEGIN/{f="cert."(n++)} f{print>f} /-----END/{f=""}' # or input from bundle or chain file for c in cert.*; do openssl x509 <$c -noout -pubkey ..... done rm cert.* # use better temp name/location if you want | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62019/"
]
} |
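A variant of the loop above, as a sketch: it splits the presented chain into a temporary directory and prints one SPKI pin per certificate. `openssl pkey` replaces the `openssl rsa` step from the question so non-RSA keys in the chain are handled too; the host name is the question's placeholder.

```bash
tmpdir=$(mktemp -d)
openssl s_client -servername example.com -connect example.com:443 -showcerts </dev/null 2>/dev/null |
  awk -v d="$tmpdir" '/-----BEGIN CERTIFICATE-----/{f=d"/cert."(n++)} f{print>f} /-----END CERTIFICATE-----/{f=""}'

for c in "$tmpdir"/cert.*; do
  printf '%s: ' "$c"
  openssl x509 -in "$c" -noout -pubkey |
    openssl pkey -pubin -outform der |
    openssl dgst -sha256 -binary |
    openssl enc -base64
done
rm -rf "$tmpdir"
```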
366,949 | I am using OpenStack Cloud and using LVM on RHEL 7 to manage volumes. As per my use case, I should be able to detach and attach these volumes to different instances. While updating fstab, I have used defaults,nofail for now but I am not sure what exactly I should be using. I am aware of these options: rw, nofail, noatime, discard, defaults But I don't how to use them. What should be the ideal configuration for my use case ? | As said by @ilkkachu, if you take a look at the mount(8) manpage, all your doubts should go away. Quoting the manpages: -w, --rw, --read-write Mount the filesystem read/write. This is the default. A synonym is -o rw. Means : Not needed at all, since rw is the default, and it is part of the defaults option nofail Do not report errors for this device if it does not exist. Means : If the device is not enable after you boot and mount it using fstab, no errors will be reported. You will need to know if a disk can be ignored if not mounted. Pretty useful on usb drivers, but i see no point on using this on a server... noatime Do not update inode access times on this filesystem (e.g., for faster access on the news spool to speed up news servers). Means : No read operation is a "pure" read operation on filesystems. Even if you only cat file for example, a little write operation will update the last time the inode of this file was accessed. It's pretty useful on some situations(like caching servers), but it can be dangerous if used on sync technologies like Dropbox. I'm no one to judge here what is best for you, if noatime set or ignored... discard/nodiscard Controls whether ext4 should issue discard/TRIM commands to the underlying block device when blocks are freed.This is useful for SSD devices and sparse/thinly -provisioned LUNs, but it is off by default until sufficient testing has been done. Means : TRIM feature from ssds . Take your time to read on this guy, and probe if your ssd support this feature(pretty much all modern ssds suport it). hdparm -I /dev/sdx | grep "TRIM supported" will tell you if trim is supported on your ssd. As for today, you could achieve better performance and data health by Periodic trimming instead of a continuous trimming on your fstab . There is even a in-kernel device blacklist for continuous trimming since it can cause data corruption due to non-queued operations. defaults Use default options: rw, suid, dev, exec, auto, nouser, and async. tl;dr: on your question, rw can be removed( defaults already imply rw), nofail is up to you, noatime is up to you, the same way discard is just up to your hardware features. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/366949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135219/"
]
} |
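An illustrative starting point based on the answer above. The device name, mount point and filesystem type are assumptions, not taken from the question; adjust the fields for your volumes.

```bash
# Example /etc/fstab entry (defaults already implies rw; nofail suits volumes
# that may be detached; noatime is optional, see the trade-offs above):
#   /dev/vdb1  /data  ext4  defaults,nofail,noatime  0  2

# Check whether the block device advertises discard/TRIM support at all:
lsblk --discard /dev/vdb                              # non-zero DISC-GRAN/DISC-MAX means supported
sudo hdparm -I /dev/vdb | grep -i "TRIM supported"    # for SATA devices

# Periodic trimming is usually preferred over the continuous `discard` option:
sudo fstrim -v /data                         # run manually or from cron
sudo systemctl enable --now fstrim.timer     # where the systemd timer exists
```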
366,973 | I need to run some memory heavy tests in a remote computer through SSH. Last time I did this, the computer stopped responding, and it was necessary for someone to physically reboot it. Is there a way I can set it up so that the system restarts instead of freezing if too much memory is being used? (I do have root access). The kernel version is 4.9.0. | To monitor/recover the control of a "unstable"/starver server, I would advise to use an hardware, or failing that a software watchdog; in Debian you can install it with: sudo apt-get install watchdog Then you edit /etc/watchdog.conf and add thresholds or tests; from the top of my head, the watchdog is also activated as such that if the kernel does not see it for a good while it reboots. e.g. if a software routine does not talk in a fixed time with /dev/watchdog0 or something similar. For instance, you can define load thresholds in /etc/watchdog.conf : max-load-1 = 40max-load-5 = 18max-load-15 = 12 Be aware also that some boards/chipsets come with built-in watchdogs; if I am not wrong the Arm A20 is one of them. From man watchdog The Linux kernel can reset the system if serious problems are detected. This can be implemented via special watchdog hardware, or via a slightly less reliable software-only watchdog inside the kernel. Either way, there needs to be a daemon that tells the kernel the system is working fine. If the daemon stops doing that, the system is reset. watchdog is such a daemon. It opens /dev/watchdog, and keeps writing to it often enough to keep the kernel from resetting, at least once per minute. Each write delays the reboot time another minute. After a minute of inactivity the watchdog hardware will cause the reset. In the case of the software watchdog the ability to reboot will depend on the state of the machines and interrupts. The watchdog daemon can be stopped without causing a reboot if the device /dev/watchdog is closed correctly, unless your kernel is compiled with the CONFIG_WATCHDOG_NOWAYOUT option enabled. see also Raspberry Pi and Arduino: Building Reliable Systems With WatchDog Timers | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/366973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164427/"
]
} |
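A hedged sketch of a software-watchdog setup along the lines of the answer above. The package name, kernel module and config path are the common Debian-style defaults; the thresholds are examples to tune so that a runaway memory test trips a reboot instead of hanging the box.

```bash
sudo apt-get install watchdog
sudo modprobe softdog                 # in-kernel software watchdog if no hardware one

# Lines to add to /etc/watchdog.conf (values are examples):
#   watchdog-device = /dev/watchdog
#   max-load-1      = 24              # reboot if the 1-minute load stays above 24
#   min-memory      = 65536           # pages; ~256 MiB free with 4 KiB pages

sudo systemctl enable --now watchdog  # start the daemon that pets /dev/watchdog
```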
367,000 | I have a file consisting of two columns with a bunch of numbers, and I'd like to search for and find lines in which the second column starts with 1.008 or 1.009 or 1.01, but I'd like to have both the 1st and 2nd column printed. I tried: grep -Ev '^1.008|^1.009|^1.01' but it doesn't work. | When searching one field in tabulated data, awk is your golden ticket: awk '$2 ~ /^1.0(0[89]|1$)/ { print $1,$2 }' /path/to/inputfile This will apply the pattern you specify ("starts with 1.008 or 1.009 or is equal to 1.01") to the second field, and for matches, output the first and second fields. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232882/"
]
} |
367,008 | On Unix systems path names have usually virtually no length limitation (well, 4096 characters on Linux)... except for socket files paths which are limited to around 100 characters (107 characters on Linux ). First question: why such a low limitation? I've checked that it seems possible to work around this limitation by changing the current working directory and creating in various directories several socket files all using the same path ./myfile.sock : the client applications seem to correctly connect to the expected server processes even-though lsof shows all of them listening on the same socket file path. Is this workaround reliable or was I just lucky? Is this behavior specific to Linux or may this workaround be applicable to other Unixes as well? | Compatibility with other platforms, or compatibility with older stuff to avoid overruns while using snprintf() and strncpy() . Michael Kerrisk explain in his book at the page 1165 - Chapter 57, Sockets: Unix domain : SUSv3 doesnβt specify the size of the sun_path field. Early BSD implementations used 108 and 104 bytes, and one contemporary implementation (HP-UX 11) uses 92 bytes. Portable applications should code to this lower value, and use snprintf() or strncpy() to avoid buffer overruns when writing into this field. Docker guys even made fun of it, because some sockets were 110 characters long: lol 108 chars ETOOMANY This is why LINUX uses a 108 char socket. Could this be changed? Of course. And this, is the reason why in the first place this limitation was created on older Operating Systems: Why is the maximal path length allowed for unix-sockets on linux 108? Quoting the answer: It was to match the space available in a handy kernel data structure. Quoting "The Design and Implementation of the 4.4BSD Operating System" by McKusick et. al. (page 369): The memory management facilities revolve around a data structure called an mbuf. Mbufs, or memory buffers, are 128 bytes long, with 100 or 108 bytes of this space reserved for data storage. Other OSs(unix domain sockets): OpenBSD : 104 characters FreeBSD : 104 characters Mac OS X 10.9 : 104 characters | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/367008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53965/"
]
} |
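For completeness, a sketch of the chdir workaround the question asks about. socat is an assumption here (any tool that passes a relative path to bind()/connect() would behave the same), and the directory is just a demo path.

```bash
demo=/tmp/some/deeply/nested/directory/for/the/socket/demo
mkdir -p "$demo"

# server: bind the socket via a short *relative* path from inside the directory
( cd "$demo" && exec socat UNIX-LISTEN:./app.sock,fork EXEC:/bin/cat ) &
server=$!
sleep 1

# client: chdir the same way, so connect() also sees a short relative path
( cd "$demo" && echo hello | socat - UNIX-CONNECT:./app.sock )

kill "$server"
```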
367,039 | Bash, for instance, is located at /bin/bash; this means it is a command, and each command has the three standard streams (0, 1, 2): standard input, standard output, and standard error. Is this also 100 percent true for the shell, or is there something different, given the special role of the shell as a command or process? | It's the same as any other program. This allows you to redirect and pipe the I/O like other programs. echo "cat filename" | bash will execute the cat filename command when bash reads its standard input from the pipe. bash -c "echo foo" > filename will execute the echo foo command, and the output will be redirected to the file. On Unix, there's nothing "special" about the shell. It's just an ordinary program whose primary purpose is executing other programs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
367,055 | I have a file, that has trash (binary header and footer) at the beginning and end of the file. I would like to know how to nuke these bytes. For an example, let's assume 25 bytes from the beginning. And, 2 bytes from the end. I know I can use truncate and dd, but truncate doesn't work with a stream and it seems kind of cludgey to run two commands on the hard file. It would be nicer if truncate , knowing how big the file was, could cat the file to dd . Or, if there was a nicer way to do this? | You can combine GNU tail and head : tail -c +26 file | head -c -2 will output the contents of file starting at byte 26, and stopping two bytes (minus two -2 ) before the end. ( -c operates on bytes, not characters.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/367055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
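An alternative sketch for systems whose `head` does not accept the GNU-only negative byte count used above: compute the size first and keep everything except the 25 leading and 2 trailing bytes (the counts from the example).

```bash
size=$(stat -c %s file)          # GNU stat; use `stat -f %z file` on BSD/macOS
tail -c +26 file | head -c "$(( size - 25 - 2 ))"
```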
367,070 | I want to scp files from machineA into my machineB and this is how I am doing it. I am copying bunch of files one by one from primary folder of machineA to primary folder of machineB and secondary folder of machineA to secondary folder of machineB . trinity@machineB:~$ scp trinity@machineA:/data01/primary/* /data01/primary/trinity@machineB:~$ scp trinity@machineA:/data02/secondary/* /data02/secondary/ Is there any way by which I can copy multiple files in parallel? Like five files at a time from a folder? So instead of copying one files at a time, I want to copy five files from primary or secondary folders respectively? Basically I want to copy whatever is there in primary and secondary folders of machineA into machineB parallely. I also have GNU Parallel installed on my box if I can use that. I tried below command but it doesn't work. I was expecting that it should copy 5 files in parallel at a time until everything gets copied from that folder. parallel -j 5 scp trinity@machineA:/data01/primary/* /data01/primary/ Anything wrong with my parallel syntax? What is the best way by which I can copy five files in parallel from a remote folder until everything gets copied from it? | You can combine GNU tail and head : tail -c +26 file | head -c -2 will output the contents of file starting at byte 26, and stopping two bytes (minus two -2 ) before the end. ( -c operates on bytes, not characters.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/367070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64455/"
]
} |
367,090 | This is the set of files given: ./20170524/18909-20170524182010-PBS74C2VTTCKBMKGQC7YUVEJ3U-362511-19614379.XFA.SOFS_EDI./20170524/18909-20170524182009-PBS74C2VTTCKBMKGQC7YUVEJ3U-362514-19614381.XFA.SOFS_EDI./20170524/18909-20170524182010-PBS74C2VTTCKBMKGQC7YUVEJ3U-362532-19614390.XFA.SOFS_EDI./20170524/18909-20170524182009-PBS74C2VTTCKBMKGQC7YUVEJ3U-362503-19614371.XFA.SOFS_EDI./20170524/18909-20170524182009-PBS74C2VTTCKBMKGQC7YUVEJ3U-362506-19614372.XFA.SOFS_EDI This is what's inside in every file. They have different AK9 segments. Like AK9*A , AK9*P , AK9*R or AK9*E . ISA*00* *00* *SS*252649841464SS *01*12564486M *102453*1254*U*025402*21651681320*0*S*>~SS*SS*5648408456SS*0150158011S*20170228*1921*020151018*X*0210540~SS*997*008609070~AK1*SH*107405~AK2*856*362518~AK5*A~AK9*A*1*1*1~SE*6*008609070~GE*1*008604488~IEA*1*008602662~ I'm looking for a file with this pattern: AK9*P or AK9*R or AK9*E | You can list the files containing that pattern with: grep 'AK9\*[PRE]' -l ./20170524/* | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231571/"
]
} |
367,108 | I know what a while loop is. However, I've only seen it work with: while [condition]while ![condition]while TRUE (infinite loop) Where the statement after while has to be either TRUE or FALSE . There is a shell builtin command named : . It is described as a dummy command doing nothing, but I do not know if it is the same here, even if can it be TRUE or FALSE . Maybe it is something different, but what? | The syntax is: while first list of commandsdo second list of commandsdone which runs the second list of commands in a loop as long as the first list of commands (so the last run in that list) is successful. In that first list of commands , you can use the [ command to do various kinds of tests, or you can use the : null command that does nothing and returns success, or any other command. while :; do cmd; done Runs cmd over and over forever as : always returns success. That's the forever loop. You could use the true command instead to make it more legible: while true; do cmd; done People used to prefer : as : was always builtin while true was not (a long time ago; most shells have true builtin nowadays)ΒΉ. Other variants you might see: while [ 1 ]; do cmd; done Above, we're calling the [ command to test whether the "1" string is non-empty (so always true as well) while ((1)); do cmd; done Using the Korn/bash/zsh ((...)) syntax to mimic the while(1) { ...; } of C. Or more convoluted ones like until false; do cmd; done , until ! true ... Those are sometimes aliased like: alias forever='while :; do' So you can do something like: forever cmd; done Few people realise that the condition is a list of commands. For instance, you see people writing: while :; do cmd1 cmd2 || break cmd3done When they could have written: while cmd1 cmd2do cmd3done It does make sense for it to be a list as you often want to do things like while cmd1 && cmd2; do...; done which are command lists as well. In any case, note that [ is a command like any other (though it's built-in in modern Bourne-like shells), it doesn't have to be used solely in the if / while / until condition lists, and those condition lists don't have to use that command more than any other command. ΒΉ : is also shorter and accepts arguments (which it ignores). While the behaviour of true or false is unspecified if you pass it any argument. So one may do for instance: while : you wait; do somethingdone But, the behaviour of: until false is true; do somethingdone is unspecified (though it would work in most shell/ false implementations). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/367108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
367,138 | According to a rapid7 article there are some vulnerable Samba versions allowing a remote code execution on Linux systems: While the WannaCry ransomworm impacted Windows systems and was easily identifiable, with clear remediation steps, the Samba vulnerability will impact Linux and Unix systems and could present significant technical obstacles to obtaining or deploying appropriate remediations. CVE-2017-7494 All versions of Samba from 3.5.0 onwards are vulnerable to a remotecode execution vulnerability, allowing a malicious client to upload ashared library to a writable share, and then cause the server to loadand execute it. Possible attack scenario: Starting from the two factors: The Samba vulnerability isn't fixed yet on some Linux distributions. There is a non-patched local privilege escalation vulnerability on some Linux kernel versions (for example, CVE-2017-7308 on the 4.8.0-41-generic Ubuntu kernel). An attacker can access a Linux machine and elevate privileges using a local exploit vulnerability to gain the root access and installing a possible future ramsomware, similar to this mock up WannaCry ransomware for Linux . Update A newest article "Warning! Hackers Started Using "SambaCry Flaw" to Hack Linux Systems" demonstrate how to use the Sambacry flaw to infecte a linux machine. The prediction came out to be quite accurate, as honeypots set up by the team of researchers from Kaspersky Lab have captured a malware campaign that is exploiting SambaCry vulnerability to infect Linux computers with cryptocurrency mining software. Another security researcher, Omri Ben Bassat, independently discovered the same campaign and named it "EternalMiner" . According to the researchers, an unknown group of hackers has started hijacking Linux PCs just a week after the Samba flaw was disclosed publicly and installing an upgraded version of "CPUminer," a cryptocurrency mining software that mines "Monero" digital currency. After compromising the vulnerable machines using SambaCry vulnerability, attackers execute two payloads on the targeted systems: INAebsGB.so β A reverse-shell that provides remote access to the attackers. cblRWuoCc.so β A backdoor that includes cryptocurrency mining utilities β CPUminer. TrendLab report posted on July 18, 2017: Linux Users Urged to Update as a New Threat Exploits SambaCry How do I secure a Linux system to prevent being attacked? | This Samba new vulnerability is already being called "Sambacry", while the exploit itself mentions "Eternal Red Samba", announced in twitter (sensationally) as: Samba bug, the metasploit one-liner to trigger is just: simple.create_pipe("/path/to/target.so") Potentially affected Samba versions are from Samba 3.5.0 to 4.5.4/4.5.10/4.4.14. If your Samba installation meets the configurations described bellow, the fix/upgrade should be done ASAP as there are already exploits , other exploit in python and metasploit modules out there. More interestingly enough, there are already add-ons to a know honeypot from the honeynet project, dionaea both to WannaCry and SambaCry plug-ins . Samba cry seems to be already being (ab)used to install more crypto-miners "EternalMiner" or double down as a malware dropper in the future . honeypots set up by the team of researchers from Kaspersky Lab have captured a malware campaign that is exploiting SambaCry vulnerability to infect Linux computers with cryptocurrency mining software. Another security researcher, Omri Ben Bassat, independently discovered the same campaign and named it "EternalMiner." 
The advised workaround for systems with Samba installed (which also is present in the CVE notice) before updating it, is adding to smb.conf : nt pipe support = no (and restarting the Samba service) This is supposed to disable a setting that turns on/off the ability to make anonymous connections to the windows IPC named pipes service. From man samba : This global option is used by developers to allow or disallow Windows NT/2000/XP clients the ability to make connections to NT-specific SMB IPC$ pipes. As a user, you should never need to override the default. However from our internal experience, it seems the fix is not compatible with older? Windows versions ( at least some? Windows 7 clients seem to not work with the nt pipe support = no ), and as such the remediation route can go in extreme cases into installing or even compiling Samba. More specifically, this fix disable shares listing from Windows clients, and if applied they have to manually specify the full path of the share to be able to use it. Other known workaround is to make sure Samba shares are mounted with the noexec option. This will prevent the execution of binaries residing on the mounted filesystem. The official security source code patch is here from the samba.org security page . Debian already pushed yesterday (24/5) an update out the door, and the corresponding security notice DSA-3860-1 samba To verify in if the vulnerability is corrected in Centos/RHEL/Fedora and derivates, do: #rpm -q βchangelog samba | grep -i CVEβ resolves: #1450782 β Fix CVE-2017-7494β resolves: #1405356 β CVE-2016-2125 CVE-2016-2126β related: #1322687 β Update CVE patchset There is now an nmap detection script : samba-vuln-cve-2017-7494.nse for detecting Samba versions, or a much better nmap script that checks if the service is vulnerable at http://seclists.org/nmap-dev/2017/q2/att-110/samba-vuln-cve-2017-7494.nse , copy it to /usr/share/nmap/scripts and then update the nmap database , or run it as follows: nmap --script /path/to/samba-vuln-cve-2017-7494.nse -p 445 <target> About long term measures to protect the SAMBA service: The SMB protocol should never be offered directly to the Internet at large. It goes also without saying that SMB has always been a convoluted protocol, and that these kind of services ought to be firewalled and restricted to the internal networks [to which they are being served]. When remote access is needed, either to home or specially to corporate networks, those accesses should be better done using VPN technology. As usual, on this situations the Unix principle of only installing and activating the minimum services required does pay off. Taken from the exploit itself: Eternal Red Samba Exploit -- CVE-2017-7494. Causes vulnerable Samba server to load a shared library in root context. Credentials are not required if the server has a guest account. For remote exploit you must have write permissions to at least one share. Eternal Red will scan the Samba server for shares it can write to. It will also determine the fullpath of the remote share. For local exploit provide the full path to your shared library to load. Your shared library should look something like this extern bool change_to_root_user(void); int samba_init_module(void) { change_to_root_user(); /* Do what thou wilt */ } It is also known systems with SELinux enabled are not vulnerable to the exploit. 
See 7-Year-Old Samba Flaw Lets Hackers Access Thousands of Linux PCs Remotely According to the Shodan computer search engine, more than 485,000 Samba-enabled computers exposed port 445 on the Internet, and according to researchers at Rapid7, more than 104,000 internet-exposed endpoints appeared to be running vulnerable versions of Samba, out of which 92,000 are running unsupported versions of Samba. Since Samba is the SMB protocol implemented on Linux and UNIX systems, so some experts are saying it is "Linux version of EternalBlue," used by the WannaCry ransomware. ...or should I say SambaCry? Keeping in mind the number of vulnerable systems and ease of exploiting this vulnerability, the Samba flaw could be exploited at large scale with wormable capabilities. Home networks with network-attached storage (NAS) devices [that also run Linux] could also be vulnerable to this flaw. See also A wormable code-execution bug has lurked in Samba for 7 years. Patch now! The seven-year-old flaw, indexed as CVE-2017-7494, can be reliably exploited with just one line of code to execute malicious code, as long as a few conditions are met. Those requirements include vulnerable computers that: (a) make file- and printer-sharing port 445 reachable on the Internet, (b) configure shared files to have write privileges, and (c) use known or guessable server paths for those files. When those conditions are satisfied, remote attackers can upload any code of their choosing and cause the server to execute it, possibly with unfettered root privileges, depending on the vulnerable platform. Given the ease and reliability of exploits, this hole is worth plugging as soon as possible. It's likely only a matter of time until attackers begin actively targeting it. Also Rapid 7 - Patching CVE-2017-7494 in Samba: Itβs the Circle of Life And more SambaCry: The Linux Sequel to WannaCry . Need-to-Know Facts CVE-2017-7494 has a CVSS Score of 7.5 (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H)3. Threat Scope A shodan.io query of "port:445 !os:windows" shows approximately one million non-Windows hosts that have tcp/445 open to the Internet, more than half of which exist in the United Arab Emirates (36%) and the U.S. (16%). While many of these may be running patched versions, have SELinux protections, or otherwise don't match the necessary criteria for running the exploit, the possible attack surface for this vulnerability is large. P.S. The commit fix in the SAMBA github project appear to be commit 02a76d86db0cbe79fcaf1a500630e24d961fa149 | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/367138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
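A consolidated, hedged sketch of the checks and mitigations quoted above. Debian-style paths and service names are assumed; adjust them for your distribution, and prefer the vendor patch over the workaround where available.

```bash
smbd --version                     # is the installed Samba in the affected 3.5.0+ range?
testparm -s 2>/dev/null | grep -Ei 'writ|guest|nt pipe support'   # writable/guest shares?

# Interim workaround from the advisory: add the following line to the [global]
# section of /etc/samba/smb.conf by hand (appending blindly to the end of the
# file would land it in the wrong section), then restart the service:
#     nt pipe support = no
sudo systemctl restart smbd

# Check a host with the detection script linked above:
nmap --script /path/to/samba-vuln-cve-2017-7494.nse -p 445 target.example.com
```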
367,220 | I have a PKCS12 file containing the full certificate chain and private key. I need to break it up into 3 files for an application. The 3 files I need are as follows (in PEM format): an unecrypted key file a client certificate file a CA certificate file (root and all intermediate) This is a common task I have to perform, so I'm looking for a way to do this without any manual editing of the output. I tried the following: openssl pkcs12 -in <filename.pfx> -nocerts -nodes -out <clientcert.key>openssl pkcs12 -in <filename.pfx> -clcerts -nokeys -out <clientcert.cer>openssl pkcs12 -in <filename.pfx> -cacerts -nokeys -chain -out <cacerts.cer> This works fine, however, the output contains bag attributes, which the application doesn't know how to handle. After some searching I found a suggested solution of passing the results through x509 to strip the bag attributes. openssl x509 -in <clientcert.cer> -out <clientcert.cer> This works, but I run into an issue on the cacert file. The output file only contains one of the 3 certs in the chain. Is there a way to avoid including the bag attributes in the output of the pkcs12 command, or a way to have the x509 command output include all the certificates? Additionally, if running it through x509 is the simplest solution, is there a way to pipe the output from pkcs12 into x509 instead of writing out the file twice? | The solution I finally came to was to pipe it through sed. openssl pkcs12 -in <filename.pfx> -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > <clientcert.key>openssl pkcs12 -in <filename.pfx> -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <clientcert.cer>openssl pkcs12 -in <filename.pfx> -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <cacerts.cer> | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/367220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160045/"
]
} |
367,223 | I have files with column-wise dates and time in "YYYY MM DD HHMM" format plus a variable (temperature) and want to convert them into YYYY DDD format (and keep hour and temperature as is). They look like this but same date appears several times in file: 1980 01 01 0100 3.31982 04 11 0400 2.21985 12 04 0700 1.71995 12 31 1000 2.2 I have created an index file (1980-2017) with the number of days to be added to each date of the first file to get the cumulative day of year DDD (last column). First year looks like this (1980 was a leap year): 1980 01 31 0001980 02 29 0311980 03 31 0601980 04 30 0901980 05 31 1211980 06 30 1521980 07 31 1821980 08 31 2131980 09 30 2441980 10 31 2741980 11 30 3051980 12 31 335 I am trying to compare the two files based on first two columns and if they match to add the fourth column of file2 to third column of file 1 and end up with something like this: 1980 001 0100 3.3 1982 101 0400 2.2 1985 346 0700 1.7 1995 365 1000 2.2 I managed to compare the two columns of the files and add the two columns with awk below: awk -F' ' 'NR==FNR{c[$1$2]++;next};c[$1$2] > 0' junktemp matrix_sample | awk '{print $1, $3+$4}' but this way I lose $4 and $5 (hour and temperature). Is there a way to combine the two awk functions and get $4 and $5 of file1 in the result as well? Any help much appreciated. | The solution I finally came to was to pipe it through sed. openssl pkcs12 -in <filename.pfx> -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > <clientcert.key>openssl pkcs12 -in <filename.pfx> -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <clientcert.cer>openssl pkcs12 -in <filename.pfx> -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <cacerts.cer> | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/367223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232924/"
]
} |
367,224 | I have 2 lists, one containing all 32 bit IP addresses, and the other is a list of ip ranges and other IP addresses. I need to find if each IP within list A exists in any IP range or address in list B. The end result would be to display the addresses from list A that do not exists in list B. This would be easy using diff if the IP ranges were not involved. The list itself contains nearly 10,000 lines so to go through this manually would take forever. | The solution I finally came to was to pipe it through sed. openssl pkcs12 -in <filename.pfx> -nocerts -nodes | sed -ne '/-BEGIN PRIVATE KEY-/,/-END PRIVATE KEY-/p' > <clientcert.key>openssl pkcs12 -in <filename.pfx> -clcerts -nokeys | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <clientcert.cer>openssl pkcs12 -in <filename.pfx> -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <cacerts.cer> | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/367224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207764/"
]
} |
367,261 | I have a folder with folder names looks like: enum_1 enum_118 enum_140 enum_16 enum_178 enum_209 enum_227 enum_246 enum_27 enum_45 enum_63 enum_88enum_10 enum_119 enum_141 enum_160 enum_179 enum_21 enum_228 enum_247 enum_28 enum_46 enum_64 enum_9enum_100 enum_12 enum_142 enum_161 enum_18 enum_210 enum_229 enum_248 enum_29 enum_47 enum_65 enum_90enum_102 enum_120 enum_143 enum_162 enum_180 enum_211 enum_23 enum_249 enum_3 enum_48 enum_66 enum_91enum_103 enum_121 enum_144 enum_163 enum_181 enum_212 enum_230 enum_25 enum_30 enum_49 enum_67 enum_92enum_104 enum_122 enum_145 enum_164 enum_182 enum_213 enum_231 enum_250 enum_31 enum_5 enum_68 enum_93enum_105 enum_123 enum_146 enum_165 enum_183 enum_214 enum_232 enum_251 enum_32 enum_50 enum_69 enum_94enum_106 enum_124 enum_147 enum_166 enum_184 enum_215 enum_233 enum_252 enum_33 enum_51 enum_7 enum_95enum_107 enum_125 enum_149 enum_167 enum_185 enum_216 enum_234 enum_253 enum_34 enum_52 enum_70 enum_96enum_108 enum_126 enum_15 enum_168 enum_186 enum_217 enum_235 enum_254 enum_35 enum_53 enum_71 enum_98enum_109 enum_127 enum_150 enum_169 enum_187 enum_218 enum_236 enum_255 enum_36 enum_54 enum_72 enum_99 I would like to rename all of them so that, they look like enum_00001 enum_00118 ... How should I achieve it? Thank you. | Use printf to format the numerical part of the name properly in a loop: for name in enum_*; do mv -i -- "$name" "$( printf 'enum_%05d' "${name#*_}" )"done The ${name#*_} will expand to the numerical part of the original name, i.e. 73 for enum_73 (it removes everything up to and including the first _ in the name). The enum_%05d formatting string will format this integer so that it becomes a zero-filled five-digit number prefixed by enum_ , i.e. enum_00073 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
367,294 | How do I change the default session I get when I log in? I'm on Debian jessie. I tried changing settings on gdm3, tried installing lightdm and following this but it's just not working. For more specificity, I'm trying to default to gnome-classic instead of gnome. I want to turn on the computer, log in as any arbitrary user, and see gnome-classic, not gnome3 (preferably I'd remove the gnome3 default session, if there's a way to do that). | On Debian, you should set the x-session-manager default command to choose your default session manager: # update-alternatives --config x-session-manager There, you can select the session manager you want GDM3 to use by default. If gnome-session-classic does not appear in the listing, try creating the link on your own. Something like the following: # update-alternatives --install /usr/bin/x-session-manager x-session-manager /usr/bin/gnome-session-classic 60 Then you should be able to select gnome-classic with update-alternatives --config x-session-manager . To customize the session managers listed by GDM, I think the only way is to go to /usr/share/xsessions and create/remove Desktop Entry files there. The format is easy to understand, but in case you need help, you can consult the Desktop Entry specification or the GNOME documentation about Desktop Entry files . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116911/"
]
} |
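For reference, a sketch of the kind of Desktop Entry file the answer above mentions, e.g. /usr/share/xsessions/gnome-classic.desktop; the Exec line is an assumption to verify against your GNOME version:

[Desktop Entry]
Name=GNOME Classic
Comment=This session logs you into GNOME Classic
Exec=gnome-session --session=gnome-classic
Type=Application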
367,295 | I have csv file with 100 rows and only one column. I need to add 100 rows of commas at the end of it. How do I do it? | On Debian, you should set the x-session-manager default command to choose your default session manager: # update-alternatives --config x-session-manager There, you can select the session manager you want GDM3 to use by default. If gnome-session-classic does not appear in the listing, try creating the link on your own. Something like the following: # update-alternatives --install /usr/bin/x-session-manager x-session-manager /usr/bin/gnome-session-classic 60 Then you should be able to select gnome-classic with update-alternatives --config x-session-manager . To customize the session managers listed by GDM, I think the only way is to go to /usr/share/xsessions and create/remove Desktop Entry files there. The format is easy to understand, but in case you need help, you can consult the Desktop Entry specification or the GNOME documentation about Desktop Entry files . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233109/"
]
} |
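A minimal shell sketch for 367,295 above, assuming the file is called data.csv (placeholder name) and each appended row should consist of a single comma — adjust the commas per line to the number of columns you actually need:

# append 100 lines, each containing one comma
yes , | head -n 100 >> data.csv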
367,321 | For a series of targets (IPs), Id like to determine which SMB shares my account has no access to, which it has read access to, and which it has read/write access to. Currently I am using smbclient. The command I run first is smbclient -L [targetIP] -U [user] -p 445 This gives me a list of shares. For example; Sharename Type Comment --------- ---- ------- ADMIN$ Disk Remote Admin C$ Disk Default share IPC$ IPC Remote IPC print$ Disk Printer Drivers MySecrets Disk I then can connect to a file share with this command smbclient //[target]/[name_of_share_from_list] -U [user] -p 445 Which results in an SMB prompt. From the prompt I type ls and if I see files I know I have read access. I'm guessing I have to push a file to see if I have write access. This is tedious. How do I automate this such that for the given list of targets, I get a list of all shares, and the level of access my account has to them? | You had much of the work already in place. Reading the man page for smbclient would have given you the -c <command> parameter, which can be used to provide one or more commands directly rather than interactively. #!/bin/bashusername="DOMAIN\\USER" # Double backslashpassword="PASSWORD" # For demonstration purposes onlyhostname="TARGET_HOST" # SMB hostname of targetcd "${TMPDIR:-/tmp}"touch tmp_$$.tmp # Required locally to copy to targetsmbclient -L "$hostname" -g -A <( echo "username=$username"; echo "password=$password" ) 2>/dev/null | awk -F'|' '$1 == "Disk" {print $2}' | while IFS= read -r share do echo "Checking root of share '$share'" if smbclient "//$hostname/$share/" "$password" -U "$username" -c "dir" >/dev/null 2>&1 then status=READ # Try uprating to read/write if smbclient "//$hostname/$share/" "$password" -U "$username" -c "put tmp_$$.tmp ; rm tmp_$$.tmp" >/dev/null 2>&1 then status=WRITE fi else status=NONE fi case "$status" in READ) echo "Well, $username has read access" ;; WRITE) echo "Yes, $username has write access" ;; *) echo "No, $username has no access" ;; esac donerm -f tmp_$$.tmp | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227481/"
]
} |
367,329 | I have an ARM based computer that seems to work alright running Ubuntu Linux (non-GUI). How do I get a list of all the components that are on this computer? For example, the type of Ethernet chip, Wi-Fi chip, Bluetooth, CPU, power management chip, etc., if possible. | The variation between ARM implementations is too high to be covered by the standard tools. Digging down /sys/class you will find all your components, but it's a pain to do so. You can't use find /sys/class -name name to find all the components because of the symbolic links. Nor can you use find -L , because of the circular links. cat /sys/class/*/*/device/*/{,*/,*/*/}name */*/device/*/name|sort -u gives you some impression of the devices, but if you really want to know which devices have drivers actually loaded, you will have to read through your dmesg manually. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232016/"
]
} |
367,348 | Running into some trouble using a variable with a number. This works:
sourceid_2="blah"
echo $sourceid_2
But this doesn't work:
sourceid_2="blah"
i=2
echo $sourceid_$i
Any ideas how to fix this? I tried without the underscore, but that didn't help. My end goal is to do something with the variable in a for i in {2..7} loop, like this:
for i in {2..7}
do
echo $sourceid_$i
done | echo $sourceid_$i expands two separate variables: $sourceid_ and $i . The simplest way to do what you're trying to do is with an indirect reference:
sourceid_2="blah"
i=2
var=sourceid_$i
echo "${!var}"
But as @dave_thompson_085 pointed out, arrays are usually a better way to do this sort of thing:
declare -a sourceid
sourceid[2]="blah"
i=2
echo "${sourceid[i]}"
Note that arrays are a bash extension, and not available in more basic shells. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233153/"
]
} |
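Tying the indirect-reference answer above back to the asker's {2..7} loop, a short sketch:

for i in {2..7}; do
  var="sourceid_$i"
  echo "${!var}"    # indirect expansion: prints the value of sourceid_2 ... sourceid_7
done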
367,405 | Git includes Vi in its Git Bash shell on Windows through MinGW64. I'm not a Vi user, so Git really screws things up for me when it launches Vi. It usually ends in me forcefully closing the terminal, deleting the clone, and then re-cloning (because it wastes so much time trying to fix the mess). I'd like to use Vi in Emacs mode if there is such a thing. Other editors, like Notepad++ and Visual Studio have similar modes (or plugins to provide them), so I'm guessing Vi probably has it too. Does Vi have an Emacs mode of operation? If so, how do I tell Vi to behave like Emacs? Or, how do I tell Git to provide me with an Emacs-like editor? | You can't do it that way. vi is vi and emacs is emacs . If you are not happy with the default editor, do git config --global core.editor path-to-emacs.exe-on-your-machine You can install emacs separately, it doesn't need to be part of your git bash. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
367,452 | Currently, ranger will only open text files with nano, and I want it to open them with vim. As per the Arch Wiki I have tried adding has xdg-open, flag f = xdg-open "$1"ext txt = vim "$@" to rifle.conf, but that didn't work. | You don't have to edit .bashrc , only the ranger config. Here is how: After startup, ranger creates a directory ~/.config/ranger . You want to edit the rifle.conf file. Rifle is the program that chooses what to open files with. To copy the default configuration for rifle to this directory, issue the following command: $ ranger --copy-config=rifle.conf (Alternatively, add all of rangers config files with $ ranger --copy-config=all ) In rifle.conf , find this part. Change the $EDITOR variable on the two lines below: #-------------------------------------------# Misc#-------------------------------------------# Define the "editor" for text files as first actionmime ^text, label editor = $EDITOR -- "$@"mime ^text, label pager = "$PAGER" -- "$@"!mime ^text, label editor, ext xml|json|csv|tex|py|pl|rb|js|sh|php = $EDITOR -- "$@"!mime ^text, label pager, ext xml|json|csv|tex|py|pl|rb|js|sh|php = "$PAGER" -- "$@" Change it to whatever you want to edit text files with, like vim . I use Kakoune, so I change it to kak : #-------------------------------------------# Misc#-------------------------------------------# Define the "editor" for text files as first actionmime ^text, label editor = kak -- "$@"mime ^text, label pager = "$PAGER" -- "$@"!mime ^text, label editor, ext xml|json|csv|tex|py|pl|rb|js|sh|php = kak -- "$@"!mime ^text, label pager, ext xml|json|csv|tex|py|pl|rb|js|sh|php = "$PAGER" -- "$@" This was done on ranger version 1.8.1. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223417/"
]
} |
367,454 | What is the meaning of the name "Devuan"? (It is the "systemd"-less distribution forked from Debian!) Just as I wanted to know what Debian means (and I was disappointed after I found out ^^), I would like to know what "Devuan" means. Is it (like the naming convention of their releases) a planet? (I didn't find one.) It sounds like a dragon, but I don't think so. Or does it have something to do with deviant? | So, from the comment and further search, it seems that it's Debian forked by some people who call themselves Veteran Unix Administrators (see the comment to the question): Debian + VUA = deVUAn https://www.reddit.com/r/linux/comments/2nuhjb/one_of_the_individuals_behind_the_debian_fork/ https://en.wikipedia.org/wiki/Devuan | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161003/"
]
} |
367,547 | I have read that the /dev directory contains device files that points to device drivers. Now my question is, when i do ls -l, i get output something like this what does this 5th and 6th column value represent and its significance? | these are major, minor numbers, more info on which you can find here : http://www.makelinux.net/ldd3/chp-3-sect-2.shtml Traditionally, the major number identifies the driver associated with the device. For example, /dev/null and /dev/zero are both managed by driver 1, whereas virtual consoles and serial terminals are managed by driver 4; similarly, both vcs1 and vcsa1 devices are managed by driver 7. Modern Linux kernels allow multiple drivers to share major numbers, but most devices that you will see are still organized on the one-major-one-driver principle. The minor number is used by the kernel to determine exactly which device is being referred to. Depending on how your driver is written (as we will see below), you can either get a direct pointer to your device from the kernel, or you can use the minor number yourself as an index into a local array of devices. Either way, the kernel itself knows almost nothing about minor numbers beyond the fact that they refer to devices implemented by your driver. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233311/"
]
} |
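A quick way to read those two columns for any device node, as a sketch (with GNU stat; /dev/null is conventionally major 1, minor 3 on Linux):

# %t and %T print the major and minor numbers in hex for character/block special files
stat -c '%n: major 0x%t, minor 0x%T' /dev/null /dev/sda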
367,584 | How is it possible to control the fan speed of multiple consumer NVIDIA GPUs such as Titan and 1080 Ti on a headless node running Linux? | The following is a simple method that does not require scripting, connecting fake monitors, or fiddling and can be executed over SSH to control multiple NVIDIA GPUs' fans. It has been tested on Arch Linux. Create xorg.conf sudo nvidia-xconfig --allow-empty-initial-configuration --enable-all-gpus --cool-bits=7 This will create an /etc/X11/xorg.conf with an entry for each GPU, similar to the manual method. Note: Some distributions (Fedora, CentOS, Manjaro) have additional config files (eg in /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/ ), which override xorg.conf and set AllowNVIDIAGPUScreens . This option is not compatible with this guide. The extra config files should be modified or deleted. The X11 log file shows which config files have been loaded. Alternative: Create xorg.conf manually Identify your cards' PCI IDs: nvidia-xconfig --query-gpu-info Find the PCI BusID fields. Note that these are not the same as the bus IDs reported in the kernel. Alternatively, do sudo startx , open /var/log/Xorg.0.log (or whatever location startX lists in its output under the line "Log file:"), and look for the line NVIDIA(0): Valid display device(s) on GPU-<GPU number> at PCI:<PCI ID> . Edit /etc/X11/xorg.conf Here is an example of xorg.conf for a three-GPU machine: Section "ServerLayout" Identifier "dual" Screen 0 "Screen0" Screen 1 "Screen1" RightOf "Screen0" Screen 1 "Screen2" RightOf "Screen1"EndSectionSection "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:5:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration"EndSectionSection "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:6:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration"EndSectionSection "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BusID "PCI:9:0:0" Option "Coolbits" "7" Option "AllowEmptyInitialConfiguration"EndSectionSection "Screen" Identifier "Screen0" Device "Device0"EndSectionSection "Screen" Identifier "Screen1" Device "Device1"EndSectionSection "Screen" Identifier "Screen2" Device "Device2"EndSection The BusID must match the bus IDs we identified in the previous step. The option AllowEmptyInitialConfiguration allows X to start even if no monitor is connected. The option Coolbits allows fans to be controlled. It can also allow overclocking. Note: Some distributions (Fedora, CentOS, Manjaro) have additional config files (eg in /etc/X11/xorg.conf.d/ or /usr/share/X11/xorg.conf.d/ ), which override xorg.conf and set AllowNVIDIAGPUScreens . This option is not compatible with this guide. The extra config files should be modified or deleted. The X11 log file shows which config files have been loaded. Edit /root/.xinitrc nvidia-settings -q fansnvidia-settings -a [gpu:0]/GPUFanControlState=1 -a [fan:0]/GPUTargetFanSpeed=75nvidia-settings -a [gpu:1]/GPUFanControlState=1 -a [fan:1]/GPUTargetFanSpeed=75nvidia-settings -a [gpu:2]/GPUFanControlState=1 -a [fan:2]/GPUTargetFanSpeed=75 I use .xinitrc to execute nvidia-settings for convenience, although there's probably other ways. The first line will print out every GPU fan in the system. Here, I set the fans to 75%. Launch X sudo startx -- :0 You can execute this command from SSH. 
The output will be: Current version of pixman: 0.34.0 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.(==) Log file: "/var/log/Xorg.0.log", Time: Sat May 27 02:22:08 2017(==) Using config file: "/etc/X11/xorg.conf"(==) Using system config directory "/usr/share/X11/xorg.conf.d" Attribute 'GPUFanControlState' (pushistik:0[gpu:0]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:0]) assigned value 75. Attribute 'GPUFanControlState' (pushistik:0[gpu:1]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:1]) assigned value 75. Attribute 'GPUFanControlState' (pushistik:0[gpu:2]) assigned value 1. Attribute 'GPUTargetFanSpeed' (pushistik:0[fan:2]) assigned value 75. Monitor temperatures and clock speeds nvidia-smi and nvtop can be used to observe temperatures and power draw. Lower temperatures will allow the card to clock higher and increase its power draw. You can use sudo nvidia-smi -pl 150 to limit power draw and keep the cards cool, or use sudo nvidia-smi -pl 300 to let them overclock. My 1080 Ti runs at 1480 MHz if given 150W, and over 1800 MHz if given 300W, but this depends on the workload. You can monitor their clock speed with nvidia-smi -q or more specifically, watch 'nvidia-smi -q | grep -E "Utilization| Graphics|Power Draw"' Returning to automatic fan management. Reboot. I haven't found another way to make the fans automatic. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120349/"
]
} |
367,593 | I'm writing a script which will use the filename of running processes. However, I'm unable to determine the full executable name of some processes. Initially, I decided to query the Name entry in /proc/PID/status (or the second field in /proc/PID/stat ). However, according to the manpage , that field is always truncated to 15 characters, but I need the full name to avoid conflict/confusion. An answer of this question suggests to use /proc/PID/cmdline , but there are problems too. Some programs (e.g. chromium, electron) do stupid/smart things to the value in /proc/PID/cmdline so I can't just split the data there by NULL and directly get the program name as suggested in the manpage - they fill in a lot of things to the original argv[0] field and separate them by space, and I don't think merely splitting by space is a good choice because the path/filename may contain spaces. This is even more complicated when I find out that some scripts (e.g. python scripts) are in the form /usr/bin/python /path/to/script while some are in the form /path/to/script . Though this is much easier to deal with as long as I have that field (without jams as above) and manually check and split. Any ideas how to get the full program name/filename?It doesn't matter if the name contains the full path or not because that can be easily dealt with (as far as I can see now). | /proc/$PID/exe seems to be what you're looking for: ( proc(5) /proc/[pid]/exe Under Linux 2.2 and later, this file is a symbolic link containing the actual pathname of the executed command. This symbolic link can be dereferenced normally; attempting to open it will open the executable. So, simply: $ /bin/cat & readlink /proc/$!/exe/bin/cat It actually follows renames on the executable file: /tmp$ cp /bin/cat . ; ./cat & mv cat dog ; readlink /proc/$!/exe/tmp/dog | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233343/"
]
} |
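Building on the readlink answer above, a small sketch that lists the full executable path of every process the current user is allowed to inspect:

for pid in /proc/[0-9]*; do
  exe=$(readlink "$pid/exe" 2>/dev/null) || continue   # kernel threads and other users' processes have no readable exe link
  printf '%s\t%s\n' "${pid#/proc/}" "$exe"
done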
367,597 | I am connected with SSH to a machine on which I don't have root access. To install something I uploaded libraries from my machine and put them in the ~/lib directory of the remote host. Now, for almost any command I run, I get the error below (example is for ls ) or a Segmentation fault (core dumped) message. ls: relocation error: /lib/libpthread.so.0: symbol __getrlimit, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference The only commands I have been successful running are cd and pwd until now. I can pretty much find files in a directory by using TAB to autocomplete ls , so I can move through directories. uname -r also returns the Segmentation fault (core dumped) message, so I'm not sure what kernel version I'm using. | Since you can log in, nothing major is broken; presumably your shellβs startup scripts add ~/lib to LD_LIBRARY_PATH , and that, along with the bad libraries in ~/lib , is what causes the issues youβre seeing. To fix this, run unset LD_LIBRARY_PATH This will allow you to run rm , vim etc. to remove the troublesome libraries and edit your startup scripts if appropriate. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/367597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164427/"
]
} |
367,600 | This is not a duplicate because this is dealing with a peculiarity I noticed when I use /etc/ld.so.conf . To get the paths that the dynamic linker searches in for libraries, I run the command ldconfig -v | grep -v "^"$'\t' | sed "s/:$//g" . When /etc/ld.so.conf has no paths listed in it. The output from the previous command is /lib/usr/lib I figured that it searches /lib first and then /usr/lib . When I add a new path, such as /usr/local/lib , to /etc/ld.so.conf and then remake /etc/ld.so.cache , the output from ldconfig -v | grep -v "^"$'\t' | sed "s/:$//g" becomes /usr/local/lib/lib/usr/lib I find this strange because if I am correct that the order that the listed directories are searched in is from top to bottom, then additional directories are searched before /lib and /usr/lib . That the additional directories are searched before the trusted directories is not strange on its own, but when /lib is searched before /usr/lib , that is strange because /bin & /sbin are searched after /usr/bin & /usr/sbin in PATH . Even if the paths listed by ldconfig -v | grep -Ev "^"$'\t' | sed "s/:$//g" were searched from bottom to top, it would still be a skewed ordering because additional directories would be searched after the trusted ones while /lib would be searched after /usr/lib . So, what is the order that ld.so searches paths for libraries in? Why is /lib searched before /usr/lib ? If it's not, then why are additional directories searched after /lib ? | The order is documented in the manual of the dynamic linker, which is ld.so . It is: directories from LD_LIBRARY_PATH ; directories from /etc/ld.so.conf ; /lib ; /usr/lib . (I'm simplifying a little, see the manual for the full details.) The order makes sense when you consider that it's the only way to override a library in a default location with a custom library. LD_LIBRARY_PATH is a user setting, it has to come before the others. /etc/ld.so.conf is a local setting, it comes before the operating system default. So as a user, if I want to run a program with a different version of a library, I can run the program with LD_LIBRARY_PATH containing the location of that different library version. And as an administrator, I can put a different version of the library in /usr/local/lib and list /usr/local/lib in /etc/ld.so.conf . Trust doesn't enter into this. Any directory listed on this search path has to be trusted, because any library could end up being loaded from there. In theory, you could list the library names used by all the programs βrequiring more trustβ on your system and make sure that all these libraries are present in the βmost trustedβ directories, and then the βless trustedβ directories wouldn't be used if they came after the more trusted directories on the search path, except for the programs βrequiring less trustβ. But that would be extremely fragile. It would also be pretty pointless: if an attacker can inject a value of LD_LIBRARY_PATH or an element of /etc/ld.so.conf , they surely have a more direct route to executing arbitrary code, such as injecting a value of PATH , of LD_PRELOAD , etc. Trust in the library load path does matter when execution crosses a trust boundary, i.e. when running a program with additional privileges (e.g. setuid/setgid program, or via sudo ). What happens in this case is that LD_LIBRARY_PATH is blanked out. As for /lib vs /usr/lib , it doesn't matter much: they're provided by the same entity (the operating system) and there shouldn't be a library that's present in both. 
It makes sense to list /lib first because it provides a (very tiny) performance advantage: the most-often-used libraries, especially the libraries used by small basic programs (for which load time is a higher fraction of total running time than large, long-running program), are located in /lib . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89807/"
]
} |
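To watch the dynamic linker walk that search order on a real program, glibc's LD_DEBUG facility can be used; a sketch (the exact wording of the output differs between glibc versions):

# print the directories tried while resolving the libraries of /bin/true
LD_DEBUG=libs /bin/true 2>&1 | grep -i 'search path'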
367,636 | I am trying to have my stderr be printed red in terminal. The below script redirects the 2 to a custom 8 upon debug trap. exec 9>&2exec 8> >( while IFS='' read -r line || [ -n "$line" ]; do echo -e "${RED}${line}${COLORRESET}" done)function undirect(){ exec 2>&9; } # reset to original 9 (==2)function redirect(){ exec 2>&8; } # set to custom 8trap "redirect;" DEBUGPROMPT_COMMAND='undirect;' It comes from here , with a clear explanation. Seems to work very fine, however non-newline-terminated input doesn't get printed out at all. Quoting the author gospes again: bash> echo -en "hi\n" 1>&2 hi <-- this is redbash> echo -en "hi" 1>&2bash> echo -en "hi" 1>&2bash> echo -en "hi\n" 1>&2 hihihi <-- this is red I cannot figure out why. The non-newline content seems to end up in some kind of buffer. It either doesn't even reach the file descriptor 8 , or somehow doesn't want to be printed out right away. Where does it go? redirect gets called properly every time. Also, IFS='' means there is no delimiter, so i don't quite understand why the echoing out in 8 happens line-wise. A bugfix would be much appreciated, I linked the quoted answer to this question. This entire solution is, as pointed out by Gilles, not quite perfect. I am having issues with read, stdin, progress bars, can neither su nor source . And frequently major problems like broken pipes and unexpected terminal exits. If anybody got here by my linking, please do consider using https://github.com/sickill/stderred instead, it is much better (no problems yet) (however echo bla >&2 remains non-red and the respective issue is closed ) | You did get the partial lines output, as part of the same line at the point where the newline was printed. The parts of the line are buffered within read , that's what it does : The read utility shall read a single logical line from standard input For example, this prints <foobar> after one second, not <foo><bar> . (echo -n foo ; sleep 1 ; echo bar) | (read x ; echo "<$x>") If you want to catch input in smaller parts than full lines, you'll need to do something else, e.g. with Perl. This would print <foo><bar\n> (with the newline before the last > , since unlike read , Perl doesn't handle the final newline specially. Shouldn't matter with coloring.) (echo -n foo ; sleep 1 ; echo bar) | perl -e '$|=1; while(sysread STDIN,$a,9999) { print "<$a>"}' If you have the control codes for colors ( RED and COLORRESET ) exported in the environment, you can use them from the Perl script as here: perl -e '$|=1; while(sysread STDIN,$a,9999) {print "$ENV{RED}$a$ENV{COLORRESET}"}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103120/"
]
} |
367,749 | I want to find duplicate files within a directory and then delete all but one, to reclaim space. How do I achieve this using a shell script? For example:
pwd
folder
Files in it are: log.bkp, log, extract.bkp, extract
I need to compare log.bkp with all the other files and, if a duplicate file is found (by its content), I need to delete it. Similarly, file 'log' has to be checked against all other files that follow, and so on. So far I have written this, but it's not giving the desired result:
#!/usr/bin/env ksh
count=`ls -ltrh /folder | grep '^-'|wc -l`
for i in `/folder/*`
do
 for (( j=i+1; j<=count; j++ ))
 do
  echo "Current two files are $i and $j"
  sdiff -s $i $j
  if [ `echo $?` -eq 0 ]
  then
   echo "Contents of $i and $j are same"
  fi
 done
done | If you're happy to simply use a command line tool, and not have to create a shell script, the fdupes program is available on most distros to do this. There's also the GUI-based fslint tool that has the same functionality. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193909/"
]
} |
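A typical fdupes invocation for the layout in the question above; treat the flags as a sketch and confirm them against your installed version before letting it delete anything:

fdupes -r /folder      # list the duplicate sets first
fdupes -rdN /folder    # -d deletes duplicates, -N keeps the first file of each set without prompting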
367,840 | How do I start the document viewer Zathura maximised? Is there any way to get my window manager xfce to remember the last open window's size? The default window size is only about 1/8th of my screen. | You can set height and width to a large value, so that it will cover your entire screen. In my case the following did the trick: set window-height 3000 set window-width 3000 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
367,849 | $ echo 123 | cat 123 is doing what I expected, both commands run inside the same shell. But when I connect them with the >( ... ) expression, which connects the output of one command in the shell to a second in a subshell, I get this: $ echo 123 >(cat)123 /dev/fd/63 this is also true with other values: $ echo 101 >(cat)101 /dev/fd/63$ echo $BASHPID >(cat)3252 /dev/fd/63 I thought command1 >(command2) is the same as command1 | command2 , but in command1 >(command2) , each command is inside a different shell, therefore they should have the same output. Where am I wrong? | The process substitution >(thing) will be replaced by a file name. This file name corresponds to a file that is connected to the standard input of the thing inside the substitution. The following would be a better example of its use: $ sort -o >(cat -n >/tmp/out) ~/.profile This would sort the file ~/.profile and send the output to cat -n which would enumerate the lines and store the result in /tmp/out . So, to answer you question: You get that output because echo gets the two arguments 123 and /dev/fd/63 . /dev/fd/63 is the file connected to the standard input of the cat process in the process substitution. Modifying your example code slightly: $ echo 101 > >(cat) This would produce just 101 on standard output (the output of echo would be redirected to the file that serves as the input to cat , and cat would produce the contents of that file on standard output). Also note that in the cmd1 | cmd2 pipeline, cmd2 may not at all be running in the same shell as cmd1 (depending on the shell implementation you are using). ksh93 works the way you describe (same shell), while bash creates a subshell for cmd2 (unless its lastpipe shell option is set and job control is not active). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
367,866 | I am writing a shell script to install all my required applications on my Ubuntu PC in one shot (while I can take a stroll or do something else). For most applications adding -y to the end of the apt-get install statement, has worked well to avoid the need for any user involvement. My script looks something like this: #!/bin/bashadd-apt-repository ppa:webupd8team/sublime-text-3 -yapt-get update -yapt-get upgrade -yapt-get install synaptic -yapt-get install wireshark -y Though I no longer have to worry about Do you want to continue? [Y/n] or Press [ENTER] to continue or ctrl-c to cancel adding it , the problem is with wireshark , which requires a response to an interactive prompt as shown below: How can I avoid this mandatory intervention? | Configure the debconf database: echo "wireshark-common wireshark-common/install-setuid boolean true" | sudo debconf-set-selections Then, install Wireshark : sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark You might also want to suppress the output of apt-get . In that case: sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark > /dev/null | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138465/"
]
} |
367,867 | I'm interested in who builds the Debian main packages for distribution. I'm aware that packages need to be reproducably buildable and I'm not asking about any specific individuals but the process in general (e.g. how "trust" would be involved here and how decentralized it is). At https://lwn.net/Articles/676799/ it says: More generally, Mozilla trusts the Debian packagers to use their best judgment to achieve the same quality as the official Firefox binaries. At https://wiki.debian.org/Packaging it says: Debian packages are maintained by a community of Debian Developers and volunteers. I'm new to Debian so please edit this question if that's needed. | Configure the debconf database: echo "wireshark-common wireshark-common/install-setuid boolean true" | sudo debconf-set-selections Then, install Wireshark : sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark You might also want to suppress the output of apt-get . In that case: sudo DEBIAN_FRONTEND=noninteractive apt-get -y install wireshark > /dev/null | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233262/"
]
} |
367,872 | sudo apt-get update installs all updates easily in Ubuntu. Is sudo dnf update similar to this in Fedora 25? | Like apt-get update (or apt update ), dnf check-update updates the local repository cache. In Debian/Ubuntu, the (general) equivalent of dnf update is a combination of apt update , apt upgrade and apt autoremove . There is a nice comparison between the package management tools apt, yum, dnf and pkg. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176227/"
]
} |
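Spelled out, the rough Fedora equivalents of that Debian/Ubuntu sequence are the following (an approximate mapping — dnf refreshes its metadata automatically when needed):

sudo dnf check-update    # roughly apt update: refresh metadata and list pending updates
sudo dnf upgrade         # roughly apt upgrade: apply the updates
sudo dnf autoremove      # remove dependencies that are no longer needed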
367,890 | I need to get the device names of all connected USB disks (ie sdd ). I have 3 USB disks plugged in, and 2 SATA disks: $ find /sys/devices/ -name block /sys/devices/pci0000:00/0000:00:14.0/usb3/3-7/3-7:1.0/host5/target5:0:0/5:0:0:0/block/sys/devices/pci0000:00/0000:00:14.0/usb4/4-2/4-2:1.0/host6/target6:0:0/6:0:0:0/block/sys/devices/pci0000:00/0000:00:14.0/usb4/4-5/4-5:1.0/host4/target4:0:0/4:0:0:0/block/sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sys/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block I want to ignore the SATA disks, but I need to list all the USB disks. In the terminal, I can us ls and it will give me sdd : $ ls /sys/devices/pci0000:00/0000:00:14.0/usb3/3-7/3-7:1.0/host5/target5:0:0/5:0:0:0/blocksdd But I need to use this in a script. I need to iterate over all USB disks, and I don't know the exact path in advance, so I have to use wildcards ( * or ? ): for DISK in $(ls /sys/devices/pci0000:00/0000:00:14.0/usb?/*/*:1.0/host?/target?:0:0/?:0:0:0/block) ; doecho /dev/$DISKdone the above only works if one USB disk is plugged in. If two or more disks are plugend in, I get sdd as well as the /sys path, which I don't want, ie: /dev//sys/devices/pci0000:00/0000:00:14.0/usb3/3-7/3-7:1.0/host5/target5:0:0/5:0:0:0/block:/dev/sdd/dev//sys/devices/pci0000:00/0000:00:14.0/usb4/4-2/4-2:1.0/host6/target6:0:0/6:0:0:0/block:/dev/sde/dev//sys/devices/pci0000:00/0000:00:14.0/usb4/4-5/4-5:1.0/host4/target4:0:0/4:0:0:0/block:/dev/sdc how can I iterate only over sdd sde sdc ? I am looking for a solution not using udev infrastructure, ie /dev/disk/by-path/ | You can do it with lsblk command. lsblk -l -o name,tran gives NAME TRANsda satasda1 sdb usbsdc usbsr0 sata -l stands for "list" format, so it's easier to parse. Otherwise, you would get a tree format like this: NAME TRANsda sataββsda1sdb usbsr0 sata Specifying other flags will give you more information like FSTYPE, LABEL, UUID, MOUNTPOINT and many other, just run lsblk --help to see all options. You may want to use --paths --noheadings --scsi flags to have output printed like this: sata /dev/sdausb /dev/sdbusb /dev/sdcsata /dev/sr0 and then grep over the input to filter out those lines with usb at the beginning of the line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
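Following the grep/awk suggestion at the end of the answer above, a sketch that prints only the USB disk device nodes, ready for the asker's loop:

for disk in $(lsblk --scsi --paths --noheadings --output name,tran | awk '$2 == "usb" { print $1 }'); do
  echo "$disk"
done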
367,893 | If for benchmarking purposes one has to use hdparm or dd directly on the device, I wonder what the correct way would be to do this safely. Let's say the disk in question is /dev/sda : root@igloo:~# ls -l /dev/sd*brw-rw---- 1 root disk 8, 0 May 29 08:23 /dev/sdabrw-rw---- 1 root disk 8, 1 May 29 08:23 /dev/sda1 I really don't want to write to sda under any circumstances. So would it be advised to do chmod o+r /dev/sda* and run dd or hdparm on it as a normal user? | chmod o+r /dev/sda* is quite dangerous, as it allows any program to read your whole disk (including e.g. password hashes in /etc/shadow , if your root partition is on sda )! There are (at least) two ways to do this more safely: Add all users that need to read the disk to the disk group and run chmod g-w /dev/sda* to prevent write access for that group. Change the group of /dev/sda* to a group that only contains the users that need to read the disk, e.g. chgrp my-benchmarkers /dev/sda* and prevent write access for this group with chmod . Please note that the group and permission changes on device nodes in /dev are only temporary until the device in question is disconnected or the computer is rebooted. One problem could be that hdparm needs write access for most of its functionality. You must check if everything you want works with read-only access. EDIT: It looks like hdparm doesn't need write access. It rather needs the CAP_SYS_RAWIO capability to perform most ioctls.You can use setcap cap_sys_rawio+ep /sbin/hdparm to give this capability to hdparm. Please note that this allows anyone who can execute hdparm and has at least read access to a device file to do practically anything hdparm can do on that device, including --write-sector and all other hdparm commands the man page describes as "VERY DANGEROUS", "EXTREMELY DANGEROUS" or "EXCEPTIONALLY DANGEROUS". Wrapper scripts might be a better solution. If not you either have to give write access or write wrapper scripts that can be executed by your users as root using sudo rules. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
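One way to implement the "sudo rules" idea from the answer above is a narrow sudoers entry; the user name, device and flags below are placeholders to adapt:

# /etc/sudoers.d/hdparm-bench  (edit with: visudo -f /etc/sudoers.d/hdparm-bench)
benchuser ALL=(root) NOPASSWD: /sbin/hdparm -tT /dev/sda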
367,917 | I am running the following command on a Raspberry Pi 3 with the latest Debian release: cat /dev/ttyUSB0 | tee -a /media/pi/KINGSTON/klima.out | grep -F $ | tee -a /media/pi/KINGSTON/log The command works fine and does what it should; however, when I delete (manually or by cron) the klima.out file, it is not re-created. The command keeps running, the log file continues to be appended, but the klima.out file doesn't come back (also no buffering). I want to delete it once a week so that it doesn't grow without bound. Any suggestions? | If you want to recover the file blocks, you need to blank the file, not unlink it. This portable way should work with most shells: : > /media/pi/KINGSTON/klima.out Unlinking the file (i.e. rm ) removes the directory entry but doesn't affect the file contents (inode) as long as the file is kept open by readers or writers. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/367917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233584/"
]
} |
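Since the asker wants to shrink klima.out weekly, truncating it from cron avoids the unlink problem entirely; a sketch for the pi user's crontab (crontab -e):

# every Sunday at 03:00, empty the file without removing the directory entry
0 3 * * 0 truncate -s 0 /media/pi/KINGSTON/klima.out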
367,982 | Commands , for instance sed , are programs and programs are codified logic inside a file and these files are somewhere on the hard disk. However when commands are being run, a copy of their files from the hard disk is put into the RAM , where they come to life and can do stuff and are called processes . Processes can make use of other files, read or write into them, and if they do those files are called open files. There is a command to list all open files by all running processes: lsof . OK, so what I wonder about is if the double life of a command, one on the hard disk, the other in the RAM is also true for other kind of files, for instance those who have no logic programmed, but are simply containers for data. My assumption is, that files opened by processes are also loaded into the RAM. I do not know if it is true, it is just an intuition. Please, could someone make sense of it? | However when commands are being run, a copy of their files from the hard disk is put into the RAM, This is wrong (in general). When a program is executed (thru execve(2) ...) the process (running that program) is changing its virtual address space and the kernel is reconfiguring the MMU for that purpose. Read also about virtual memory . Notice that application programs can change their virtual address space using mmap(2) & munmap & mprotect(2) , also used by the dynamic linker (see ld-linux(8) ). See also madvise(2) & posix_fadvise(2) & mlock(2) . Future page faults will be processed by the kernel to load (lazily) pages from the executable file. Read also about thrashing . The kernel maintains a large page cache . Read also about copy-on-write . See also readahead(2) . OK, so what I wonder about is if the double life of a command, one on the hard disk, the other in the RAM is also true for other kind of files, for instance those who have no logic programmed, but are simply containers for data. For system calls like read(2) & write(2) the page cache is also used. If the data to be read is sitting in it, no disk IO will be done. If disk IO is needed, the read data would be very likely put in the page cache.So, in practice, if you run the same command twice, it could happen that no physical I/O is done to the disk on the second time (if you have an old rotating hard disk - not an SSD - you might hear that; or observe carefully your hard disk LED). I recommend reading a book like Operating Systems : Three Easy Pieces (freely downloadable, one PDF file per chapter) which explains all this. See also Linux Ate My RAM and run commands like xosview , top , htop or cat /proc/self/maps or cat /proc/$$/maps (see proc(5) ). PS. I am focusing on Linux, but other OSes also have virtual memory and page cache. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/367982",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229576/"
]
} |
367,998 | If I run this bash command and prefix the statement, so that the variable fruit should exist, but only for the duration of this command: $ fruit=apple echo $fruit$ The result is an empty line. why? To quote a comment from wildcard on this question : parameter expansion is done by the shell, and the "fruit" variable is not a shell variable; it's only an environment variable within the environment of the "echo" command An environment variable is still a variable, so surely this should still be available to the echo command? | The issue is the current shell is expanding the variable too early; it is not set in its context so the echo command doesn't get any argument, i.e. the commands ends up being: $ fruit=apple echo Here is a workaround where the variable doesn't get expanded too early because of the single quotes: $ fruit=apple sh -c 'echo $fruit' Alternatively, you can also use a one line shell script which demonstrates the fruit variable is correctly passed to the executed command: $ cat /tmp/echofecho $fruit$ /tmp/echof$ fruit=apple /tmp/echofapple$ echo $fruit$ A few comments as this question has sparkled some unexpected controversy and discussions: The fact the variable fruit is already exported or not doesn't affect the behavior, what matters is what the variable value is at the precise moment the shell is expanding it. $ export fruit=banana$ fruit=apple echo $fruitbanana The fact the echo command is a builtin one doesn't affect the OP issue. However, there are cases where using builtins or shell functions with this syntax have unexpected side effects, e.g.: $ export fruit=banana$ fruit=apple eval 'echo $fruit'apple$ echo $fruit apple While there is a similarity between the question asked here and that one, this is not exactly the same issue. With that other question, the temporary IFS variable value is not yet available when the shell word split another variable $var while here, the temporary fruit variable value is not yet available when the shell expands the same variable. There is also that other question where the OP is asking about the significance of the syntax used and more precisely is asking "why does this work?". Here the OP is aware of the significance but reports an unexpected behavior and asks about its cause, i.e. "why doesn't this work?". Ok, after reading more closely the poor screenshot posted on the other question, the same situation is indeed described there ( BAZ=jake echo $BAZ ) so yes, after all this is a duplicate | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/367998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106525/"
]
} |
368,011 | When I excute the: yum clean all I get the below Error: Existing lock /var/run/yum.pid: another copy is running as pid 2287 | The issue is the current shell is expanding the variable too early; it is not set in its context so the echo command doesn't get any argument, i.e. the commands ends up being: $ fruit=apple echo Here is a workaround where the variable doesn't get expanded too early because of the single quotes: $ fruit=apple sh -c 'echo $fruit' Alternatively, you can also use a one line shell script which demonstrates the fruit variable is correctly passed to the executed command: $ cat /tmp/echofecho $fruit$ /tmp/echof$ fruit=apple /tmp/echofapple$ echo $fruit$ A few comments as this question has sparkled some unexpected controversy and discussions: The fact the variable fruit is already exported or not doesn't affect the behavior, what matters is what the variable value is at the precise moment the shell is expanding it. $ export fruit=banana$ fruit=apple echo $fruitbanana The fact the echo command is a builtin one doesn't affect the OP issue. However, there are cases where using builtins or shell functions with this syntax have unexpected side effects, e.g.: $ export fruit=banana$ fruit=apple eval 'echo $fruit'apple$ echo $fruit apple While there is a similarity between the question asked here and that one, this is not exactly the same issue. With that other question, the temporary IFS variable value is not yet available when the shell word split another variable $var while here, the temporary fruit variable value is not yet available when the shell expands the same variable. There is also that other question where the OP is asking about the significance of the syntax used and more precisely is asking "why does this work?". Here the OP is aware of the significance but reports an unexpected behavior and asks about its cause, i.e. "why doesn't this work?". Ok, after reading more closely the poor screenshot posted on the other question, the same situation is indeed described there ( BAZ=jake echo $BAZ ) so yes, after all this is a duplicate | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231948/"
]
} |
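The lock message in 368,011 above usually means another copy of yum (possibly a stuck one) still holds /var/run/yum.pid; a minimal way to investigate, using the PID from the error text:

ps -p 2287 -o pid,etime,cmd    # see what is actually holding the lock
sudo kill 2287                 # only if it is safe to stop that process; otherwise just let it finish
sudo rm -f /var/run/yum.pid    # only once the process is confirmed gone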
368,017 | After some period of time, I'm experiencing problems with starting applications, for example, Viber. $ /opt/viber/ViberQSqlDatabasePrivate::removeDatabase: connection 'ConfigureDBConnection' is still in use, all queries will cease to work.Maximum number of clients reached(Viber:1279): Gtk-WARNING **: cannot open display: :0 Skype $ skypeMaximum number of clients reached Gnote $ gnoteMaximum number of clients reached** (gnote:21284): WARNING **: Could not open X displayMaximum number of clients reached(gnote:21284): Gtk-WARNING **: cannot open display: :0 xrestop $ xrestopMaximum number of clients reachedxrestop: Unable to open display! After some research, I've found that it is related to some limit of unix sockets. $ lsof -U +c 15 | wc -l1011$ lsof -U +c 15 | cut -f1 -d' ' | sort | uniq -c | sort -rn | head -3 382 zenity 256 dbus-daemon 212 chrome On some sources people are talking about 256 max X client number limit. It looks like this is proved by the following command output: # lsof -p `pidof X` | tail -n 50lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete.Xorg 1672 root 207u unix 0xffff880141189e00 0t0 3759941 @/tmp/.X11-unix/X0Xorg 1672 root 208u unix 0xffff88001a50a940 0t0 3759944 @/tmp/.X11-unix/X0Xorg 1672 root 209u unix 0xffff88003a0430c0 0t0 3802123 @/tmp/.X11-unix/X0Xorg 1672 root 210u unix 0xffff8801117c3c00 0t0 3817272 @/tmp/.X11-unix/X0Xorg 1672 root 211u unix 0xffff8801225e2580 0t0 4395710 @/tmp/.X11-unix/X0Xorg 1672 root 212u unix 0xffff88015ed3a580 0t0 4425629 @/tmp/.X11-unix/X0Xorg 1672 root 213u unix 0xffff88013095f800 0t0 4427059 @/tmp/.X11-unix/X0Xorg 1672 root 214u unix 0xffff8802e75d9a40 0t0 4427075 @/tmp/.X11-unix/X0Xorg 1672 root 215u unix 0xffff8801225e12c0 0t0 4608310 @/tmp/.X11-unix/X0Xorg 1672 root 216u unix 0xffff88031bc8fbc0 0t0 4608314 @/tmp/.X11-unix/X0Xorg 1672 root 217u unix 0xffff8801309a5dc0 0t0 4608318 @/tmp/.X11-unix/X0Xorg 1672 root 218u unix 0xffff8801309a2940 0t0 4607747 @/tmp/.X11-unix/X0Xorg 1672 root 219u unix 0xffff880130958b40 0t0 4786413 @/tmp/.X11-unix/X0Xorg 1672 root 220u unix 0xffff8800b1382d00 0t0 4787103 @/tmp/.X11-unix/X0Xorg 1672 root 221u unix 0xffff88011f350000 0t0 5001136 @/tmp/.X11-unix/X0Xorg 1672 root 222u unix 0xffff88011f352d00 0t0 5144089 @/tmp/.X11-unix/X0Xorg 1672 root 223u unix 0xffff88011f351a40 0t0 5144417 @/tmp/.X11-unix/X0Xorg 1672 root 224u unix 0xffff88011f357bc0 0t0 5145648 @/tmp/.X11-unix/X0Xorg 1672 root 225u unix 0xffff88014108a940 0t0 5145652 @/tmp/.X11-unix/X0Xorg 1672 root 226u unix 0xffff88001a50c740 0t0 5145655 @/tmp/.X11-unix/X0Xorg 1672 root 227u unix 0xffff88006c7b6cc0 0t0 5161703 @/tmp/.X11-unix/X0Xorg 1672 root 228u unix 0xffff8802e75dddc0 0t0 5225428 @/tmp/.X11-unix/X0Xorg 1672 root 229u unix 0xffff88015ed3cb00 0t0 5228455 @/tmp/.X11-unix/X0Xorg 1672 root 230u unix 0xffff880111b203c0 0t0 5235401 @/tmp/.X11-unix/X0Xorg 1672 root 231u unix 0xffff88013089bfc0 0t0 5259828 @/tmp/.X11-unix/X0Xorg 1672 root 232u unix 0xffff8800b10030c0 0t0 5310616 @/tmp/.X11-unix/X0Xorg 1672 root 233u unix 0xffff88010d461e00 0t0 5349971 @/tmp/.X11-unix/X0Xorg 1672 root 234u unix 0xffff88001a50ddc0 0t0 5530781 @/tmp/.X11-unix/X0Xorg 1672 root 235u unix 0xffff880142e703c0 0t0 5529146 @/tmp/.X11-unix/X0Xorg 1672 root 236u unix 0xffff880142e73c00 0t0 5654363 @/tmp/.X11-unix/X0Xorg 1672 root 237u unix 0xffff88025087f800 0t0 5260838 @/tmp/.X11-unix/X0Xorg 1672 root 238u unix 0xffff880142e712c0 0t0 5814164 @/tmp/.X11-unix/X0Xorg 1672 root 239u unix 
0xffff8802508a21c0 0t0 5917312 @/tmp/.X11-unix/X0Xorg 1672 root 240u unix 0xffff8800b1387080 0t0 5851281 @/tmp/.X11-unix/X0Xorg 1672 root 241u unix 0xffff8802e6854380 0t0 5851284 @/tmp/.X11-unix/X0Xorg 1672 root 242u unix 0xffff88011f3503c0 0t0 5851295 @/tmp/.X11-unix/X0Xorg 1672 root 243u unix 0xffff8801041d8f00 0t0 5917315 @/tmp/.X11-unix/X0Xorg 1672 root 244u unix 0xffff8801041d83c0 0t0 5917322 @/tmp/.X11-unix/X0Xorg 1672 root 245u unix 0xffff88000aeb4ec0 0t0 5917325 @/tmp/.X11-unix/X0Xorg 1672 root 246u unix 0xffff880111b21e00 0t0 5993474 @/tmp/.X11-unix/X0Xorg 1672 root 247u unix 0xffff880143546180 0t0 6115119 @/tmp/.X11-unix/X0Xorg 1672 root 248u unix 0xffff88000aeb30c0 0t0 6120777 @/tmp/.X11-unix/X0Xorg 1672 root 249u unix 0xffff88013089da00 0t0 6119223 @/tmp/.X11-unix/X0Xorg 1672 root 250u unix 0xffff8801309a5280 0t0 6121614 @/tmp/.X11-unix/X0Xorg 1672 root 251u unix 0xffff88000aeb6cc0 0t0 6139354 @/tmp/.X11-unix/X0Xorg 1672 root 252u unix 0xffff88010d460000 0t0 6635385 @/tmp/.X11-unix/X0Xorg 1672 root 253u unix 0xffff88013095b840 0t0 6659213 @/tmp/.X11-unix/X0Xorg 1672 root 254u unix 0xffff88005c96b480 0t0 6661835 @/tmp/.X11-unix/X0Xorg 1672 root 255u unix 0xffff88011f350f00 0t0 6710815 @/tmp/.X11-unix/X0Xorg 1672 root 256u REG 0,16 4096 22306 /sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-LVDS-1/intel_backlight/brightness I can close some application, for example, Chrome and then Viber can be started. I'm wondering if this is normal to have 200+ connections for the top three apps? Or just suggest how to solve the problem, please. Note, I can use my system for months without rebooting (suspend/resume). Linux Mint 17.3 64-bit Cinnamon | I've found how to fix it situationally. Following command made a hint for me $ lsof -U +c 15 | cut -f1 -d' ' | sort | uniq -c | sort -rn | head -3 382 zenity 256 dbus-daemon 212 chrome I've took the top process name and have read little about it. From Wikipedia Zenity is free software and a cross-platform program that allows the execution of GTK+ dialog boxes in command-line and shell scripts. Then I've listed processes and filtered them by zenity key $ ps axwwu | grep -i zenitytom 762 0.0 0.2 390752 27476 ? Sl Jun06 0:01 /usr/bin/zenity --notification --window-icon /usr/share/icons/gnome/32x32/status/mail-unread.png --text You have new mailtom 1239 0.0 0.2 390756 27700 ? Sl Jun06 0:01 /usr/bin/zenity --notification --window-icon /usr/share/icons/gnome/32x32/status/mail-unread.png --text You have new mailtom 1249 0.0 0.2 390760 27752 ? Sl Jun02 0:01 /usr/bin/zenity --notification --window-icon /usr/share/icons/gnome/32x32/status/mail-unread.png --text You have new mail... Aha! 
This is related to mail notification toasts $ pidof zenity | wc -w186$ killall zenity$ pidof zenity | wc -w0$ lsof -U +c 15 | cut -f1 -d' ' | sort | uniq -c | sort -rn | head -3 140 chrome 61 dbus-daemon 37 skypeforlinux# lsof -p `pidof X` | tail -n 10lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete.Xorg 1672 root 58u unix 0xffff8801c05221c0 0t0 9714900 @/tmp/.X11-unix/X0Xorg 1672 root 59u unix 0xffff8801c0527440 0t0 9715809 @/tmp/.X11-unix/X0Xorg 1672 root 62u CHR 13,79 0t0 9540231 /dev/input/event15Xorg 1672 root 69u unix 0xffff88031c155280 0t0 175280 @/tmp/.X11-unix/X0Xorg 1672 root 79u unix 0xffff880063b103c0 0t0 9243076 @/tmp/.X11-unix/X0Xorg 1672 root 90u unix 0xffff880111b22940 0t0 2858278 @/tmp/.X11-unix/X0Xorg 1672 root 96u unix 0xffff88000aeb2d00 0t0 9301134 @/tmp/.X11-unix/X0Xorg 1672 root 113u unix 0xffff880063b14b00 0t0 939782 @/tmp/.X11-unix/X0Xorg 1672 root 153u unix 0xffff880111a47080 0t0 1819503 @/tmp/.X11-unix/X0Xorg 1672 root 256u REG 0,16 4096 22306 /sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-LVDS-1/intel_backlight/brightness# lsof -p `pidof X` | wc -llsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete.524 VoilΓ ! I can start other apps now (until zenity or another buggy app eats all available connections). NOTE. I still will have to sort how to prevent zenity to keep connections | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37713/"
]
} |
368,100 | Where should I put a systemd unit file, e.g. nginx.service for Nginx, on Ubuntu 16.04? | The recommended place is /etc/systemd/system/nginx.service Then issue the command: systemctl enable nginx And finally: systemctl start nginx | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/368100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154659/"
]
} |
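For completeness, a minimal sketch of what /etc/systemd/system/nginx.service could contain if the package does not ship a unit; the binary path and options are assumptions to adapt to your install:

[Unit]
Description=nginx web server
After=network.target

[Service]
ExecStart=/usr/sbin/nginx -g 'daemon off;'
ExecReload=/usr/sbin/nginx -s reload
Restart=on-failure

[Install]
WantedBy=multi-user.target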
368,122 | I'm trying to let my Raspberry Pi (running Debian Jessie) push a notification to Pushbullet when for some reason my HDD is disconnected, using udev rules. Now I've managed to make this work sort of. The problem however is that the script runs 14 times instead of just once and it runs on disconnect AND connect actions, which is not my intention.. I've tried a lot of different configurations of the rules file, such as: ACTION==βremoveβ,KERNEL==βsda1β,SUBSYSTEM==βblockβ,KERNELS==β1-1.2β,SUBSYSTEMS==βusbβ,ATTRS{idProduct}==β10a2β,ATTRS{idVendor}==β1058β,ATTRS{manufacturer}=="Western Digital",RUN+="/home/pi/HDD_removed.sh" and ACTION=="remove",ENV{ID_MODEL}=="Elements_10A2",RUN+="/home/pi/HDD_removed.sh" and others, but nothing works properly.. As a help I printed the outputs of udevadm info and udevadm monitor below (sorry for the large sizes..): $ udevadm info -a -p $(udevadm info -q path -n /dev/sda1) looking at device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1':KERNEL=="sda1"SUBSYSTEM=="block"DRIVER==""ATTR{start}=="2048"ATTR{inflight}==" 0 0"ATTR{ro}=="0"ATTR{partition}=="1"ATTR{stat}==" 545 0 61544 4110 0 0 0 0 0 1320 4110"ATTR{size}=="1953456128"ATTR{alignment_offset}=="0"ATTR{discard_alignment}=="0"looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda':KERNELS=="sda"SUBSYSTEMS=="block"DRIVERS==""ATTRS{badblocks}==""ATTRS{range}=="16"ATTRS{capability}=="50"ATTRS{inflight}==" 0 0"ATTRS{ext_range}=="256"ATTRS{ro}=="0"ATTRS{stat}==" 590 0 62336 4140 0 0 0 0 0 1350 4140"ATTRS{events_poll_msecs}=="-1"ATTRS{events_async}==""ATTRS{removable}=="0"ATTRS{size}=="1953458176"ATTRS{events}==""ATTRS{alignment_offset}=="0"ATTRS{discard_alignment}=="0"looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0':KERNELS=="0:0:0:0"SUBSYSTEMS=="scsi"DRIVERS=="sd"ATTRS{evt_soft_threshold_reached}=="0"ATTRS{evt_mode_parameter_change_reported}=="0"ATTRS{inquiry}==""ATTRS{evt_capacity_change_reported}=="0"ATTRS{vendor}=="WD "ATTRS{timeout}=="30"ATTRS{evt_lun_change_reported}=="0"ATTRS{evt_media_change}=="0"ATTRS{queue_type}=="none"ATTRS{device_busy}=="0"ATTRS{eh_timeout}=="10"ATTRS{model}=="Elements 10A2 "ATTRS{iocounterbits}=="32"ATTRS{queue_depth}=="1"ATTRS{type}=="0"ATTRS{evt_inquiry_change_reported}=="0"ATTRS{max_sectors}=="240"ATTRS{iodone_cnt}=="0x27d"ATTRS{state}=="running"ATTRS{iorequest_cnt}=="0x27d"ATTRS{rev}=="1033"ATTRS{ioerr_cnt}=="0x3"ATTRS{scsi_level}=="7"ATTRS{device_blocked}=="0"looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0':KERNELS=="target0:0:0"SUBSYSTEMS=="scsi"DRIVERS==""looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0':KERNELS=="host0"SUBSYSTEMS=="scsi"DRIVERS==""looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0':KERNELS=="1-1.2:1.0"SUBSYSTEMS=="usb"DRIVERS=="usb-storage"ATTRS{bInterfaceProtocol}=="50"ATTRS{bInterfaceNumber}=="00"ATTRS{bInterfaceSubClass}=="06"ATTRS{bInterfaceClass}=="08"ATTRS{bAlternateSetting}==" 0"ATTRS{authorized}=="1"ATTRS{bNumEndpoints}=="02"ATTRS{supports_autosuspend}=="1"looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2':KERNELS=="1-1.2"SUBSYSTEMS=="usb"DRIVERS=="usb"ATTRS{bDeviceClass}=="00"ATTRS{manufacturer}=="Western Digital"ATTRS{bmAttributes}=="80"ATTRS{bConfigurationValue}=="1"ATTRS{version}==" 
2.10"ATTRS{devnum}=="9"ATTRS{bMaxPower}=="100mA"ATTRS{idProduct}=="10a2"ATTRS{avoid_reset_quirk}=="0"ATTRS{urbnum}=="7168"ATTRS{bDeviceSubClass}=="00"ATTRS{maxchild}=="0"ATTRS{bcdDevice}=="1033"ATTRS{bMaxPacketSize0}=="64"ATTRS{idVendor}=="1058"ATTRS{product}=="Elements 10A2"ATTRS{speed}=="480"ATTRS{removable}=="removable"ATTRS{ltm_capable}=="no"ATTRS{serial}=="575831314541323038393032"ATTRS{bNumConfigurations}=="1"ATTRS{busnum}=="1"ATTRS{authorized}=="1"ATTRS{quirks}=="0x0"ATTRS{configuration}==""ATTRS{devpath}=="1.2"ATTRS{bDeviceProtocol}=="00"ATTRS{bNumInterfaces}==" 1"looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1':KERNELS=="1-1"SUBSYSTEMS=="usb"DRIVERS=="usb"ATTRS{bDeviceClass}=="09"ATTRS{bmAttributes}=="e0"ATTRS{bConfigurationValue}=="1"ATTRS{version}==" 2.00"ATTRS{devnum}=="2"ATTRS{bMaxPower}=="2mA"ATTRS{idProduct}=="9514"ATTRS{avoid_reset_quirk}=="0"ATTRS{urbnum}=="144"ATTRS{bDeviceSubClass}=="00"ATTRS{maxchild}=="5"ATTRS{bcdDevice}=="0200"ATTRS{bMaxPacketSize0}=="64"ATTRS{idVendor}=="0424"ATTRS{speed}=="480"ATTRS{removable}=="unknown"ATTRS{ltm_capable}=="no"ATTRS{bNumConfigurations}=="1"ATTRS{busnum}=="1"ATTRS{authorized}=="1"ATTRS{quirks}=="0x0"ATTRS{configuration}==""ATTRS{devpath}=="1"ATTRS{bDeviceProtocol}=="02"ATTRS{bNumInterfaces}==" 1"looking at parent device '/devices/platform/soc/3f980000.usb/usb1':KERNELS=="usb1"SUBSYSTEMS=="usb"DRIVERS=="usb"ATTRS{bDeviceClass}=="09"ATTRS{manufacturer}=="Linux 4.9.24-v7+ dwc_otg_hcd"ATTRS{bmAttributes}=="e0"ATTRS{bConfigurationValue}=="1"ATTRS{version}==" 2.00"ATTRS{devnum}=="1"ATTRS{bMaxPower}=="0mA"ATTRS{idProduct}=="0002"ATTRS{avoid_reset_quirk}=="0"ATTRS{urbnum}=="25"ATTRS{bDeviceSubClass}=="00"ATTRS{maxchild}=="1"ATTRS{bcdDevice}=="0409"ATTRS{bMaxPacketSize0}=="64"ATTRS{idVendor}=="1d6b"ATTRS{product}=="DWC OTG Controller"ATTRS{speed}=="480"ATTRS{authorized_default}=="1"ATTRS{interface_authorized_default}=="1"ATTRS{removable}=="unknown"ATTRS{ltm_capable}=="no"ATTRS{serial}=="3f980000.usb"ATTRS{bNumConfigurations}=="1"ATTRS{busnum}=="1"ATTRS{authorized}=="1"ATTRS{quirks}=="0x0"ATTRS{configuration}==""ATTRS{devpath}=="0"ATTRS{bDeviceProtocol}=="01"ATTRS{bNumInterfaces}==" 1"looking at parent device '/devices/platform/soc/3f980000.usb':KERNELS=="3f980000.usb"SUBSYSTEMS=="platform"DRIVERS=="dwc_otg"ATTRS{wr_reg_test}=="Time to write GNPTXFSIZ reg 10000000 times: 540 msecs (54 jiffies)"ATTRS{grxfsiz}=="GRXFSIZ = 0x00000306"ATTRS{srpcapable}=="SRPCapable = 0x1"ATTRS{buspower}=="Bus Power = 0x1"ATTRS{bussuspend}=="Bus Suspend = 0x0"ATTRS{hptxfsiz}=="HPTXFSIZ = 0x02000406"ATTRS{hnp}=="HstNegScs = 0x0"ATTRS{mode}=="Mode = 0x1"ATTRS{mode_ch_tim_en}=="Mode Change Ready Timer Enable = 0x0"ATTRS{hsic_connect}=="HSIC Connect = 0x1"ATTRS{gsnpsid}=="GSNPSID = 0x4f54280a"ATTRS{driver_override}=="(null)"ATTRS{hcd_frrem}=="HCD Dump Frame Remaining"ATTRS{gotgctl}=="GOTGCTL = 0x001c0001"ATTRS{gpvndctl}=="GPVNDCTL = 0x00000000"ATTRS{hnpcapable}=="HNPCapable = 0x1"ATTRS{spramdump}=="SPRAM Dump"ATTRS{regoffset}=="0xffffffff"ATTRS{gnptxfsiz}=="GNPTXFSIZ = 0x01000306"ATTRS{guid}=="GUID = 0x2708a000"ATTRS{regdump}=="Register Dump"ATTRS{hprt0}=="HPRT0 = 0x00001405"ATTRS{hcddump}=="HCD Dump"ATTRS{rem_wakeup_pwrdn}==""ATTRS{regvalue}=="invalid offset"ATTRS{gusbcfg}=="GUSBCFG = 0x20001700"ATTRS{fr_interval}=="Frame Interval = 0x1d4b"ATTRS{busconnected}=="Bus Connected = 0x1"ATTRS{remote_wakeup}=="Remote Wakeup Sig = 0 Enabled = 0 LPM Remote Wakeup = 0"ATTRS{devspeed}=="Device Speed = 0x0"ATTRS{rd_reg_test}=="Time to read GNPTXFSIZ reg 
10000000 times: 1500 msecs (150 jiffies)"ATTRS{enumspeed}=="Device Enumeration Speed = 0x1"ATTRS{inv_sel_hsic}=="Invert Select HSIC = 0x0"ATTRS{ggpio}=="GGPIO = 0x00000000"ATTRS{srp}=="SesReqScs = 0x1"looking at parent device '/devices/platform/soc':KERNELS=="soc"SUBSYSTEMS=="platform"DRIVERS==""ATTRS{driver_override}=="(null)"looking at parent device '/devices/platform':KERNELS=="platform"SUBSYSTEMS==""DRIVERS=="" $ udevadm monitor --property KERNEL[50982.358011] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)ACTION=removeDEVNAME=/dev/bsg/0:0:0:0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0MAJOR=251MINOR=0SEQNUM=1095SUBSYSTEM=bsgKERNEL[50982.359131] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)ACTION=removeDEVNAME=/dev/sg0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0MAJOR=21MINOR=0SEQNUM=1096SUBSYSTEM=scsi_genericKERNEL[50982.359731] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0SEQNUM=1097SUBSYSTEM=scsi_deviceKERNEL[50982.361349] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0SEQNUM=1098SUBSYSTEM=scsi_diskKERNEL[50982.367606] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 (block)ACTION=removeDEVNAME=/dev/sda1DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1DEVTYPE=partitionMAJOR=8MINOR=1PARTN=1SEQNUM=1099SUBSYSTEM=blockKERNEL[50982.369279] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda (block)ACTION=removeDEVNAME=/dev/sdaDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sdaDEVTYPE=diskMAJOR=8MINOR=0SEQNUM=1100SUBSYSTEM=blockKERNEL[50982.370139] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0 (scsi)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0DEVTYPE=scsi_deviceMODALIAS=scsi:t-0x00SEQNUM=1101SUBSYSTEM=scsiKERNEL[50982.410910] remove /devices/virtual/bdi/8:0 (bdi)ACTION=removeDEVPATH=/devices/virtual/bdi/8:0SEQNUM=1102SUBSYSTEM=bdiKERNEL[50982.411476] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0 (scsi)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0DEVTYPE=scsi_targetSEQNUM=1103SUBSYSTEM=scsiKERNEL[50982.412387] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0 (scsi_host)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0SEQNUM=1104SUBSYSTEM=scsi_hostKERNEL[50982.414188] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0 (scsi)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0DEVTYPE=scsi_hostSEQNUM=1105SUBSYSTEM=scsiKERNEL[50982.415487] remove 
/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0DEVTYPE=usb_interfaceINTERFACE=8/6/80MODALIAS=usb:v1058p10A2d1033dc00dsc00dp00ic08isc06ip50in00PRODUCT=1058/10a2/1033SEQNUM=1106SUBSYSTEM=usbTYPE=0/0/0KERNEL[50982.419788] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb)ACTION=removeBUSNUM=001DEVNAME=/dev/bus/usb/001/007DEVNUM=007DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2DEVTYPE=usb_deviceMAJOR=189MINOR=6PRODUCT=1058/10a2/1033SEQNUM=1107SUBSYSTEM=usbTYPE=0/0/0UDEV [50982.973557] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0 (bsg)ACTION=removeDEVNAME=/dev/bsg/0:0:0:0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/bsg/0:0:0:0MAJOR=251MINOR=0SEQNUM=1095SUBSYSTEM=bsgUSEC_INITIALIZED=982359004UDEV [50982.999940] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0 (scsi_device)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_device/0:0:0:0SEQNUM=1097SUBSYSTEM=scsi_deviceUSEC_INITIALIZED=982362751UDEV [50983.095057] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1 (block)ACTION=removeDEVLINKS=/dev/disk/by-id/usb-WD_Elements_10A2_575831314541323038393032-0:0-part1 /dev/disk/by-label/Steven /dev/disk/by-path/platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0-part1 /dev/disk/by-uuid/21741F4F6C4915E1DEVNAME=/dev/sda1DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1DEVTYPE=partitionID_BUS=usbID_FS_LABEL=StevenID_FS_LABEL_ENC=StevenID_FS_TYPE=ntfsID_FS_USAGE=filesystemID_FS_UUID=21741F4F6C4915E1ID_FS_UUID_ENC=21741F4F6C4915E1ID_INSTANCE=0:0ID_MODEL=Elements_10A2ID_MODEL_ENC=Elements\x2010A2\x20\x20\x20ID_MODEL_ID=10a2ID_PART_ENTRY_DISK=8:0ID_PART_ENTRY_NUMBER=1ID_PART_ENTRY_OFFSET=2048ID_PART_ENTRY_SCHEME=dosID_PART_ENTRY_SIZE=1953456128ID_PART_ENTRY_TYPE=0x7ID_PART_ENTRY_UUID=00023f15-01ID_PART_TABLE_TYPE=dosID_PART_TABLE_UUID=00023f15ID_PATH=platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0ID_PATH_TAG=platform-3f980000_usb-usb-0_1_2_1_0-scsi-0_0_0_0ID_REVISION=1033ID_SERIAL=WD_Elements_10A2_575831314541323038393032-0:0ID_SERIAL_SHORT=575831314541323038393032ID_TYPE=diskID_USB_DRIVER=usb-storageID_USB_INTERFACES=:080650:ID_USB_INTERFACE_NUM=00ID_VENDOR=WDID_VENDOR_ENC=WD\x20\x20\x20\x20\x20\x20ID_VENDOR_ID=1058MAJOR=8MINOR=1PARTN=1SEQNUM=1099SUBSYSTEM=blockTAGS=:systemd:USEC_INITIALIZED=8036767UDEV [50983.126799] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0 (scsi_disk)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_disk/0:0:0:0SEQNUM=1098SUBSYSTEM=scsi_diskUSEC_INITIALIZED=982364537UDEV [50983.136895] remove /devices/virtual/bdi/8:0 (bdi)ACTION=removeDEVPATH=/devices/virtual/bdi/8:0SEQNUM=1102SUBSYSTEM=bdiUSEC_INITIALIZED=982411342UDEV [50983.138940] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0 (scsi_generic)ACTION=removeDEVNAME=/dev/sg0DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/scsi_generic/sg0MAJOR=21MINOR=0SEQNUM=1096SUBSYSTEM=scsi_genericUSEC_INITIALIZED=982360886KERNEL[50983.194516] remove 
/devices/virtual/bdi/8:1-fuseblk (bdi)ACTION=removeDEVPATH=/devices/virtual/bdi/8:1-fuseblkSEQNUM=1108SUBSYSTEM=bdiUDEV [50983.204265] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0 (scsi_host)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/scsi_host/host0SEQNUM=1104SUBSYSTEM=scsi_hostUSEC_INITIALIZED=982413320UDEV [50983.643690] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sda (block)ACTION=removeDEVLINKS=/dev/disk/by-id/usb-WD_Elements_10A2_575831314541323038393032-0:0 /dev/disk/by-path/platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0DEVNAME=/dev/sdaDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0/block/sdaDEVTYPE=diskID_BUS=usbID_INSTANCE=0:0ID_MODEL=Elements_10A2ID_MODEL_ENC=Elements\x2010A2\x20\x20\x20ID_MODEL_ID=10a2ID_PART_TABLE_TYPE=dosID_PART_TABLE_UUID=00023f15ID_PATH=platform-3f980000.usb-usb-0:1.2:1.0-scsi-0:0:0:0ID_PATH_TAG=platform-3f980000_usb-usb-0_1_2_1_0-scsi-0_0_0_0ID_REVISION=1033ID_SERIAL=WD_Elements_10A2_575831314541323038393032-0:0ID_SERIAL_SHORT=575831314541323038393032ID_TYPE=diskID_USB_DRIVER=usb-storageID_USB_INTERFACES=:080650:ID_USB_INTERFACE_NUM=00ID_VENDOR=WDID_VENDOR_ENC=WD\x20\x20\x20\x20\x20\x20ID_VENDOR_ID=1058MAJOR=8MINOR=0SEQNUM=1100SUBSYSTEM=blockTAGS=:systemd:USEC_INITIALIZED=748036370UDEV [50983.733473] remove /devices/virtual/bdi/8:1-fuseblk (bdi)ACTION=removeDEVPATH=/devices/virtual/bdi/8:1-fuseblkSEQNUM=1108SUBSYSTEM=bdiUSEC_INITIALIZED=3192262UDEV [50984.141379] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0 (scsi)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0/0:0:0:0DEVTYPE=scsi_deviceMODALIAS=scsi:t-0x00SEQNUM=1101SUBSYSTEM=scsiUSEC_INITIALIZED=2371212UDEV [50984.629455] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0 (scsi)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0/target0:0:0DEVTYPE=scsi_targetSEQNUM=1103SUBSYSTEM=scsiUSEC_INITIALIZED=2413053UDEV [50985.087418] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0 (scsi)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/host0DEVTYPE=scsi_hostSEQNUM=1105SUBSYSTEM=scsiUSEC_INITIALIZED=2415484UDEV [50985.618300] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb)ACTION=removeDEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0DEVTYPE=usb_interfaceID_MODEL_FROM_DATABASE=Elements SE Portable (WDBPCK)ID_VENDOR_FROM_DATABASE=Western Digital Technologies, Inc.INTERFACE=8/6/80MODALIAS=usb:v1058p10A2d1033dc00dsc00dp00ic08isc06ip50in00PRODUCT=1058/10a2/1033SEQNUM=1106SUBSYSTEM=usbTYPE=0/0/0USEC_INITIALIZED=5647475UDEV [50986.078354] remove /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb)ACTION=removeBUSNUM=001DEVNAME=/dev/bus/usb/001/007DEVNUM=007DEVPATH=/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2DEVTYPE=usb_deviceID_BUS=usbID_MODEL=Elements_10A2ID_MODEL_ENC=Elements\x2010A2ID_MODEL_FROM_DATABASE=Elements SE Portable (WDBPCK)ID_MODEL_ID=10a2ID_REVISION=1033ID_SERIAL=Western_Digital_Elements_10A2_575831314541323038393032ID_SERIAL_SHORT=575831314541323038393032ID_USB_INTERFACES=:080650:ID_VENDOR=Western_DigitalID_VENDOR_ENC=Western\x20DigitalID_VENDOR_FROM_DATABASE=Western Digital Technologies, 
Inc.ID_VENDOR_ID=1058MAJOR=189MINOR=6PRODUCT=1058/10a2/1033SEQNUM=1107SUBSYSTEM=usbTYPE=0/0/0USEC_INITIALIZED=745644833 | The recommended place is /etc/systemd/system/nginx.service Then issue the command : systemctl enable nginx And finally systemctl start nginx | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/368122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233431/"
]
} |
368,123 | I have an end-entity/server certificate which have an intermediate and root certificate. When I cat on the end-entity certificate, I see only a single BEGIN and END tag. It is the only the end-entity certificate. Is there any way I can view the intermediate and root certificate content. I need only the content of BEGIN and END tag. In Windows I can see the full cert chain from the "Certification Path". Below is the example for the Stack Exchange's certificate. From there I can perform a View Certificate and export them. I can do that for both root and intermediate in Windows. I am looking for this same method in Linux. | From a web site, you can do: openssl s_client -showcerts -verify 5 -connect stackexchange.com:443 < /dev/null That will show the certificate chain and all the certificates the server presented. Now, if I save those two certificates to files, I can use openssl verify : $ openssl verify -show_chain -untrusted dc-sha2.crt se.crt se.crt: OKChain:depth=0: C = US, ST = NY, L = New York, O = "Stack Exchange, Inc.", CN = *.stackexchange.com (untrusted)depth=1: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert SHA2 High Assurance Server CA (untrusted)depth=2: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA The -untrusted option is used to give the intermediate certificate(s); se.crt is the certificate to verify. The depth=2 result came from the system trusted CA store. If you don't have the intermediate certificate(s), you can't perform the verify. That's just how X.509 works. Depending on the certificate, it may contain a URI to get the intermediate from. As an example, openssl x509 -in se.crt -noout -text contains: Authority Information Access: OCSP - URI:http://ocsp.digicert.com CA Issuers - URI:http://cacerts.digicert.com/DigiCertSHA2HighAssuranceServerCA.crt That "CA Issuers" URI points to the intermediate cert (in DER format, so you need to use openssl x509 -inform der -in DigiCertSHA2HighAssuranceServerCA.crt -out DigiCertSHA2HighAssuranceServerCA.pem to convert it for further use by OpenSSL). If you run openssl x509 -in /tmp/DigiCertSHA2HighAssuranceServerCA.pem -noout -issuer_hash you get 244b5494 , which you can look for in the system root CA store at /etc/ssl/certs/244b5494.0 (just append .0 to the name). I don't think there is a nice, easy OpenSSL command to do all that for you. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/368123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121877/"
]
} |
368,155 | I'm not sure about when to use nc , netcat or ncat . Is one a deprecated version of another? Is one only available on one distribution? Are they the same command under different names? In fact I'm a bit confused. My question comes from wanting to do a network speed test between two CentOS 7 servers. I came across several examples using nc and dd but not many using netcat or ncat . Could someone clarify this for me please? | nc and netcat are two names for the same program (typically, one will be a symlink to the other). Though, for plenty of confusion, there are two different implementations of Netcat ("traditional" and "OpenBSD"), and they take different options and have different features. Ncat is the same idea, but from the Nmap project. There is also socat , which is a similar idea. There is also /dev/tcp , an (optional) Bash feature. However, if you're looking to do network speed tests then all of the above are the wrong answer. You're looking for iperf3 ( site 1 or site 2 or code ). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/368155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103808/"
]
} |
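A minimal sketch of the iperf3 test recommended in the answer above, assuming the iperf3 package is installed on both CentOS 7 servers; the address 192.0.2.10 is a placeholder for the listening host:
# on the first server: start iperf3 in server (listener) mode
iperf3 -s
# on the second server: run a 10-second throughput test against it
iperf3 -c 192.0.2.10 -t 10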
368,210 | I want to rsync multiple sources and I wonder the best way to achieve that. e.g. /etc/fstab/home/user/download I thought about 3 solutions : Solution 1 multiple call to rsync rsync -a /etc/fstab bkprsync -a /home/user/download bkp con : harder to have agreggated stat Solution 2 create a tobackup folder that contains symlink, and use -L options sync -aL /home/user/tobackup bkp con : content to backup must not contain symlinks Solution 3 move files into to backup and create symlink in original location rsync -a /home/user/tobackup bkp con : some manual config Which one do you recommend ? Is there a better way ? | You can pass multiple source arguments. rsync -a /etc/fstab /home/user/download bkp This creates bkp/fstab and bkp/download , like the separate commands you gave. It may be desirable to preserve the source structure instead. To do this, use / as the source and use include-exclude rules to specify which files to copy. There are two ways to do this: Explicitly include each file as well as each directory component leading to it, with /*** at the end of directories when you want to copy the whole directory tree: rsync -a \ --include=/etc --include=/etc/fstab \ --include=/home --include=/home/user --include='/home/user/download/***' \ --exclude='*' / bkp Include all top-level directories with /*/ (so that rsync will traverse /etc and /home when looking for files to copy) and second-level directories with /*/*/ (for /home/user ), but strip away directories in which no file gets copied. This is more convenient because you don't have to list parents explicitly. You could even use --prune-empty-dirs --include='*/' instead of counting the number of levels, but this is impractical here as rsync would traverse the whole filesystem to explore directories even though none of the include rules can match anything outside /etc and /home/user/download . rsync -a --prune-empty-dirs \ --include='/*/' --include='/*/*/' \ --include=/etc/fstab \ --include='/home/user/download/***' \ --exclude='*' / bkp | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/368210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32194/"
]
} |
368,211 | Sometimes, I want to test is my regex correct. How can I do reverse regex match from stdin? F.e. I can match a string with provided regex, like: grep "\(foo\)-bar"foobarfoo-barfoo-bar #Match found What I would like to do, is the opposite, something like this: $ grep "This is one string"\(This\) #Will send "This" to stdoutThis?.* #Will send full match Is this somehow possible without much scripting? | You can use - as the "file" to search, which will use standard input as the "haystack" to search for matching "needles" in: $ grep -oE '[aeiou]+' -This is a test < inputi > outputi > outputa > outpute > outputwhaaaat? < inputaaaa > output Use Ctrl - D to send EOF and end the stream. I don't believe, though, that you can do the same to use standard input for the -f switch which reads a list of patterns from a file. However, if you have a lot of patterns to text on one corpus, you can: grep -f needle-patterns haystack.txt where needle-patterns is a plaintext file with one regular expression per line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126666/"
]
} |
368,240 | I am running a timed find command as normal user. What I know is that redirection is to prevent stdout/stderr messages on the terminal. If that's the case, why do different redirections methods take different amounts of time? Is it somehow related to the write speed on the tty or is there any other reason behind it? Could someone point me in right direction in understanding this? $ iduid=1000(user1) gid=1000(user1) groups=1000(user1),1001(user2)$time find /<truncated output>real 0m13.902suser 0m0.197ssys 0m0.448s$ time find / >/dev/null <truncated output>real 0m0.298suser 0m0.068ssys 0m0.206s$time find / 2> /dev/null <truncated output>real 0m13.279suser 0m0.181ssys 0m0.405s$ time find / > /dev/null 2>&1real 0m0.306suser 0m0.109ssys 0m0.174s | When your process ( find ) needs to actually write out the output, that obviously takes a lot longer than when you tell it to discard said output. When you use find / , both the stdout and stderr are sent to your terminal, and it has to write out both of them (i.e., the actual results and all the permission errors and whatnot) When you use time find / >/dev/null you are dropping the standard output of the command, but still printing out all the errors (if you have any). Judging by your results, you have lots of legitimate results and very few errors. When you use time find / 2> /dev/null , the standard output of the command is still being sent to your terminal, but now you're simply dropping the stderr. If you were finding through a filesystem that you did not have permission to read, this would actually be pretty fast. When you use time find / > /dev/null 2>&1 , you are dropping the standard output, and then sending standard error to where standard output is being sent,... i.e., you are dropping both. This will not output anything, and thus will be the fastest of all commands. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155295/"
]
} |
368,246 | I have a very wierd case... If I run a script with /bin/bash, it can't recognize aliases that I set even inside the script. And the most strange thing is $ cat -n test.sh 1 #!/bin/bash 2 alias somecommand='ls -alF' 3 alias 4 somecommand$ ./test.shalias somecommand='ls -alF'./test.sh: line 4: somecommand: command not found ... as shown above, if I run the command "alias" in the script it turns out that bash has taken somecommand into the aliases, but if I run the somecommand itself it will still not be recognized! Everything is right if I use the command "sh" to run the script.. so is it a bug of bash? Or is there something I'm missing? Any help is appreciated! | Simply don't use aliases in scripts. It makes little sense to use a feature designed for interactive use in scripts. Instead, use functions: somecommand () { ls -alF} Functions are much more flexible than aliases. The following would overload the usual ls with a version that always does ls -F (arguments are passed in $@ , including any flags that you use), pretty much as the alias alias ls="ls -F" would do: ls () { command ls -F "$@"} The command here prevents the shell from going into an infinite recursion, which it would otherwise do since the function is also called ls . An alias would never be able to do something like this: select_edit () ( dir=${1:-.} if [ ! -d "$dir" ]; then echo 'Not a directory' >&2 return 1 fi shopt -s dotglob nullglob set -- for name in "$dir"/*; do [ -f "$name" ] && set -- "$@" "$name" done select file in "$@"; do "${EDITOR:-vi}" "$file" break done) This creates the function select_edit which takes directory as an argument and asks the user to pick a file in that directory. The picked file will be opened in an editor for editing. The bash manual contains the statement For almost every purpose, aliases are superseded by shell functions. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221743/"
]
} |
368,293 | I need to create an archive of a directory using tar in a shell script but I am also supposed to exclude hidden files and files whose size is equal to 0 . Also the first command line argument is the location of the archive which is supposed to be created, the second is the name of the archive and the third is the path to the directory whose files are supposed to be archived. I tried sending the arguments like this in my terminal: /bin/bash ss1 /home/user arch /home/user/folder But it is giving me some tar errors.I tried to archive like this: tar -cvf --exclude=.* $1/$2 $3 But it is not correct and I am not sure what the right syntax for this would be, and also how I would exclude empty and hidden files. | Since you tagged this linux I'll assume you have GNU find and GNU tar . If your filenames don't have embedded newlines and you don't want to archive empty directories: find "$3" -type f \! -empty \! -name '.*' | tar cvf "$1/$2" -T - find finds the relevant files, and -T - tells tar to read the list of files to archive from stdin . Refining this, if you want to include empty directories: find "$3" \( -type d -empty \) -o \( -type f \! -empty \! -name '.*' \) | \ tar cvf "$1/$2" -T - And if you also want to handle filenames with embedded newlines: find "$3" \( \( -type d -empty \) -o \( -type f \! -empty \! -name '.*' \) \) -print0 | \ tar cvf "$1/$2" --null -T - | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233420/"
]
} |
368,318 | I have a directory filled with files with names like logXX where XX is a two-character, zero-padded, uppercase hex number such as: log00log01log02...log0Alog0Blog0C...log4Elog4Flog50... Generally there will be fewer than say 20 or 30 files total. The date and time on my particular system is not something that can be relied up on (an embedded system with no reliable NTP or GPS time sources). However the filenames will reliably increment as shown above. I wish to grep through all the files for the single most recent log entry of a certain type, I was hoping to cat the files together such as... cat /tmp/logs/log* | grep 'WARNING 07 -' | tail -n1 However it occurred to me that different versions of bash or sh or zsh etc. might have different ideas about how the * is expanded. The man bash page doesn't say whether or not the expansion of * would be a definitely ascending alphabetical list of matching filenames. It does seem to be ascending every time I've tried it on all the systems I have available to me -- but is it DEFINED behaviour or just implementation specific? In other words can I absolutely rely on cat /tmp/logs/log* to concatenate all my log files together in alphabetical order? | In all shells, globs are sorted by default. They were already by the /etc/glob helper called by Ken Thompson's shell to expand globs in the first version of Unix in the early 70s (and which gave globs their name). For sh , POSIX does require them to be sorted by way of strcoll() , that is using the sorting order in the user's locale, like for ls though some still do it via strcmp() , that is based on byte values only. $Β dash -c 'echo *'Log01B log-0D log00 log01 log02 log0A log0B log0C log4E log4F log50 logβ logβ‘ lΓ³g01$Β bash -c 'echo *'logβ logβ‘ log00 log01 lΓ³g01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50$Β zsh -c 'echo *'logβ logβ‘ log00 log01 lΓ³g01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50$Β lslogβ‘ logβ log00 log01 lΓ³g01 Log01B log02 log0A log0B log0C log-0D log4E log4F log50$Β ls | sortlogβ‘logβ log00log01lΓ³g01Log01Blog02log0Alog0Blog0Clog-0Dlog4Elog4Flog50 You may notice above that for those shells that do sorting based on locale, here on a GNU system with a en_GB.UTF-8 locale, the - in the file names is ignored for sorting (most punctuation characters would). The Γ³ is sorted in a more expected way (at least to British people), and case is ignored (except when it comes to decide ties). However, you'll notice some inconsistencies for logβ logβ‘. That's because the sorting order of β and β‘ is not defined in GNU locales (currently; hopefully it will be fixed some day). They sort the same, so you get random results. Changing the locale will affect the sorting order. You can set the locale to C to get a strcmp() -like sort: $ bash -c 'echo *'logβ logβ‘ log00 log01 lΓ³g01 Log01B log02 log0.2 log0A log0B log0C log-0D log4E log4F log50$ bash -c 'LC_ALL=C; echo *'Log01B log-0D log0.2 log00 log01 log02 log0A log0B log0C log4E log4F log50 logβ logβ‘ lΓ³g01 Note that some locales can cause some confusions even for all-ASCII all-alnum strings. Like Czech ones (on GNU systems at least) where ch is a collating element that sorts after h : $Β LC_ALL=cs_CZ.UTF-8 bash -c 'echo *'log0Ah log0Bh log0Dh log0Ch Or, as pointed out by @ninjalj, even weirder ones in Hungarian locales: $ LC_ALL=hu_HU.UTF-8 bash -c 'echo *'logX LOGx LOGX logZ LOGz LOGZ logY LOGY LOGy In zsh , you can choose the sorting with glob qualifiers . 
For instance: echo *(om) # to sort by modification timeecho *(oL) # to sort by sizeecho *(On) # for a *reverse* sort by nameecho *(o+myfunction) # sort using a user-defined functionecho *(N) # to NOT sortecho *(n) # sort by name, but numerically, and so on. The numeric sort of echo *(n) can also be enabled globally with the numericglobsort option: $ zsh -c 'echo *'logβ logβ‘ log00 log01 lΓ³g01 Log01B log02 log0.2 log0A log0B log0C log-0D log4E log4F log50$ zsh -o numericglobsort -c 'echo *'logβ logβ‘ log00 lΓ³g01 Log01B log0.2 log0A log0B log0C log01 log02 log-0D log4E log4F log50 If you (as I was) are confused by that order in that particular instance (here using my British locale), see here for details. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/368318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
368,368 | When I boot my CentOS box, the httpd service starts automatically. How do I make a custom service that does the same thing? I have a program I use for mining, and I don't want to need to run ./miner every time I boot the machine. | Since you are using CentOS 7.x, create a Unit. vim /usr/lib/systemd/system/miner.service as root and put the following contents:
[Unit]
Description=miner
[Service]
ExecStart=/path/to/miner
[Install]
WantedBy=multi-user.target
You could add ExecStop= and ExecReload= options if there are specific arguments used to close or reload services. After that, you just need to systemctl enable miner.service to make it start on each boot. Related Stuff: Writing basic systemd service files; man: systemd.service - Service unit configuration | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206079/"
]
} |
368,415 | I have several pdf files ( chapter1.pdf , chapter2.pdf , etc.), each one being a chapter of a book. I now how to merge them into a single pdf (I use the command pdfunite from poppler), but since the output file is big, it's difficult to find a chapter without having them indexed in a table of contents. So how to create an embedded table of contents in which each merged chapter is an entry? Note that I do not want to create a page in the output file which contains the list of chapters and their respective page numbers. I want the index/table of contents metadata of an pdf file, that can be browseable in any pdf reader's (or ebook device's) which supports such feature. | Non-destructive version of @bu5hman's answer: #!/bin/bashout_file="combined.pdf"bookmarks_file="/tmp/bookmarks.txt"bookmarks_fmt="BookmarkBeginBookmarkTitle: %sBookmarkLevel: 1BookmarkPageNumber: %d"rm -f "$bookmarks_file" "$out_file"declare -a files=(*.pdf)page_counter=1# Generate bookmarks file.for f in "${files[@]}"; do title="${f%.*}" printf "$bookmarks_fmt" "$title" "$page_counter" >> "$bookmarks_file" num_pages="$(pdftk "$f" dump_data | grep NumberOfPages | awk '{print $2}')" page_counter=$((page_counter + num_pages))done# Combine PDFs and embed the generated bookmarks file.pdftk "${files[@]}" cat output - | \ pdftk - update_info "$bookmarks_file" output "$out_file" It works by: Generating bookmarks.txt . Merging PDFs into combined.pdf . Updating combined.pdf with bookmarks.txt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233964/"
]
} |
368,418 | I'm trying to change the order of lines in a specific pattern. Working with a file with many lines (ex. 99 lines). For every three lines, I would like the second line to be the third line, and the third to be the second line. EXAMPLE. 1- Input: gi_1234My cat is blue.I have a cat.gi_5678My dog is orange.I also have a dog.... 2- Output: gi_1234I have a cat.My cat is blue.gi_5678I also have a dog.My dog is orange.... | Using awk and integer maths: awk 'NR%3 == 1 { print } NR%3 == 2 { delay=$0 } NR%3 == 0 { print; print delay; delay=""} END { if(length(delay) != 0 ) { print delay } }' /path/to/input The modulus operator performs integer division and returns the remainder, so for each line, it will return the sequence 1, 2, 0, 1, 2, 0 [...]. Knowing that, we just save the input on lines where the modulus is 2 for later -- to wit, just after printing the input when it's zero. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233969/"
]
} |
368,463 | I'm trying to mount a USB stick on FreeBSD. The result of camcontrol devlist is: root@machine0:~ # camcontrol devlist<APPLE SSD SM0128G BXW1JA0Q> at scbus0 target 0 lun 0 (ada0,pass0)< USB DISK 1100> at scbus1 target 0 lun 0 (da0,pass1)<APPLE SD Card Reader 3.00> at scbus2 target 0 lun 0 (da1,pass2) I did root@machine0:~ # mount /dev/da0 /mntmount: /dev/da0: Invalid argument Specifying the file system doesn't help either: root@machine0:~ # mount -t fat /dev/da0 /mntmount: /dev/da0: Operation not supported by device Output of gpart show da0 : => 34 15730621 da0 GPT (7.5G) 34 6 - free - (3.0K) 40 409600 1 efi (200M) 409640 2008 - free - (1.0M) 411648 15316992 2 ms-basic-data (7.3G) 15728640 2015 - free - (1.0M) | You have to mount a specific partition, not the whole drive. You can try something like da0x , where x is replaced by the desired partition id. You can look in /dev to find partition id's on da0 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158931/"
]
} |
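To make the answer above concrete: with the GPT layout shown by gpart, the kernel exposes the partitions as /dev/da0p1, /dev/da0p2, and so on. A sketch, assuming the ms-basic-data partition is FAT-formatted (if it is NTFS, a fusefs-ntfs mount would be needed instead):
# list the partition device nodes created for da0
ls /dev/da0*
# mount the second GPT partition read-only to test it
mount -t msdosfs -o ro /dev/da0p2 /mnt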
368,509 | I've a multiple jar directory in which I would like to locate some classes. I found a solution to see if a Class exists with the following command : find -name "*.jar" | xargs -n 1 jar tf | grep 'myClass' The problem is that I can't see in which jar it is located. I'm looking for a solution to display the filename in my terminal. My current output is like that : [user@server lib]$ find -name "*.jar" | xargs -n 1 jar tf | grep 'RollOn' com/ventyx/utils/logutils/rolling/RollOnDemandAppender.class Any suggestions? | With GNU grep : find . -type f -name '*.jar' -exec sh -c ' for file do jar tf "$file" | grep -H --label="$file" myClass done' sh {} + Or use awk for instance: find . -type f -name '*.jar' -exec sh -c ' export FILE for FILE do jar tf "$FILE" | awk '\''/myClass/ { print ENVIRON["FILE"] ": " $0}'\'' done' sh {} + You can also use bsdtar (as jar files are zip files and bsdtar supports them) to do the matching by itself (allowing you to have a more verbose output with files metadata without running the risk of grep matching on that metadata), though you'd still need something like grep to insert the filename: find . -type f -name '*.jar' -exec sh -c ' for file do bsdtar tvf "$file" "*myClass*" | grep -H --label="$file" "^" done' sh {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165684/"
]
} |
368,624 | I want to remove all spaces from a file, except from every line beginning with the same pattern (pattern is "ORGANISM"). Input: Cat; Dog; SquirrelORGANISM Animalus terrusSequence: ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT Output: Cat;Dog;SquirrelORGANISM Animalus terrusSequence:ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT No more spaces in any line except the line starting with the characters "ORGANISM". | sed '/^ORGANISM/!s/ //g' /path/to/input This will remove all spaces on all lines that do not start with ORGANISM . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233969/"
]
} |
368,636 | My question is similar to this other one , excepted that one asks about newly created files. In my Unix box, users alice , bob and tomcat are in the group tomcat . The configuration files of the Tomcat server are owned by the user tomcat, and in the group tomcat. I have changed the permissions of this file to readable and writable by group so that alice and bob can edit the files. However, I have noticed that after editing, the file becomes owned by the last user who edited it. Q: Is it possible to change permissions so that Alice and Bob can edit the files, without changing their ownership? How does editing a file change its ownership anyway? | The resulting user of the file depends on what the editor does. Some editors save the file by truncating it, and writing over the file (without changing the inode). And some editors rename the file to another name ( file to file~ is usual), and create a new file with the name of the original. Modifying the original file keeps the owner the same, creating a new one makes the new file owned by the UID of the creating process. Of the editors I have on Debian, nano and joe , as well nvi and vim (the minimal version in vim-tiny ) seem to overwrite in-place. Though I suppose vim and Emacs are probably configurable in what they do. Stephen comments about atomic updates . The issue with re-creating in-place is that the file is truncated to zero length, then written. Another process could open and read it before all data is written. An atomic update would be done by creating the new version as say file.new , then renaming file.new to file . Leaving a backup file, one could create file.new , link file to file~ and then rename file.new to file . The rename is atomic in that any process that accesses the file by name gets either the old or the new version, not anything in between. Any open file handles will of course point to the file that was held open, giving a consistent view on the file. From the file permissions point of view, saving over the same file (inode) requires write access to the file itself (but not the directory), renaming it and creating a new one requires write access to the directory (but not to the original file). (Renaming and recreating is also incidentally a way of fixing file permissions in case someone creates or modifies a file in a shared directory, but forgets to give group write access to it.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234141/"
]
} |
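A minimal shell sketch of the atomic update with a backup file described in the answer above; file is a placeholder name:
cp file file.new      # work on a copy; the new inode is owned by whoever runs this
vi file.new           # edit the copy
ln -f file file~      # keep the old version reachable as file~
mv file.new file      # atomic rename: readers see either the old or the new content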
368,639 | I'm writing a script to stop an application. Sometimes the application doesn't want to stop. So after a several minute pause, if the application is still running I want to kill it. After a number of failures of the kill command, I tried adding the -9 to force stop it. This doesn't seem to work. Does anyone know how I can get this to function, even if I need to use a different command, I'm open to new things. :-) Following is my command line: ps -ef|grep -v grep|grep <process_name>|awk -F' ' '{print $2}' |xargs kill -9 Thanks in advance. | @muru was correct, but lately I've found that using the '-f' option to pkill is preferred. Matches against the entire process and argument list. Here we have a few servers running Tomcat processes and Logstash (sending data to Elastic). So 'kill -9 java' to stop the Tomcat process also kills the Logstash process. pkill -9 -f 'pattern to match' Example: pkill -9 -f '/opt/tomcat/' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234145/"
]
} |
368,690 | I have two network interfaces, A 1.2.3.4 and B 1.2.3.99 . (in ifconfig ) I run nc -l 1.2.3.99 20101 -v to listen on the interface B. I run nc -v 1.2.3.99 20101 -s 1.2.3.4 -4 because I want to use the interface A . It connects but when I check with wireshark , no packet from A or B , only in lo ... Why it doesn't use the interface with the associated IP ? What should I do to force them to use the associated interface ? Edit: After following the piece of advice of Patrick: ip route add local 1.2.3.99 dev B table mainip route del local 1.2.3.99 dev B table localip route add 1.2.3.99 dev B table local I run nc -l 1.2.3.99 20101 but I get an error when I create the tcp server Ncat: bind to 1.2.3.99:20101: Cannot assign requested address. QUITTING. 17:10:38 alexis:~ $ ip route list table local1.2.3.99 dev B scope link ...17:10:40 alexis:~ $ ip route list table maindefault via 10.133.0.1 dev eth0 local 1.2.3.99 dev B scope host... | When you tell an application to use a specific IP address, the application is using the IP address , not the interface. Some applications do let you use a specific interface, but this is a separate behavior ( SO_BINDTODEVICE ). Since the application is binding to an IP address, and not an interface, the kernel is free to use whatever interface it wants. To determine which interface to use, it uses the routing tables (yes, there are multiple). If you just want a quick way to determine what interface/route the traffic will take, you can use ip route get 1.2.3.99 from 1.2.3.4 , which will output something like: # ip route get 1.2.3.99 from 1.2.3.4local 1.2.3.99 from 1.2.3.4 dev lo cache <local> This shows that the kernel is going to send the traffic over the lo interface. To walk through why, lets start with the command ip rule : # ip rule0: from all lookup local 32766: from all lookup main 32767: from all lookup default This shows all the routing tables the kernel is going to use to find the route for traffic. It starts with the top, and stops on the first match. The from all means that the rule matches any source address. So it's going to consult table local first. We can then look inside this table: # ip route show table localbroadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 1.2.3.0 dev eth0 proto kernel scope link src 1.2.3.4 local 1.2.3.4 dev eth0 proto kernel scope host src 1.2.3.4 broadcast 1.2.3.255 dev eth0 proto kernel scope link src 1.2.3.4 local 1.2.3.99 dev eth1 proto kernel scope host src 1.2.3.99 (yours will likely look different) From this we then look to see if any of the routes match the destination address (1.2.3.99) by looking to see if the destination matches the second field. In the output above, the very last one matches. In this line, the first field is local , which according to man ip-route means: local - the destinations are assigned to this host. The packets are looped back and delivered locally. This means that the traffic is going to flow over the lo interface. As for how to make it use the A / B interface, you have 2 options: 1) The application would need to provide you with an argument where you can specify the interface. There are a dozen flavors of netcat, but the version on my systems does not have such an option. 
socat does though (I personally recommend socat over netcat because of the inconsistency & portability nightmare that is netcat. It's also way more powerful). 2) Create a non- local route that matches before the local one: ip route add local 1.2.3.99 dev B table mainip route del local 1.2.3.99 dev B table localip route add 1.2.3.99 dev B table local In these rules, the first 2 rules move the local route into the main table. Adding the route to the main table has to come first as the host has to have a local route somewhere for the kernel to accept traffic for that address. Having the route in 2 tables is OK. After that we then add a new route to the local table which does not have the local designation, which will result in traffic not going over the lo interface. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179642/"
]
} |
368,748 | I am unable to enter anything at the login screen; it just freezes directly after the page shows. The cursor inside the login form blinks about 10 times, then it stops. I can't move the mouse or use the keyboard. I already entered the secure mode and triggered update, upgrade and dist-upgrade via the root shell, but it made no difference. | We were able to solve it by starting the shell in secure mode and executing the following commands.
apt-get update
apt-get install xserver-xorg-input-all
apt-get install ubuntu-desktop
apt-get install ubuntu-minimal
apt-get install xorg xserver-xorg
apt-get install xserver-xorg-input-evdev   # I think this package was the problem
apt-get install xserver-xorg-video-vmware
reboot | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
368,767 | I have a service (written by myself) running on a Debian (Jessie) server, and the service's own logs happen to indicate that it restarted at a particular time. There is no indication of a segfault or other crash, so I am now trying to figure out if the application somehow silently failed and got respawned by systemd, or whether a user purposely restarted the service via systemctl . The shell history doesn't show such activity, but that is not conclusive because of export HISTCONTROL=ignoreboth and because an SSH session might have just timed out, preventing a previous login's bash history from being written to disk. The server was not rebooted at the time. But I would expect that systemd itself should keep a log indicating when a service was purposely restarted. To my surprise I was unable to find any documentation (e.g. for journalctl ) on how to get such logs. Some other posts (e.g. Where is / why is there no log for normal user systemd services? ) seem to indicate that there should be log messages like this: Jan 15 19:28:08 qbd-x230-suse.site systemd[1]: Starting chatty.service...Jan 15 19:28:08 qbd-x230-suse.site systemd[1]: Started chatty.service. But I don't see such log messages on my system. Is there a way to find out when systemd services were started, stopped or restarted? Edit : It seems the typical problem people might run into is that they run journalctl as a non-privileged user. This is not the case for me, I have been operating as root the whole time. In response to a comment, running grep systemd /var/log/syslog gives me only this: Jun 6 09:28:35 server systemd[22057]: Starting Paths.Jun 6 09:28:35 server systemd[22057]: Reached target Paths.Jun 6 09:28:35 server systemd[22057]: Starting Timers.Jun 6 09:28:35 server systemd[22057]: Reached target Timers.Jun 6 09:28:35 server systemd[22057]: Starting Sockets.Jun 6 09:28:35 server systemd[22057]: Reached target Sockets.Jun 6 09:28:35 server systemd[22057]: Starting Basic System.Jun 6 09:28:35 server systemd[22057]: Reached target Basic System.Jun 6 09:28:35 server systemd[22057]: Starting Default.Jun 6 09:28:35 server systemd[22057]: Reached target Default.Jun 6 09:28:35 server systemd[22057]: Startup finished in 59ms.Jun 6 09:37:08 server systemd[1]: Reexecuting. | If you need to script this, you should look into using the systemctl show command. It is more useful for scripts than trying to parse anything from status . For example, to find when the service last started you can use: $ systemctl show systemd-journald --property=ActiveEnterTimestampActiveEnterTimestamp=Wed 2017-11-08 05:55:17 UTC If you would like to see all the properties available just omit the flag and it will dump them all out. $ systemctl show <service_name> The documentation for these properties can be found here . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194835/"
]
} |
368,817 | I recently installed Android Studio on arch linux with DWM. But the initial dialog window, which prompts for starting a new project is blank. The links in this windows work though. I can start a new project by blindly clicking where the new project button is supposed to be. There is no problem with the new project wizard, but the editor window which loads up is blank as well. However, if I start the X server with android studio as the client, It works correctly. So it's an issue with DWM. What could be the reason? Edit: Intellij has the same problem with dwm. | You'll need to set the _JAVA_AWT_WM_NONREPARENTING variable to 1 through some way to Android Studio. If you're starting dwm via startx , add this to your .xinitrc : export _JAVA_AWT_WM_NONREPARENTING=1 If you're launching Android Studio from a shell, add the same line to your shell's rc file. If you're launching Android Studio from a shortcut, and you're not using startx , then you'll have to add the variable to the WM after the process started . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187585/"
]
} |
368,864 | I'm building a custom kernel based off 4.11 (for Mintx64, if it matters). I've already compiled and installed it to prove that it works. Now I've made a few small changes to a couple of files (in the driver and net subsystems, this is why I need to compile a custom kernel in the first place!) Now I want to build the modified kernel. However when I run fakeroot make -j5 deb-pkg LOCALVERSION=myname KDEB_PKGVERSION=1 The build system appears to start by "clean"-ing a whole load of stuff, so I stopped it quickly. Unfortunately the computer I'm using is not blessed with a good CPU and takes many hours to build from scratch. Therefore I'd rather avoid doing it again if possible! Is it possible to make just an incremental build without everything be "clean"d or is this a requirement of the kernel build system? The output I got was: CHK include/config/kernel.releasemake cleanCLEAN .CLEAN arch/x86/lib... | The make clean is only for the deb-pkg target. Take a look at scripts/package/Makefile : deb-pkg: FORCE $(MAKE) clean $(call cmd,src_tar,$(KDEB_SOURCENAME)) $(MAKE) KBUILD_SRC= +$(call cmd,builddeb)bindeb-pkg: FORCE $(MAKE) KBUILD_SRC= +$(call cmd,builddeb) If you build the bindeb-pkg instead, it won't do a clean. You probably don't need the source packages anyway. I suspect it does a clean because it doesn't want to tar up build artifacts in the source tarball. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234315/"
]
} |
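Applying the answer above to the command from the question, only the target changes; this skips the clean and reuses the existing build artifacts:
# incremental rebuild producing only the binary .deb packages
fakeroot make -j5 bindeb-pkg LOCALVERSION=myname KDEB_PKGVERSION=1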
368,867 | Doing an update on my CentOS 7 box and I noticed that there was a handful of DRPMs being installed. After doing some searches on google, there is no straight up Answer for this question so I thought it would fit here to ask. I am wondering what is a DRPM? How does it differ from a RPM package? | A drpm stands for delta rpm , which is an addition to an existing rpm , and only contains the different files. Source : Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM on an old RPM results in the complete new RPM. It is not necessary to have a copy of the old RPM, because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs. The README file referred to in the documentation can be found in the GitHub repository . You will see that deltarpm is based on bsdiff . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/368867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60089/"
]
} |
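For completeness (this tool is not named in the answer above): the deltarpm package ships an applydeltarpm utility that performs the reconstruction described; the file names below are placeholders:
# rebuild the full RPM from the delta plus the files already installed on disk
applydeltarpm foo-1.1-2.x86_64.drpm foo-1.1-2.x86_64.rpm
# or rebuild it from the delta plus a copy of the old RPM
applydeltarpm -r foo-1.0-1.x86_64.rpm foo-1.1-2.x86_64.drpm foo-1.1-2.x86_64.rpm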
368,880 | I have a directory full of RPM files recently installed (gotten by running yum install --downloadonly prior to the install). I want to know remove all these RPMs to get close to a 'fresh' install for testing reasons. Is there an easy way to uninstall all RPMs listed in the directory at once? I tried this: find . *.rpm | sed "s/.rpm$//g" | xargs sudo yum remove but I get the message "no match for arguments ./" for each rpm in the list, so something is wrong with the command. | A drpm stands for delta rpm , which is an addition to an existing rpm , and only contains the different files. Source : Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM on an old RPM results in the complete new RPM. It is not necessary to have a copy of the old RPM, because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs. The README file referred to in the documentation can be found in the GitHub repository . You will see that deltarpm is based on bsdiff . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/368880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33822/"
]
} |
368,917 | Is it possible to automatically run "source .bashrc" every time when I edit the bashrc file and save it? | One way, as another answer points out, would be to make a function that replaces your editor call to .bashrc with a two-step process that opens your editor on .bashrc sources .bashrc such as: vibashrc() { vi $HOME/.bashrc; source $HOME/.bashrc; } This has some shortcomings: it would require you to remember to type vibashrc every time you wanted the sourcing to happen it would only happen in your current bash window it would attempt to source .bashrc regardless of whether you made any changes to it Another option would be to hook into bash's PROMPT_COMMAND functionality to source .bashrc in any/all bash shells whenever it sees that the .bashrc file has been updated (and just before the next prompt is displayed). You would add the following code to your .bashrc file (or extend any existing PROMPT_COMMAND functionality with it): prompt_command() { # initialize the timestamp, if it isn't already _bashrc_timestamp=${_bashrc_timestamp:-$(stat -c %Y "$HOME/.bashrc")} # if it's been modified, test and load it if [[ $(stat -c %Y "$HOME/.bashrc") -gt $_bashrc_timestamp ]] then # only load it if `-n` succeeds ... if $BASH -n "$HOME/.bashrc" >& /dev/null then source "$HOME/.bashrc" else printf "Error in $HOME/.bashrc; not sourcing it\n" >&2 fi # ... but update the timestamp regardless _bashrc_timestamp=$(stat -c %Y "$HOME/.bashrc") fi}PROMPT_COMMAND='prompt_command' Then, the next time you log in, bash will load this function and prompt hook, and each time it is about to display a prompt, it will check to see if $HOME/.bashrc has been updated. If it has, it will run a quick check for syntax errors (the set -n option), and if the file is clean, source it. It updates the internal timestamp variable regardless of the syntax check, so that it doesn't attempt to load it until the file has been saved/updated again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121891/"
]
} |
368,944 | Recently I noticed we have 3 options to set environment variables: export envVar1=1 setenv envVar2=2 env envVAr3=3 If there are other ways, please enlighten us. When should I prefer one over the other? Please suggest guidelines. As for shell compatibility, which is the most expansive (covers more shell dialects)? I already noticed this answer but I wish to expand the question with env and usage preference guidelines. | export VARIABLE_NAME='some value' is the way to set an environment variable in any POSIX-compliant shell ( sh , dash , bash , ksh , etc.; also zsh). If the variable already has a value, you can use export VARIABLE_NAME to make it an environment variable without changing its value. Pre-POSIX Bourne shells did not support this, which is why you'll see scripts that avoid export VARIABLE_NAME='some value' and use VARIABLE_NAME='some value'; export VARIABLE_NAME instead. But pre-POSIX Bourne shells are extremely rare nowadays. setenv VARIABLE_NAME='some value' is the csh syntax to set an environment variable. setenv does not exist in sh, and csh is extremely rarely used in scripts and has been surpassed by bash for interactive use for the last 20 years (and zsh for even longer), so you can forget about it unless you encounter it. The env command is very rarely useful except in shebang lines . When invoked without arguments, it displays the environment, but export does it better (sorted, and often quoted to disambiguate newlines in values from newlines that separate values). When invoked with arguments, it runs a command with extra environment variables, but the same command without env also works ( VAR=value mycommand runs mycommand with VAR set to value , just like env VAR=value mycommand ). The reason env is useful in shebang line is that it performs PATH lookup, and it happens to not do anything else when invoked with a command name. The env command can be useful to run a command with only a few environment variables with -i , or without parameters to display the environment including variables with invalid names that the shell doesn't import. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234294/"
]
} |
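A small illustration of the forms discussed above (a sketch; myscript.sh and FOO are placeholder names). Note also that in csh the arguments to setenv are space-separated — setenv VARIABLE_NAME 'some value' — rather than joined with =.

    FOO=bar ./myscript.sh               # FOO is set for this one command only
    env FOO=bar ./myscript.sh           # same effect, via env(1)
    export FOO=bar; ./myscript.sh       # FOO persists for the rest of the session
    env -i PATH=/usr/bin ./myscript.sh  # run with an almost empty environment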
368,955 | Same as above. Recently I broke my system and then wiped and reinstalled. I initially managed to get python 3.6 working in Pycharm with a new installed before breaking things again. I didn't do anything special and merely retraced my steps only to find Pycharm only seeing 2.7 and 3.5 instead of 3.6 on my 3rd install. I want to make use of the most current version of Python due to the features it's released. How do I set up environment variables to recognize 3.6.1 for development purposes? I did it a few times on Windows but merely went into Advanced system settings and added a few lines in a window. I installed Python 3.6.1 successfully on Mint; How can I duplicate the above process for Pycharm on Linux? | export VARIABLE_NAME='some value' is the way to set an environment variable in any POSIX-compliant shell ( sh , dash , bash , ksh , etc.; also zsh). If the variable already has a value, you can use export VARIABLE_NAME to make it an environment variable without changing its value. Pre-POSIX Bourne shells did not support this, which is why you'll see scripts that avoid export VARIABLE_NAME='some value' and use VARIABLE_NAME='some value'; export VARIABLE_NAME instead. But pre-POSIX Bourne shells are extremely rare nowadays. setenv VARIABLE_NAME='some value' is the csh syntax to set an environment variable. setenv does not exist in sh, and csh is extremely rarely used in scripts and has been surpassed by bash for interactive use for the last 20 years (and zsh for even longer), so you can forget about it unless you encounter it. The env command is very rarely useful except in shebang lines . When invoked without arguments, it displays the environment, but export does it better (sorted, and often quoted to disambiguate newlines in values from newlines that separate values). When invoked with arguments, it runs a command with extra environment variables, but the same command without env also works ( VAR=value mycommand runs mycommand with VAR set to value , just like env VAR=value mycommand ). The reason env is useful in shebang line is that it performs PATH lookup, and it happens to not do anything else when invoked with a command name. The env command can be useful to run a command with only a few environment variables with -i , or without parameters to display the environment including variables with invalid names that the shell doesn't import. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/368955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222021/"
]
} |
368,960 | I need an if statement to return true if a word contains a specific letter. For example: var="information"if [ $var contain "i" ]; then....else...fi | What about using a switch or case like so: #!/bin/shv="information"case $v in *f*) echo "found \"f\" in ${v}";; *) echo "no match found in ${v}"esacexit Note that if the needle is stored in a variable, it's important to quote it so it's not taken as a pattern: case $haystack in *"$needle"*) echo matchesac Without it, if $needle was * or ? for instance, that would match on any haystack (and non-empty haystack respectively). In any case, $needle doesn't have to be a single character. It will work with any string. In many shells it would even work for any sequence of non-null bytes even if they don't form valid characters, but not all would break apart characters. For instance a 0xc3 byte may not be found that way in the Γ© string encoded in UTF-8 (0xc3 0xa9) in some implementations. Conversely, some shells may find i inside ΞΎ when the locale's encoding is BIG5-HKSCS where ΞΎ is encoded as 0xa3 0x69 (and i is 0x69 like in ASCII). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/368960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108074/"
]
} |
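If the script is bash (or ksh/zsh) rather than plain sh, the test from the question can also be sketched with the [[ keyword; this is not POSIX sh syntax:

    var="information"
    if [[ $var == *"i"* ]]; then
        echo "found i in $var"
    else
        echo "no i in $var"
    fi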
369,097 | On Xubuntu, for a long time I've had an issue where my Left mouse button stops working for some reason. It happens pretty much everyday. Everything else seems to work. The only way I can get my mouse to work again is to logout and login, which requires me to shutdown all my programs. Obviously this is very annoying, I've had this issue for almost a year and I've assumed that an update would fix it but it still happens. Is anyone else aware of this issue and possible fixes? I'm using Xubuntu as my Desktop Environment. I'm currently on Ubuntu 16.04 LTS. Edit: It happened again and I used xev and evtest to see what events are recognised. xev did not respond to Left button clicks but evtest did respond to Left button clicks. Edit (2018/01/22) : Just an update. I still have the problem, but I have a short term fix. When the left mouse button stops working, I use Ctrl+Alt+T to bring up the terminal. I enter xinput in the terminal, which brings up a list of devices. I search for which device is probably the mouse (it has name like generic mouse ) and I find the associated ID number. I then enter the command: xinput disable ID where ID is the ID number of the mouse. This fixes the problem until I shutdown the computer. Also, for more information about the problem, the same mouse works for my Windows 10 installation, so I think the mouse is fine. The same problem also occurs in Kali Linux, except that Kali linux doesn't have xinput installed so I can't use my quick fix. | I have a Dell Inspiron 15 7559. The left click stops working once in a while when I was using Ubuntu 16.04. After I installed Ubuntu 18.04, the left click stops working almost every time after I resume from suspend. The best solution I found is switching to another virtual console (TTY) by Alt + Ctrl + F1 . The mouse works normally after switching back with Alt + Ctrl + F7 . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/369097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232247/"
]
} |
369,136 | There are many ways to show packages installed manually using apt , such as: apt-mark showmanual But sometimes that output is too much. For example if the user manually installed package foo : apt-get install foo ...and foo depended on bar and baz , then apt-mark showmanual would output: barbazfoo How can we list only the top level manually installed packages ( i.e. foo ) without their dependencies ( i.e. not baz , nor bar )? The following code seems to work, but GNU parallel calling apt-rdepends a few hundred times is too slow, (three hours with a 4 core CPU): apt-mark showmanual | tee /tmp/foo | parallel "apt-rdepends -f Depends,PreDepends,Suggests,Recommends {} | tail +2" 2> /dev/null | tr -s ' ' '\n' | grep -v '[():]' | sort -Vu | grep -wv -f - /tmp/foo | This could be done using the Python apt API. The packages you see in apt-mark showmanual are exactly the ones in apt.cache.Cache() for which is_installed is true and is_auto_installed is false. But, it's easier to process the dependencies: #! /usr/bin/env python3from apt import cachemanual = set(pkg for pkg in cache.Cache() if pkg.is_installed and not pkg.is_auto_installed)depends = set(dep_pkg.name for pkg in manual for dep in pkg.installed.get_dependencies('PreDepends', 'Depends', 'Recommends') for dep_pkg in dep)print('\n'.join(pkg.name for pkg in manual if pkg.name not in depends)) Even this lists some packages which I would not expect to see there ( init , grep ?!). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165517/"
]
} |
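A usage sketch for the Python snippet above — the file name manual.py is an assumption, and the script relies on the python3-apt bindings:

    sudo apt-get install python3-apt   # provides the 'apt' Python module
    python3 manual.py | sort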
369,154 | I'm working on creating an Ubuntu variant that has a lot of forensic analysis tools etc. installed. However I can't seem to find out how to disable auto-mounting at all. I want it to NEVER mount anything; I always want to mount things manually. I've done some searching and found this: How can I use gsettings to disable device automount in Ubuntu 16.04? However, if I use: gsettings set org.gnome.desktop.media-handling automount false It still auto-mounts. Also the thread says something about The reason it failed on this occasion seemed to be caused by the lack of environment variables being set, notably $DBUS_SESSION_BUS_ADDRESS. Now I have no idea what this last part means; can anyone explain, or suggest another solution to fully disable auto-mounting of CD/USB/SATA devices? | One way of doing that is to write a udev rule that makes udisks2 ignore any added block devices. This can be done by dropping a file 10-myudisks2.rules in /etc/udev/rules.d with the rule: ACTION=="add|change", SUBSYSTEM=="block", ENV{UDISKS_IGNORE}="1" This is documented in: man 7 udev and man 8 udisks | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233888/"
]
} |
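A note on activating the rule without rebooting (a sketch): depending on the udev version, a reload may be needed before new rule files take effect, and re-triggering applies them to devices that are already present.

    sudo udevadm control --reload-rules           # re-read rules from /etc/udev/rules.d
    sudo udevadm trigger --subsystem-match=block  # re-apply rules to existing block devices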
369,181 | I am trying to print every Nth line out of a file with more than 300,000 records into a new file. This has to happen every Nth record until it reaches the end of the file. | awk 'NR % 5 == 0' input > output This prints every fifth line. To use an environment variable: NUM=5awk -v NUM=$NUM 'NR % NUM == 0' input > output | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/369181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234535/"
]
} |
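For comparison, GNU sed can express the same selection with its step-address extension (a sketch; GNU sed only):

    sed -n '0~5p' input > output   # print lines 5, 10, 15, ...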
369,185 | I want to see the pagetable that kernel manages for one of my processes. In my case PID 4680 is mapped to dhclient . So in order to view the page table I tried the following: sudo cat /proc/4680/pagemap However this command just hangs on my Ubuntu 14.04 without any output. I have tried waiting 2 minutes and then have to kill it. Is there a better way of doing this? | According to the documentation , /proc/PID/pagemap contains one 64-bit value for each virtual page. With 4096-byte pages and a 64-bit virtual address space, there are 2**52 pages. So the full pagemap file will be 2**52 entries of 8 bytes each. That's a really big file. Catting the whole thing is going to take a long time. Not 2 minutes. A really long time. A speed test on my own computer suggests about 21 years. And it's mostly going to be filled with zeros (for all the virtual addresses that aren't mapped in the process). A bunch of \0 's output to a terminal cause no visible effect. It's not hung, it's doing what you asked. It's not a text file, so the entries that aren't zero aren't likely to look good on your terminal either. The right way to use the pagemap file is to know what virtual address you're looking for, seek to it, and read 8 bytes. Or if you want information for a range, read some multiple of 8 bytes. If you want all the nonzero entries, first read /proc/PID/maps to find what ranges are mapped. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/200436/"
]
} |
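A minimal sketch of the seek-and-read approach described above for one virtual address; the PID and the address are placeholders (pick a mapped address from /proc/PID/maps), and on recent kernels reading meaningful PFN bits requires root:

    pid=4680
    vaddr=0x7f0000000000                # an address taken from /proc/$pid/maps
    page_size=$(getconf PAGESIZE)
    entry=$(( vaddr / page_size ))      # pagemap holds one 8-byte entry per page
    dd if=/proc/$pid/pagemap bs=8 skip=$entry count=1 2>/dev/null | od -An -tx8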
369,234 | So far, I have been trying out the Nix package manager by nix-env -i installing packages. Now, I would like to manage that with a ~/configuration.nix , like the example here , so that I can version it in my dotfiles. Is there a way to generate this configuration from my current environment? All of the information that I can find about user or system level configuration is specific to NixOS, and assumes that I have run nixos-generate-config to create the file. This tool is not available from nixpkgs, which makes me think that it is designed only to create a NixOS install, not for general config-file creation. Also, why doesn't the Nix package manager create this file when it is installed? How do Nix (not NixOS) users configure their installed software, such as Vim plugins, without this file? | This file is indeed specific to NixOS and it is created automatically when installing NixOS. That said, there are workarounds. One of them is described in https://nixos.org/nixpkgs/manual/#sec-declarative-package-management in the Nixpkgs manual. (Added in this PR .) Please note that the overlays mechanism came along since then, and according to Chapter 12. Overlays : " Overlays are similar to other methods for customizing Nixpkgs, in particular the packageOverrides attribute described in Section 6.5, βModify packages via packageOverrides β . Indeed, packageOverrides acts as an overlay with only the super argument. It is therefore appropriate for basic use, but overlays are more powerful and easier to distribute. " It seems that there was also a work-in-progress for a user-side declarative configuration but it hasn't been updated in a while: https://github.com/NixOS/nixpkgs/pull/9250 See also the Declarative package management for normal users discussion on the NixOS Discourse. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234574/"
]
} |
369,292 | I am writing some scripts and got stuck with some commands. I would like to put a string (the month) into a specific place in existing text. Example: Left Right How can I put some text between Left and Right? I tried with print but it doesn't work as I want. date +'%b' | awk '{print "Left " $1} {print "Right"}' This one adds a new line, which I don't want. Left JunRight | date "+Left %b Right" You can put the strings in-place within the date command itself. Not verified across OSes, but it is functional in GNU date . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369292",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234627/"
]
} |
369,294 | I want to search for a pattern and I need the line that follows it. On Linux I tried the following, and it worked. sed -n '/pattern/{N;p}' ouputfile.txt But on AIX this does not work and throws the error 0602-404 Function cannot be parsed How can I achieve this on AIX? | You are missing a semicolon after the p before the `}' sed -n '/pattern/{N;p;}' ouputfile.txt You could also string it together as multiple -e commands: sed -n -e '/pattern/{' -e 'N;p' -e '}' ouputfile.txt And the safest and clearest way is to lay it out across lines, since this lets you place in-line comments in the sed code: sed -ne ' # lines matching pattern /pattern/{ N; # grab the next line into the pattern space p; # print the pattern space holding the current+next line }' outputfile.txt (don't forget the ; in between the N / p and # commands) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217285/"
]
} |
369,348 | I have a XML file of little huge size. I have been provided with that and all I need to do is a extract some valuesin between the XML tags. Since I don't have the XML parser utility available in my machines. I am looking for an alternate method. To start with, there is a XML tag <capacity> </capacity > which repeats n number of time in the XML file and in between this XML tags there are many other different tags as well. I have to get each occurrence of <capacity> </capacity> XML tag separately and then parse through that and extract the values under them. <subcolumns><capacity><name>45.90</name><index>0</index><value_type>String</value_type><ignore_case_flag>1</ignore_case_flag><hidden_flag>0</hidden_flag><exclude_from_parse_flag>1</exclude_from_parse_flag></capacity><capacity><name>57.09</name><index>1</index><value_type>String</value_type><ignore_case_flag>1</ignore_case_flag><hidden_flag>0</hidden_flag><exclude_from_parse_flag>1</exclude_from_parse_flag></capacity><capacity><name>55</name><index>2</index><value_type>String</value_type><ignore_case_flag>1</ignore_case_flag><hidden_flag>0</hidden_flag><exclude_from_parse_flag>1</exclude_from_parse_flag></capacity></subcolumns> So the logic which I thought was to find the first occurrence of a <capacity> </capacity> XML tag and print it to a temp file and then delete that first occurrence. <capacity><name>45.90</name><index>0</index><value_type>String</value_type><ignore_case_flag>1</ignore_case_flag><hidden_flag>0</hidden_flag><exclude_from_parse_flag>1</exclude_from_parse_flag></capacity> Henceforth when this is done for the second time the new pair of <capacity> </capacity> XML tag is taken into consideration.So this has to repeat for multiple times until the last <capacity> </capacity> tag is found. And each time this part is extracted the data will be changing and that can be extracted. Now all I want is to select the first occurrence of <capacity> </capacity> XML tag from the master XML file & print it to temp file and delete that part. And this is what I tried and nothing worked for me. sed -n '2,${/<capacity>\(.*\)<\/capacity>/\1/p;q;}' "<input XML file>" >> temp.txt My further idea is to take that temp file for processing and extract the values which I need to under the capacity tags. For which I have already written the logic and it is working fine. | Using XML parsers is the right way for manipulating XML documents. xmlstarlet solution: xmlstarlet sel -t -c '//capacity[1]' -n yourxml > temp.txt && xmlstarlet ed -d '//capacity[1]' yourxml > tmp.xml && mv tmp.xml yourxml cat temp.txt<capacity><name>45.90</name><index>0</index><value_type>String</value_type><ignore_case_flag>1</ignore_case_flag><hidden_flag>0</hidden_flag><exclude_from_parse_flag>1</exclude_from_parse_flag></capacity> xmlstarlet sel -t -c '//capacity[1]' -n yourxml > temp.txt - extracts the first capacity tag declaration and redirects the output to temp.txt xmlstarlet ed -d '//capacity[1]' yourxml > tmp.xml - deletes the first capacity tag from the document (via -d delete action) and redirects the modified document content to temporary file tmp.xml mv tmp.xml yourxml - replace the initial xml document with its modified version | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48806/"
]
} |
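If rewriting the source document on every iteration is undesirable, the same idea can be sketched as a loop over the node index instead (untested; yourxml and temp.txt are the names used above):

    count=$(xmlstarlet sel -t -v 'count(//capacity)' yourxml)
    i=1
    while [ "$i" -le "$count" ]; do
        xmlstarlet sel -t -c "//capacity[$i]" -n yourxml > temp.txt
        # ... process temp.txt here ...
        i=$((i + 1))
    done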
369,382 | I need to know whether a command has succeeded or failed, and unconditionally run some cleanup afterward. Neither of the normal options for executing sequential commands seem to be applicable here: $ mycmd.sh && rm -rf temp_files/ # correct exit status, cleanup fails if mycmd fails$ mycmd.sh ; rm -rf temp_files/ # incorrect exit status, always cleans up$ mycmd.sh || rm -rf temp_files/ # correct exit status, cleanup fails if mycmd succeeds If I was going to do it in a shell script, I'd do something like this: #!/usr/bin/env bashmycmd.shRET=$?rm -rf temp_filesexit $RET Is there a more idiomatic way to accomplish that on the command line than semicolon-chaining all those commands together? | Newlines in a script are almost always equivalent to semicolons: mycmd.sh; ret=$?; rm -rf temp_files; exit $ret In response to the edit: Alternatively, you could also use a trap and a subshell: ( trap 'rm -rf temp_files' EXIT; mycmd.sh ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/369382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49642/"
]
} |
369,413 | I am trying to compare 2 directories ( A / B ) in Linux and DELETE any files from B which DO NOT exist in A. For example, if file '1.jpg' exists in Directory B but does not exist in Directory A then it needs to be deleted from B. I have tried using diff but all the files are essentially different so it doesn't seem to work. (They are thumbnails of different sizes but have the same id's). So, this has to be done by file name only and ignore the actual contents of the file. Can someone please shed some light on how to do this with minimal effort? | rsync can do what you want quickly and easily: rsync --dry-run --verbose --recursive --existing --ignore-existing --delete-after A/ B/ From the help: --existing skip creating new files on receiver --ignore-existing skip updating files that already exist on receiver --delete delete extraneous files from destination dirs Remove the dry-run option after you're satisfied with the proposed results, to actually execute the deletions. The man page has more explicit description of the options and even mentions your use case: --existing, --ignore-non-existing This tells rsync to skip creating files (including directories) that do not exist yet on the destination. If this option is combined with the --ignore-existing option, no files will be updated (which can be useful if all you want to do is to delete extraneous files). --ignore-existing This tells rsync to skip updating files that already exist on the destination (this does not ignore existing directores, or nothing would get done). See also --existing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174887/"
]
} |
369,434 | I am monitoring the history of convergence of a certain problem. The history output is as follows: Time = 24Calculate volume forces from actuator diskTotal thrust = -8.46832Total torque = 1.03471ADisk volume = 0.0632799smoothSolver: Solving for Ux, Initial residual = 0.000447755, Final residual = 2.68745e-05, No Iterations 2smoothSolver: Solving for Uy, Initial residual = 0.0107909, Final residual = 0.000812227, No Iterations 2smoothSolver: Solving for Uz, Initial residual = 0.0103399, Final residual = 0.000786661, No Iterations 2GAMG: Solving for p, Initial residual = 0.123954, Final residual = 0.00958268, No Iterations 6time step continuity errors : sum local = 7.42808e-05, global = -4.25546e-05, cumulative = 0.000413527smoothSolver: Solving for epsilon, Initial residual = 0.00197379, Final residual = 0.000172248, No Iterations 1smoothSolver: Solving for k, Initial residual = 0.000510499, Final residual = 2.78594e-05, No Iterations 2ExecutionTime = 124.63 s ClockTime = 125 sTime = 25Calculate volume forces from actuator diskTotal thrust = -8.49093Total torque = 1.03723ADisk volume = 0.0632799smoothSolver: Solving for Ux, Initial residual = 0.000409002, Final residual = 2.59552e-05, No Iterations 2smoothSolver: Solving for Uy, Initial residual = 0.0103191, Final residual = 0.00077024, No Iterations 2smoothSolver: Solving for Uz, Initial residual = 0.00985658, Final residual = 0.000742227, No Iterations 2GAMG: Solving for p, Initial residual = 0.0390756, Final residual = 0.00247253, No Iterations 7time step continuity errors : sum local = 5.39785e-05, global = 3.40394e-05, cumulative = 0.000447566smoothSolver: Solving for epsilon, Initial residual = 0.00182397, Final residual = 0.000157739, No Iterations 1smoothSolver: Solving for k, Initial residual = 0.000465916, Final residual = 2.75864e-05, No Iterations 2ExecutionTime = 129.45 s ClockTime = 130 sTime = 26Calculate volume forces from actuator diskTotal thrust = -8.51463Total torque = 1.03953ADisk volume = 0.0632799 What I would like to do, is to copy values at a certain time, say in this case at Time=26 , the values of Total thrust = -8.51463Total torque = 1.03953 in the following format: 1.03953 -8.51463. Can someone help me to do that using shell script. | You asked for a shell script, but hopefully awk will do: find_thrust_torque.awk: /^Total thrust =/ {thrust = $4}/^Total torque =/ {torque = $4}/^Time =/ {if (found) exit; if ($3 == time) found=1}END {print torque " " thrust} Test It: $ awk -v time=25 -f find_thrust_torque.awk file11.03723 -8.49093$ awk -v time=26 -f find_thrust_torque.awk file11.03953 -8.51463 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234753/"
]
} |
369,453 | Is there a way to find the maximum depth of a given directory tree? I was thinking about using find with an incrementing maxdepth and comparing the number of directories found, but maybe there is a simpler way? | One way to do it, assuming GNU find : find . -type d -printf '%d\n' | sort -rn | head -1 This is not particularly efficient, but it's certainly much better than trying different -maxdepth s in turn. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2646/"
]
} |
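Where GNU find (and its -printf) is unavailable, a rough portable sketch is to count path components instead; it assumes directory names contain no newlines:

    find . -type d | awk -F/ '{ if (NF > max) max = NF } END { print max - 1 }'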
369,459 | I need to write a bash program that runs commands echoed to a named pipe it reads, but I cannot get it work only when a command is sent. It keeps repeating the last command until a new one is written. That is: Execute ./read_pipe.sh It waits until a command is echoed to pipe and reads it. It executes the command once. <- What doesn't work. It keeps executing it forever. Repeat from step 2. My read_pipe.sh #!/bin/bashpipe="mypipe"if [ ! -p $pipe ]; then echo 'Creating pipe' mkfifo $pipefiwhile truedo if read line <$pipe; then COMMAND=$(cat $pipe) echo "Running $COMMAND ..." # sh -c $COMMAND fidone If I cat "echo 'Hello World'" > mypipe the output is this forever: Running "echo 'Hello World'" ...Running "echo 'Hello World'" ...Running "echo 'Hello World'" ...Running "echo 'Hello World'" ...... How can I run the command once and wait for another echoed command? | One way to do it, assuming GNU find : find . -type d -printf '%d\n' | sort -rn | head -1 This is not particularly efficient, but it's certainly much better than trying different -maxdepth s in turn. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231977/"
]
} |
369,561 | I keep getting this error on apt-get upgrade: Installing unattended-upgrades (0.93.1+nmu1) ...Failed to start unattended-upgrades.service: Unit unattended-upgrades.service failed to load: Invalid argument. See system logs and 'systemctl status unattended-upgrades.service' for details.invoke-rc.d: initscript unattended-upgrades, action "start" failed.β unattended-upgrades.service - Unattended Upgrades Shutdown Loaded: error (Reason: Invalid argument) Active: inactive (dead) Docs: man:unattended-upgrade(8)jun 06 18:29:32 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:29:32 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:29:32 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:32:41 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:32:41 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:32:41 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:32:41 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:33:24 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:33:24 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.jun 06 18:33:24 PRODUCTION systemd[1]: unattended-upgrades.service lacks ExecStart setting. Refusing.dpkg: erro ao processar o pacote unattended-upgrades (--configure): subprocesso script post-installation returned exit status code 6Errors were found while processing: unattended-upgradesE: Sub-process /usr/bin/dpkg returned an error code (1) I don't care about unattended-upgrades, it can be removed. I tried apt-get remove but no luck there: Removing unattended-upgrades (0.93.1+nmu1) ...Failed to stop unattended-upgrades.service: Unit unattended-upgrades.service not loaded.invoke-rc.d: initscript unattended-upgrades, action "stop" failed.dpkg: error processing package unattended-upgrades (--remove): subprocess script pre-removal returned exit status error 5Errors were found while processing: unattended-upgradesE: Sub-process /usr/bin/dpkg returned an error code (1) The messages have been translated as some of them were not in english. I have debian jessie with sid repository configured. uname -a: Linux PRODUCTION 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2 (2017-04-30) x86_64 GNU/Linux cat /etc/debian_version: 9.0 I just want that nasty error gone, I don't care how. either by removing the package or fixing the issue, but I don't seem to be able to remove it, nor am I able to fix it due to lack of knowledge :) Any hint? | systemctl mask unattended-upgrades Explanation: systemd units can be overridden by the adminstrator putting a file with the same name in /etc/systemd/system . This mechanism can also be used to "mask" a service from being activated by socket activation, manual starts, or any other method. Instead of creating a file with the same name, if there is a symbolic link to /dev/null , then the unit is effectively ignored. So you can ab(use) systemctl mask , to replace the contents of the unit with nothing. To avoid the possibility confusion in future, check that you remove the mask once you have removed the package. systemctl unmask unattended-upgrades . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136407/"
]
} |
369,566 | Passing secrets (password) to a program via environmental variable is considered "extremely insecure" according to MySQL docs and as poor choice (from security aspect) across other resources . I would like to know why - what is it that I'm missing? In the mentioned MySQL manual(I'm using this as an example), passing password via -p option in command line is considered as " insecure " and via env var as " extremely insecure ", bold italic font. I'm not an expert but I do know the fundamentals: simple ps command, even issued by unprivileged user reads every program alongside with command parameters while only the same user (and root, of course) may read environment of the process. So, only root and johndoe may read environment of the johndoe - started process, while hacked www-data script reads all via ps . There must be some big deal here that I'm missing - so please explain me what am I missing? My objective is to have a mean of transferring secret from one program to other, generally, non-interactive. | extremely insecure and should not be used. Some versions of ps include an option to display the environment of running processes. On some systems, if you set MYSQL_PWD, your password is exposed to any other user who runs ps. This was explained here ( via ): Background: in the process image argv[] and envp[] are stored in the same way, next to each other. In "classic" UNIXes /usr/bin/ps was typically setgid "kmem" (or similar group), which allowed it to dig around in /dev/kmem to read information about the active processes. This included the ability to read the process arguments AND the environment, of all users on the system. These days these "privileged ps hacks" are largely behind us: UNIX systems have all come up with different ways of querying such information (/proc on Linux, etc) I think all(?) of these consider a process's environment only to be readable by its uid. Thus, security-sensitive data like passwords in the environment aren't leaked. However, the old ways aren't 100% dead. Just as an example, here's an example from an AIX 5.2 machine I have access to, running as a non-root user [AIX 5.2 reached end-of-life in 2009. AIX, at least by 6100-09, and also confirmed on 7.2, now prevents non-root users from seeing the environment of other users' processes with the "ps ewwwax" command.] ... For the record, some while back we discovered (on IRC?) that OpenBSD 5.2 has this exact security exposure of leaking the environment to other local users (it was fixed shortly after that release, though). [OpenBSD 5.2 was released in 2012] This does not explain why the MySQL manual considered that using an environment variable is extremely insecure, compared to a command line argument. See the other answers to this question. In short, either the manual is confused, or the point is that it can be too easy for environment variables to be "leaked" by mistake. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/369566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68350/"
]
} |
369,570 | I installed httpd on a CentOS 7 server, but systemctl start httpd.service is failing. What specific sequence of commands need to be typed in order to get httpd to start correctly on CentOS 7? Error Message The precise error message extracted from the full results at bottom is as follows: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message Also, per @DopeGhoti's suggestion, the contents of the logs are: [root@localhost ~]# vi /var/log/httpd/error_log(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log. How httpd was installed: 1.) Install Apache: sudo yum -y install httpd 2.) Enable Apache as a CentOS service so that it will automatically restart on reboot: sudo systemctl enable httpd.service 3.) Configure Firewalld sudo firewall-cmd --zone=public --add-service=httpsudo firewall-cmd --list-allsudo firewall-cmd --zone=public --permanent --add-service=http 4.) Give the server a name: vi /etc/httpd/conf/httpd.conf//Uncomment the ServerName line and give it the IP of the machine: ServerName 192.168.1.5:80 The error message: After installing httpd using the above commands, httpd is failing to start as follows: [root@localhost ~]# systemctl start httpd.serviceJob for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.[root@localhost ~]# systemctl status httpd.service -lβ httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2017-06-06 11:31:32 PDT; 15min ago Docs: man:httpd(8) man:apachectl(8) Process: 32268 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FAILURE) Process: 32267 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=1/FAILURE) Main PID: 32267 (code=exited, status=1/FAILURE)Jun 06 11:31:32 localhost.localdomain systemd[1]: Starting The Apache HTTP Server...Jun 06 11:31:32 localhost.localdomain httpd[32267]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. 
Set the 'ServerName' directive globally to suppress this messageJun 06 11:31:32 localhost.localdomain systemd[1]: httpd.service: main process exited, code=exited, status=1/FAILUREJun 06 11:31:32 localhost.localdomain kill[32268]: kill: cannot find process ""Jun 06 11:31:32 localhost.localdomain systemd[1]: httpd.service: control process exited, code=exited status=1Jun 06 11:31:32 localhost.localdomain systemd[1]: Failed to start The Apache HTTP Server.Jun 06 11:31:32 localhost.localdomain systemd[1]: Unit httpd.service entered failed state.Jun 06 11:31:32 localhost.localdomain systemd[1]: httpd.service failed.[root@localhost ~]# systemctl status httpd.service -l[root@localhost ~]# vi /var/log/httpd/error_log(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs(13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log.AH00015: Unable to open logs~"/var/log/httpd/error_log" 10L, 675C @JeffSchaller's suggestion After @JeffSchaller suggested to consider SELinux, I found that typing setenforce 0 as root resulted in the following: [root@localhost ~]# sestatusSELinux status: enabledSELinuxfs mount: /sys/fs/selinuxSELinux root directory: /etc/selinuxLoaded policy name: targetedCurrent mode: enforcingMode from config file: enforcingPolicy MLS status: enabledPolicy deny_unknown status: allowedMax kernel policy version: 28[root@localhost ~]# setenforce 0[root@localhost ~]# sestatusSELinux status: enabledSELinuxfs mount: /sys/fs/selinuxSELinux root directory: /etc/selinuxLoaded policy name: targetedCurrent mode: permissiveMode from config file: enforcingPolicy MLS status: enabledPolicy deny_unknown status: allowedMax kernel policy version: 28[root@localhost ~]# systemctl start httpd.service -l[root@localhost ~]# systemctl status httpd.service -lβ httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2017-06-06 12:28:38 PDT; 22s ago Docs: man:httpd(8) man:apachectl(8) Process: 32577 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FAILURE) Main PID: 32690 (httpd) Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec" CGroup: /system.slice/httpd.service ββ32690 /usr/sbin/httpd -DFOREGROUND ββ32691 /usr/sbin/httpd -DFOREGROUND ββ32692 /usr/sbin/httpd -DFOREGROUND ββ32693 /usr/sbin/httpd -DFOREGROUND ββ32694 /usr/sbin/httpd -DFOREGROUND ββ32695 /usr/sbin/httpd -DFOREGROUNDJun 06 12:28:38 localhost.localdomain systemd[1]: Starting The Apache HTTP Server...Jun 06 12:28:38 localhost.localdomain systemd[1]: Started The Apache HTTP Server.[root@localhost ~]# | Apache failed to start, with an error saying (13)Permission denied: AH00091: httpd: could not open error log file /var/www/mytestdeployment/error.log. AH00015: Unable to open logs Since SELinux was in enforcing mode, it prevented Apache from writing to the non-standard log directory. 
In order to keep Dan Walsh from weeping and CodeMed productive, we can apply the httpd_log_t policy to that directory: semanage fcontext -a -t httpd_log_t "/var/www/mytestdeployment(/.*)?"restorecon -Rv /var/www/mytestdeployment and confirm with: ls -lZ /var/www/mytestdeployment If you don't have the semanage utility, you can install it with: yum install policycoreutils-python | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/369570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |