source_id (int64, 1-74.7M) | question (string, 0-40.2k chars) | response (string, 0-111k chars) | metadata (dict) |
---|---|---|---|
331,803 | Anytime I attempt to install libpng12-0 with this command: sudo apt-get install libpng12-0 I get this returned: Reading package lists... DoneBuilding dependency tree Reading state information... DoneThe following packages were automatically installed and are no longer required: gstreamer0.10-plugins-base libavformat53 libcdaudio1 libgcrypt11 libgnutls-deb0-28 libgnutls26 libgsoap5 libgstreamer-plugins-bad0.10-0 libgstreamer-plugins-base0.10-0 libgstreamer0.10-0 libhogweed2 libjasper1 libmimic0 libmpg123-0 libnettle4 libpostproc52 libqt4-dbus libqt4-network libqt4-opengl libqt4-xml libqtcore4 libqtdbus4 libqtgui4 librtmp0 libslv2-9 libsoundtouch0 libswscale2 libtasn1-3 libuv1 libvncserver0 libwildmidi1 qdbus qt-at-spi qtchooser qtcore4-l10n virtualbox-dkmsUse 'sudo apt autoremove' to remove them.The following NEW packages will be installed: libpng12-00 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.Need to get 173 kB of archives.After this operation, 273 kB of additional disk space will be used.Get:1 http://debian.cc.lehigh.edu/debian jessie/main amd64 libpng12-0 amd64 1.2.50-2+deb8u2 [173 kB]Fetched 173 kB in 0s (493 kB/s) (Reading database ... 182049 files and directories currently installed.)Preparing to unpack .../libpng12-0_1.2.50-2+deb8u2_amd64.deb ...Unpacking libpng12-0:amd64 (1.2.50-2+deb8u2) ...dpkg: error processing archive /var/cache/apt/archives/libpng12-0_1.2.50-2+deb8u2_amd64.deb (--unpack): unable to install new version of '/usr/lib/x86_64-linux-gnu/libpng12.so.0': No such file or directoryErrors were encountered while processing: /var/cache/apt/archives/libpng12-0_1.2.50-2+deb8u2_amd64.debE: Sub-process /usr/bin/dpkg returned an error code (1) I have NO IDEA how to work around this. I do have libpng16-16 installed, but I don't see why that would cause an issue. I've tried everything from downloading the .deb manually and installing it to trying to symlink the libpng16-16 so to that location. All of it gave me no luck. Anyone have any advice? Further information: any attempt to symlink another .so into the path provided ( /usr/lib/x86_64-linux-gnu/libpng12.so.0 ) results in the deletion of that symlink and the same error. The .so I was attempting to symlink as a fix, was libpng.so which is provided by libpng-dev (or libpng16-16 ). | I have this resolved now. I went to the Debian forums and asked my question here , where a helpful member pointed out that libpng12-0 isn't available for Stretch (should have specified my OS version earlier, sorry). I was trying to install the Jessie version, and that just... doesn't work with Stretch right now. There's a version of libpng12-0 in Sid, currently. It should make it's way to Stretch in the near future to solve this issue. In the meantime, I abandoned installing the Jessie libpng12-0 package, and just did the Wheezy package, which is version 1.2.49 instead of 1.2.50 , which worked like a charm. Until 1.2.50 is out for Stretch, I recommend installing Wheezy's 1.2.49 . Thanks again for the help to everyone who replied and commented, you all are truly wonderful human beings! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/331803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86495/"
]
} |
331,837 | Usually, if you edit a scrpit, all running usages of the script are prone to errors. As far as I understand it, bash (other shells too?) read the script incrementally, so if you modified the script file externally, it starts reading the wrong stuff. Is there any way to prevent it? Example: sleep 20echo test If you execute this script, bash will read the first line (say 10 bytes) and go to sleep. When it resumes, there can be different contents in the script starting at 10-th byte. I may be in the middle of a line in the new script. Thus the running script will be broken. | Yes shells, and bash in particular, are careful to read the file one line at a time, so it works the same as when you use it interactively. You'll notice that when the file is not seekable (like a pipe), bash even reads one byte at a time to be sure not to read past the \n character. When the file is seekable, it optimises by reading full blocks at a time, but seek back to after the \n . That means you can do things like: bash << \EOFread varvar's contentecho "$var"EOF Or write scripts that update themselves. Which you wouldn't be able to do if it didn't give you that guarantee. Now, it's rare that you want to do things like that and, as you found out, that feature tends to get in the way more often than it is useful. To avoid it, you could try and make sure you don't modify the file in-place (for instance, modify a copy, and move the copy in place (like sed -i or perl -pi and some editors do for instance)). Or you could write your script like: { sleep 20 echo test}; exit (note that it's important that the exit be on the same line as } ; though you could also put it inside the braces just before the closing one). or: main() { sleep 20 echo test}main "$@"; exit The shell will need to read the script up until the exit before starting to do anything. That ensures the shell will not read from the script again. That means the whole script will be stored in memory though. That can also affect the parsing of the script. For instance, in bash : export LC_ALL=fr_FR.UTF-8echo $'St\ue9phane' Would output that U+00E9 encoded in UTF-8. However, if you change it to: { export LC_ALL=fr_FR.UTF-8 echo $'St\ue9phane'} The \ue9 will be expanded in the charset that was in effect at the time that command was parsed which in this case is before the export command is executed. Also note that if the source aka . command is used, with some shells, you'll have the same kind of problem for the sourced files. That's not the case of bash though whose source command reads the file fully before interpreting it. If writing for bash specifically, you could actually make use of that, by adding at the start of the script: if [[ ! $already_sourced ]]; then already_sourced=1 source "$0"; exitfi (I wouldn't rely on that though as you could imagine future versions of bash could change that behaviour which can be currently seen as a limitation (bash and AT&T ksh are the only POSIX-like shells that behave like that as far as can tell) and the already_sourced trick is a bit brittle as it assumes that variable is not in the environment, not to mention that it affect the content of the BASH_SOURCE variable) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/331837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92541/"
]
} |
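The wrapper pattern recommended in the entry above, written out as a complete file. This is only a sketch: the sleep and echo lines stand in for the real body of whatever script is being protected.

```sh
#!/bin/bash
# Wrap the whole body in a function so the shell parses the entire file
# before executing anything; editing the file later cannot then confuse
# the already-running instance.
main() {
    sleep 20
    echo test
}
main "$@"; exit   # keep 'exit' on the same line so nothing past it is ever read
```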
331,840 | Where can I find infos about when the unattend updates/upgrades run and what ist done (or IF something was done)? I want to enable the unattended-upgrades (for security updates) on a debian virtual server and, yeah, on my RaspberryPi, too. Do I have to search the /var/log/apt -logs for infos about WHAT was installed and /var/log/syslog about infos WHEN there was an action? I see no CRON entry for when the update-process will run and the configs /etc/apt/apt.conf.d/20auto-upgrades and /etc/apt/apt.conf.d/50unattended-upgrades don't tell me either. Solution (credits to @bahamut): sudo cat /var/log/unattended-upgrades/unattended-upgrades.log 2016-12-22 06:35:26,489 INFO Initial whitelisted packages: 2016-12-22 06:35:26,489 INFO script for unattended-upgrades is executed2016-12-22 06:35:26,489 INFO allowed sources are: ['origin=Debian,codename=jessie,label=Debian-Security']2016-12-22 06:35:35,518 INFO Packages that will be upgraded: libsmbclient libtevent0 libwbclient0 python-samba samba samba-common samba-common-bin samba-dsdb-modules samba-libs samba-vfs-modules smbclient winbind2016-12-22 06:35:35,523 INFO dpkg-protocol written to »/var/log/unattended-upgrades/unattended-upgrades-dpkg.log« 2016-12-22 06:35:52,336 INFO all upgrades installed | Unattended upgrade has its own log-file in /var/log/unattended-upgrades/unattended-upgrades.log . It is policed by anacron. # These lines replace cron's entries1 5 cron.daily run-parts --report /etc/cron.daily7 10 cron.weekly run-parts --report /etc/cron.weekly@monthly 15 cron.monthly run-parts --report /etc/cron.monthly Additional information on what was done is located in /var/log/unattended-upgrades/unattended-upgrades-dpkg.log . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/331840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161003/"
]
} |
331,952 | Is there a way to use dpkg to view a changelog between different versions of a package? If I wanted to know e.g., why 'passwd' was being upgraded in a recent update is there a way to use dpkg to see what changed? $ dpkg -l passwdDesired=Unknown/Install/Remove/Purge/Hold| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)||/ Name Version Architecture Description+++-==============-============-============-=================================ii passwd 1:4.2-3.1 amd64 change and administer password an It's being upgraded to 1:4.2-3.3... I know with Debian I can look at the package notes and from there at the linked Debian changelog . But this doesn't apply to all deb based distros, and it's awkward for a quick look at what's new. | dpkg does not provide any facility to read the changelog of a package.you should extract the package and read the changelog dpkg -X <package.deb> <folder> then you can read the changelog using the dpkg-parsechangelog utility dpkg-parsechangelog -l <folder>/usr/share/doc/<package>/changelog.Debian.gz Since that's a real pain , if your distro is using apt-get you can use apt-get changelog <packagename> or apt changelog <packagename> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/331952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14792/"
]
} |
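A sketch of both routes described in the entry above. The .deb file name is hypothetical (substitute whichever package file you actually have), the changelog is read with zcat here rather than dpkg-parsechangelog since the installed copy is gzip-compressed, and the last command assumes an apt-based distribution.

```sh
# Manual route: unpack the .deb and read the shipped changelog.
dpkg -x passwd_4.2-3.3_amd64.deb extracted/
zcat extracted/usr/share/doc/passwd/changelog.Debian.gz | head -n 40

# Easier route on apt-based systems:
apt-get changelog passwd
```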
331,977 | I just uploaded some files to my laptop running Ubuntu 16.04 LTS using Xender (which is great). The files are pictures and 1 video .MOV extension. When I try to view or play them, I get an error stating File reading failed:VLC could not open the file "/home/blah/Videos/IMG_0006.MOV" (Permission denied).Your input can't be opened:VLC is unable to open the MRL 'file:///home/blah/Videos/IMG_0006.MOV'. Check the log for details. Here is the output of ls -al : total 497040drwxr-xr-x 2 blah blah 4096 Dec 21 11:31 .drwx------ 27 blah blah 4096 Dec 18 14:41 ..---------- 1 blah blah 358905035 Sep 5 13:19 IMG_0002.MOV---------- 1 blah blah 39697387 Sep 25 16:58 IMG_0003.MOV---------- 1 blah blah 72482166 Sep 25 16:59 IMG_0004.MOV---------- 1 blah blah 3468251 Sep 25 17:00 IMG_0005.MOV---------- 1 blah blah 34355357 Sep 25 17:00 IMG_0006.MOV I have searched online, and don't find anything on this type of issue. Any help? Thanks. I installed VLC and mplayer and changed the permissions as follows: -r-------- 1 blah blah 358905035 Sep 5 13:19 IMG_0002.MOV-r-------- 1 blah blah 39697387 Sep 25 16:58 IMG_0003.MOV-r-------- 1 blah blah 72482166 Sep 25 16:59 IMG_0004.MOV-r-------- 1 blah blah 3468251 Sep 25 17:00 IMG_0005.MOV-r-------- 1 blah blah 34355357 Sep 25 17:00 IMG_0006.MOV And both mplayer and VLC play the file now. The fix seems to be a change in permissions. | dpkg does not provide any facility to read the changelog of a package.you should extract the package and read the changelog dpkg -X <package.deb> <folder> then you can read the changelog using the dpkg-parsechangelog utility dpkg-parsechangelog -l <folder>/usr/share/doc/<package>/changelog.Debian.gz Since that's a real pain , if your distro is using apt-get you can use apt-get changelog <packagename> or apt changelog <packagename> | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/331977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206636/"
]
} |
332,005 | Is there a way to test whether a shell function exists that will work both for bash and zsh ? | If you want to check that there's a currently defined (or at least potentially marked for autoloading) function by the name foo regardless of whether a builtin/executable/keyword/alias may also be available by that name, you could do: if typeset -f foo > /dev/null; then echo there is a foo functionfi Though note that if there's a keyword or alias called foo as well, it would take precedence over the function (when not quoted). The above should work in ksh (where it comes from), zsh and bash . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
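A quick self-contained check of the typeset -f test from the entry above; greet and no_such_fn are throwaway names used only for the demonstration, and the same lines should behave identically in bash, zsh and ksh.

```sh
greet() { echo "hello from a function"; }

# typeset -f returns success only if the name is a defined function
if typeset -f greet > /dev/null; then
    echo "greet is a function"
fi

if ! typeset -f no_such_fn > /dev/null; then
    echo "no_such_fn is not a function"
fi
```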
332,019 | I'm running an application that writes to log.txt. The app was updated to a new version, making the supported plugins no longer compatible. It forces an enormous amount of errors into log.txt and does not seem to support writing to a different log file. How can I write them to a different log? I've considered replacing log.txt with a hard link (application can't tell the difference right?) Or a hard link that points to /dev/null. What are my options? | # cp -a /dev/null log.txt This copies your null device with the right major and minor dev numbers to log.txt so you have another null . Devices are not known by name at all in the kernel but rather by their major and minor numbers. Since I don't know what OS you have I found it convenient to just copy the numbers from where we already know they are. If you make it with the wrong major and minor numbers, you would most likely have made some other device, perhaps a disk or something else you don't want writing to. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82447/"
]
} |
332,048 | I am sharing documents by running a hotspot in conjonction to dnsmasq that redirect all name queries to an IP <IP> where the documents can be found create_ap wlan0 wlan0 HereAreTheDocuments echo "address=/#/<IP>" >> /dev/dnsmasq.conf service dnsmasq start I need to force users connected to my hotspot to set my IP as their DNS. How can I force connected users to use the local DNS instead of a remote one? For instance lots of machine are using Google DNS at 8.8.8.8 and 8.8.4.4 | # cp -a /dev/null log.txt This copies your null device with the right major and minor dev numbers to log.txt so you have another null . Devices are not known by name at all in the kernel but rather by their major and minor numbers. Since I don't know what OS you have I found it convenient to just copy the numbers from where we already know they are. If you make it with the wrong major and minor numbers, you would most likely have made some other device, perhaps a disk or something else you don't want writing to. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
332,056 | I would like to take a screenshot of the KDE Plasma 5 splash screen as I am creating a new splash theme. But pressing PrtSc during the splash doesn't launch spectacle (my screenshooter) until after the splash screen is gone and the screenshot it takes is of the desktop as it appears after the splash screen. | # cp -a /dev/null log.txt This copies your null device with the right major and minor dev numbers to log.txt so you have another null . Devices are not known by name at all in the kernel but rather by their major and minor numbers. Since I don't know what OS you have I found it convenient to just copy the numbers from where we already know they are. If you make it with the wrong major and minor numbers, you would most likely have made some other device, perhaps a disk or something else you don't want writing to. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
332,061 | I have a dedicated server with 3 SSD drives in RAID 1. Output of cat /proc/mdstat : Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md4 : active raid1 sdc4[2] sdb4[1] sda4[0] 106738624 blocks [3/3] [UUU] bitmap: 0/1 pages [0KB], 65536KB chunkmd2 : active raid1 sdc2[2] sda2[0] sdb2[1] 5497792 blocks [3/3] [UUU] md1 : active raid1 sda1[0] sdc1[2] sdb1[1] 259008 blocks [3/3] [UUU] unused devices: <none> ¿How can a drive be safely removed from the soft raid without loosing any data?I would like to remove a drive from the array in order to reformat it and use it independently, while keeping the most important data mirrored. | You've got a three-way mirror there: each drive has a complete copy of all data. Assuming the drive you want to remove is /dev/sdc , and you want to remove it from all three arrays, you'd perform the following steps for /dev/sdc1 , /dev/sdc2 , and /dev/sdc4 . Step 1: Remove the drive from the array. You can't remove an active device from an array, so you need to mark it as failed first. mdadm /dev/md1 --fail /dev/sdc1mdadm /dev/md1 --remove /dev/sdc1 Step 2: Erase the RAID metadata so the kernel won't try to re-add it: wipefs -a /dev/sdc1 Step 3: Shrink the array so it's only a two-way mirror, not a three-way mirror with a missing drive: mdadm --grow /dev/md1 --raid-devices=2 You may need to remove the write-intent bitmap from /dev/md4 before shrinking it (the manual isn't clear on this), in which case you'd do so just before step 3 with mdadm --grow /dev/md4 --bitmap=none , then put it back afterwards with mdadm --grow /dev/md4 --bitmap=internal . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/202601/"
]
} |
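The steps from the entry above gathered into one sequence, assuming the member being retired is /dev/sdc1 and the array is /dev/md1; adjust both names and repeat for the other arrays and partitions as the answer describes.

```sh
mdadm /dev/md1 --fail /dev/sdc1          # an active member must be failed first
mdadm /dev/md1 --remove /dev/sdc1        # then it can be removed from the array
wipefs -a /dev/sdc1                      # erase RAID metadata so it is not re-added
mdadm --grow /dev/md1 --raid-devices=2   # shrink the three-way mirror to two-way
cat /proc/mdstat                         # confirm the array now reports two healthy members
```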
332,096 | I have an output like this: (2+05:10)(10:19)(00:45) This output represents day hours and minute. I would like to replace the ':' with 'd','h' and 'm' respectively, to get this 2d05h10m10h19m00h45m Presently, I tried, sed -e 's/(//g; s/)//g; s/+/:/g'|awk '{split($0,s,":"); print s[1]"d" s[2]"h"s[3]"m"}') which gives (and here I mess up !) 6d05h20m3d15h17m1d02h27m00d08hm00d11hm02d25hm02d30hm16d50hm5d00h39m21d48hm | Sounds like < file tr '+:)' dhm | tr -d '(' Would do it. Or to match on that pattern more explicitly: sed 's/(\([0-9]\{1,\}\)+\([0-9]\{1,\}\):\([0-9]\{1,\}\))/\1d\2h\3m/g s/(\([0-9]\{1,\}\):\([0-9]\{1,\}\))/\1h\2m/g' < file Which with some sed implementations you can simplify using extended regular expressions to: sed -E 's/\(([0-9]+)\+([0-9]+):([0-9]+)\)/\1d\2h\3m/g s/\(([0-9]+):([0-9]+)\)/\1h\2m/g' < file Or with a single s command with perl : perl -pe 's{\((?:(\d+)\+)?(\d+):(\d+)\)}{ ($1 && "$1d") . "$2h$3m"}ge' < file Or: perl -pe 's/\((\d*\+?\d+:\d+\))/$1 =~ y|+:)|dhm|r/ge' < file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170079/"
]
} |
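The tr pipeline from the entry above, runnable against the sample values from the question:

```sh
printf '%s\n' '(2+05:10)' '(10:19)' '(00:45)' |
    tr '+:)' 'dhm' | tr -d '('
# prints:
# 2d05h10m
# 10h19m
# 00h45m
```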
332,116 | I need to rollback some packages. I have a list of all the packages I need to rollback and the versions I need. All the versions I need are sitting in /var/cache/apt/archives yet apt ignored them telling me that it couldn't find the version I asked for. How can I get apt to see the older versions? I did try using dpkg-scanpackages but it seems to ignore the older ones favoring the newer ones. The command I used is apt-get -s install $(cat rollback.txt | tr '\n' ' ') . rollback.txt contains all of the packages I wish to downgrade in the correct apt format. rollback.txt . The errors are linked here: errors.log . I'm basically looking to downgrade of everything from today. I'll then go through and do an upgrade that won't brick my system. | Sounds like < file tr '+:)' dhm | tr -d '(' Would do it. Or to match on that pattern more explicitly: sed 's/(\([0-9]\{1,\}\)+\([0-9]\{1,\}\):\([0-9]\{1,\}\))/\1d\2h\3m/g s/(\([0-9]\{1,\}\):\([0-9]\{1,\}\))/\1h\2m/g' < file Which with some sed implementations you can simplify using extended regular expressions to: sed -E 's/\(([0-9]+)\+([0-9]+):([0-9]+)\)/\1d\2h\3m/g s/\(([0-9]+):([0-9]+)\)/\1h\2m/g' < file Or with a single s command with perl : perl -pe 's{\((?:(\d+)\+)?(\d+):(\d+)\)}{ ($1 && "$1d") . "$2h$3m"}ge' < file Or: perl -pe 's/\((\d*\+?\d+:\d+\))/$1 =~ y|+:)|dhm|r/ge' < file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101019/"
]
} |
332,163 | I would like to use netcat to send a piece of text to the echo service on my server, get the reply, then exit, so that I know the connection is still good. So far I've tried: echo 'test' | netcat server 7 This way netcat waits for more input rather than exiting. How can I make netcat exit after getting the reply from the echo service? | Just tried - slightly different behaviour between netcat-openbsd and netcat-traditional (Ubuntu 16.04). The OpenBSD variant does what you expect, while with netcat-traditional I need to add -q 1 to avoid waiting for more input. echo 'test' | netcat -q 1 server 7 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/332163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27695/"
]
} |
332,217 | I have a massive number of files with extensions like .0_1234 .0_4213 and .0_4132 etc. Some of these are gzip compressed and some are raw email. I need to determine which are compressed files, decompress those, and rename all files to a common extension once all compress files are decompressed. I've found I can use the file command to determine which are compressed, then grep the results and use sed to whittle the output down to a list of files, but can't determine how to decompress the seemingly random extensions. Here's what I have so far file *|grep gzip| sed -e 's/: .*$//g' I'd like to use xargs or something to take the list of files provided in output and either rename them to .gz so they can be decompressed, or simply decompress them in-line. | Don't use gzip , use zcat instead which doesn't expect an extension. You can do the whole thing in one go. Just try to zcat the file and, if that fails because it isn't compressed, cat it instead: for f in *; do ( zcat "$f" || cat "$f" ) > temp && mv temp "$f".ext && rm "$f" done The script above will first try to zcat the file into temp and, if that fails (if the file isn't in gzip format), it will just cat it. This is run in a subshell to capture the output of whichever command runs and redirect it to a temp file ( temp ). Then, the temp is renamed to the original file name plus an extension ( .ext in this example) and the original is deleted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206808/"
]
} |
332,290 | I'm using optirun from bumblebee . It is designed to start my 2nd GPU, run the command given, and shut down the 2nd GPU at the end. A simplified example: optirun echo test | cat However there's a bug in optirun that requires that I now run a follow up command to force the GPU to shut down. Can I easily wrap some complex command such as echo test | cat in a shell script such that I can run optirun, and then follow that up at the end with command (my workaround to the bug)? The quoting and all seems to be an issue preventing me from doing this with a simple shell script. | I'm confident you're simply after $@ , the argument list to a script. Trivial example: $ cat >cc.sh <<EOF#!/bin/shhead "\$@"echo I AM DONEEOF$ chmod 755 cc.sh Works with arguments: $ ./cc.sh cc.sh #!/bin/shhead "$@"echo I AM DONEI AM DONE Works with STDIN/STDOUT $ cat cc.sh | ./cc.sh | tail -n 2echo I AM DONEI AM DONE Works with a mixture: $ cat cc.sh | ./cc.sh -n 2 | tail -n 2head "$@"I AM DONE Therefore: #!/bin/shoptirun "$@"rmmod <mod> [perhaps >/dev/null 2>&1 if you need to ignore errors from rmmod] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9035/"
]
} |
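A sketch of such a wrapper. optirun and "$@" come from the question and answer; the rmmod line is only a placeholder for whatever follow-up command actually shuts the GPU down on a given system, and the module name nvidia is a guess. Invoked as e.g. ./optwrap echo test | cat, the arguments and the surrounding pipeline behave exactly as they would for optirun itself.

```sh
#!/bin/sh
# Run the real command under optirun, remember how it went,
# then run the clean-up step regardless.
optirun "$@"
status=$?
sudo rmmod nvidia >/dev/null 2>&1   # placeholder clean-up step; errors ignored
exit "$status"
```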
332,348 | I want to join pdf files by pdfjoin / pdfunite /... in the numerical order discussed well in the thread answer linux command merge pdf files with numerical sort and Modified time order. If you use the solution in the thread, it puts the order in the numerical order and alphabetical order. This is problematic with the filenames such as where you see both have the same Modified time by minute accuracy but Visceral is earlier by second accuracy (File browser notes it and puts Visceral first in the Modified order. Filename Modified----- ---3.THE ABC.pdf 10:39 3.Visceral abc..pdf 10:39 Complete filenames 1.Description abc.pdf2.Gabcd.pdf3.THE ABC.pdf3.Visceral abc..pdf4.description of abc.pdf5.Chraa..pdf Proposal #1 works in the numerical and alphabetical order but not in the numerical and modified order # https://stackoverflow.com/a/23643544/54964ls -v *.pdf | ... bash -c 'IFS=$'"'"'\n'"'"' read -d "" -ra x;pdfunite "${x[@]}" output.pdf' Proposal #2 simplified case but does not deal whitespaces and other special characters in filenames # https://stackoverflow.com/a/23643544/54964pdfunite $(ls *.pdf | sort -n) output.pdf There is nothing in the pdfunite --help about the ordering so I think it should be done by ls / sort /...The command sort does not have anything about modified in its man page. Testing xhienne's answer The order is not correct in the output where you see 2.jpg and 4.jpg are at the wrong order for some reason masi@masi:~/Documents$ ls -tr /home/masi/Documents/[0-9]* | sort -t. -k1,1n -s/home/masi/Documents/1.jpg/home/masi/Documents/3.jpg/home/masi/Documents/5.jpg/home/masi/Documents/6.jpg/home/masi/Documents/7.jpg/home/masi/Documents/8.jpg/home/masi/Documents/9.jpg/home/masi/Documents/10.jpg/home/masi/Documents/2.jpg/home/masi/Documents/4.jpg 2nd iteration export LC_ALL=C; ls -tr /home/masi/Documents/[0-9]* | sort -t. -k1,1n -s Output /home/masi/Documents/1.jpg/home/masi/Documents/3.jpg/home/masi/Documents/5.jpg/home/masi/Documents/6.jpg/home/masi/Documents/7.jpg/home/masi/Documents/8.jpg/home/masi/Documents/9.jpg/home/masi/Documents/10.jpg/home/masi/Documents/2.jpg/home/masi/Documents/4.jpg OS: Debian 8.5 | You could do that with zsh : zmodload zsh/statprefixmtime () {sortstring=${(l:6::0:)${REPLY%%.*}}$(zstat -F '%s' +mtime -- $REPLY)REPLY=${sortstring}}print -rl -- *(o+prefixmtime) Replace print -rl with your command if you're happy with the result How it works: The globs will sort here (via o+function ) based on what the function prefixmtime returns, that is sortstring which is a string obtained by concatenating the numerical prefix of each file name ${REPLY%%.*} left- padded with zeros (l:6::0:) (assuming prefixes are up to 6-chars long) and the mtime in seconds (obtained via zstat module). It may be easier to understand how it works if you run: { for f (*)printf '%s %s\n' ${(l:6::0:)${f%%.*}}$(zstat -F '%s' +mtime -- $f) $f} | sort -k1,1n Note that the above assumes you're in the same directory with your files, otherwise you'll have to define the sort string in that function as sortstring=${(l:6::0:)${${REPLY##*/}%%.*}}$(zstat -F '%s' +mtime -- $REPLY) and then you can use directory paths e.g. print -rl some/place/else/*(o+prefixmtime) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
332,372 | If you run, say, Nemo from a bash shell, you're not going to be able to execute any other commands from that shell until you end the Nemo process. And that's my problem. I want to be able to run other commands without having to open another shell and without having to end the started process. | exec Nemo would not run Nemo in a shell sub-process; that is, it would execute Nemo in the process of the shell, i.e. replace the shell with Nemo. But it doesn't look like that's what you want. What it looks like you want here is for the command to run in a separate process but for the shell not to wait for that process to finish before issuing the next prompt. For that, you'd use: Nemo & That runs Nemo asynchronously. When it's done from an interactive shell, Nemo is also put in the background so as to be prevented from reading from the terminal or from getting killed by a SIGINT if you press Ctrl+C. Note that the command will still be in the shell's job table. You can put it back in the foreground with fg . The shell will also try to kill it when it exits (at least in some shells). With some shells, including bash , you can also use the disown command to tell the shell to forget about it. With zsh , you can also start it as: Nemo &! to start it in background and disown it straight away. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/332372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206922/"
]
} |
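What that looks like at the prompt, assuming the binary is actually called nemo (lower case):

```sh
nemo &     # start it asynchronously; the prompt comes back immediately
disown     # optional in bash: drop it from the job table so the shell won't kill it on exit

# zsh shorthand that backgrounds and disowns in one step:
#   nemo &!
```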
332,419 | I'm using tmux 2.1 and tried to on mouse mode with set -g mouse on And it works fine, I can switch across tmux window splits by clicking the appropriate window. But the downside of this is that I cannot select text with mouse. Here is how it looks like: As you can see, the selection just become red when I keep pressing the mouse button and disappear when I release the button. Without mouse mode enabled the "selection with mouse" works completely fine. Is there some workaround to turn mouse mode on and have the ability to select text? | If you press Shift while doing things with the mouse, that overrides the mouse protocol and lets you select/paste. It's documented in the xterm manual for instance, and most terminal emulators copy that behavior. Notes for OS X: In iTerm, use Option instead of Shift . In Terminal.app, use Fn . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/332419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56819/"
]
} |
332,444 | I'm attempting a batch rename of camelcase-named files to include spaces between adjacent upper and lower case leters. I'm using Mac OS so the utils I'm using are the BSD-variant. For example: 250 - ErosPhiliaAgape.mp3 => 250 - Eros Philia Agape.mp3 I'm trying to find the relevant files and pipe them to mv which runs sed in a subshell, using this command: find . -name "*[a-z][A-Z]*" -print0 | xargs -0 -I {} mv {} "$(sed -E 's/([a-z])([A-Z])/\1 \2/g')" Individually, these commands work fine: find pulls up the correct files and sed renames it correctly, but when I combine them with xargs, nothing happens. What do I need to change to make this work? | The problem is that the $(...) sub-shell in your command is evaluated at the time you run the command, and not evaluated by xargs for each file.You could use find 's -exec instead to evaluate commands for each file in a sh , and also replace the quoting appropriately: find . -name "*[a-z][A-Z]*" -type f -exec sh -c 'echo mv -v "{}" "$(echo "{}" | sed -E "s/([a-z])([A-Z])/\1 \2/g")"' \; If the output of this looks good, drop the echo in echo mv .Note that due to echo | sed , this won't work with filenames with embedded \n . (But that was an already existing limitation in your own attempt, so I hope that's acceptable.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
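A variant of the find/-exec command from the entry above, shown as a dry run. It passes each file name to the inner shell as a positional parameter instead of embedding {} in the script text, which is a little safer with unusual file names; drop the echo once the printed mv commands look right.

```sh
find . -name "*[a-z][A-Z]*" -type f -exec sh -c '
    new=$(printf "%s" "$1" | sed -E "s/([a-z])([A-Z])/\1 \2/g")
    echo mv -v "$1" "$new"
' sh {} \;
```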
332,457 | I'm having a problem with the Esc key when I want to return to command mode from insert mode. Does another key exist that can be used to leave insert mode? | Ctrl - [ sends the same character to the terminal as the physical Esc key. The latter is simply a shortcut for the former, generally. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198926/"
]
} |
332,474 | I want to list all files in /usr/ using ls . I am not calling ls directly, but via xargs . Moreover, I am using xargs parameters -L and -P to utilize all my cores. find /usr/ -type f -print0 | xargs -0 -L16 -P4 ls -lAd | sort -k9 > /tmp/aaa the above command works as expected. It produces nice output. However when I increase the number of lines -L parameter from 16 to 64: find /usr/ -type f -print0 | xargs -0 -L64 -P4 ls -lAd | sort -k9 > /tmp/bbb the resulting output is all garbled up. What I mean by that is, output no longer starts on new line, new lines start in the middle of "previous" line and are all mixed up: -rw-r--r-- 1 root root 5455 Nov 16 2010 /usr/shareonts/X11/encodings/armscii-8.enc.gz-rw-r--r-- 1 root root 1285 May 29 2016-rw-r--r-- 1 root root 6205 May 29 2016 /usr/include/arpa/nameser_compat.h-rw-r--r-- 1 root root 0 Apr 17 20-rw-r--r-- 1 root root 933 Apr 16 2012 /usr/share/icons/nuoveXT2/16x16/actions/address-book-new.png-rw-r--r-- 1 root root 53651 Jun 17 2012-rw-r--r-- 1 root root 7117 May 29 2016 /usr/include/dlfcn.h-rw-r--r-- 1 root root 311 Jun 9 2015-rw-r--r-- 1 root root 1700 Jun 9 2015 /usr/share/cups/templates/de/add-printer.tmpl-rw-r--r-- 1 root root 5157 M1 root root 10620 Jun 14 2012 /usr/lib/perl5/Tk/pTk/tkIntXlibDecls.m-rw-r--r-- 1 root -rwxr-xr-x 1 root root 1829 Jan 22 2013 /usr/lib/emacsen-common/packages/install/dictionaries-common-rw-r--r-- 1 root r-rw-r--r-- 1 root root 1890 Jun 2 2012 /usr/share/perl5/Date/Manip/TZ/afaddi00.pm-rw-r--r-- 1 root root 1104 Jul-rw-r--r-- 1 root root 10268 Jul 27 15:58 /usr/share/perl/5.14.2/B/Debug.pm-rw-r--r-- 1 root root 725 Apr 1-rw-r--r-- 1 root root 883 Apr 1 2012 /usr/share/icons/gnome/16x16/actions/address-book-new.png Funny thing is, it only happens when using -L64 or larger. I don't see this problem with -L16 . Can anybody explain what is happening here? | This is to do with writes to pipes. With -L16 you are running one process for each 16 files, which produces about a thousand characters, depending on how long the filenames are. With -L64 you are about four thousand. The ls program almost certainly uses the stdio library, and almost certainly uses a 4kB buffer for outputting to reduce the number of write calls. So find produces a load of filenames, then (for the -L64 case) xargs chops them into bundles of 64 and starts up 4 ls processes to handle them. Each ls will generate its first 4k of output and write it to the pipe to sort. Note that this 4k will typically not end with a newline. So say the third ls gets its first 4kB ready first, and it ends lrwxrwxrwx 1 root root 6 Oct 21 2013 bzegrep -> bzgrep -rwxr-xr-x 1 root root 4877 Oct 21 2013 bzexe lrwxrwxrwx 1 root root 6 Oct 2 and then the first ls outputs something, e.g. total 123459 then the input to sort will include lrwxrwxrwx 1 root root 6 Oct 2total 123459 In the -L16 case, the ls processes will (usually) only output a complete set of results in one go. Of course for this case you are just wasting time and resources by using xargs and ls, you should just let find output the information it already has rather than running extra programs to discover the information again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
332,498 | When I used wget , the hostname resolution is ok root:here cd$ wget https://gfe.cit.api.here.com/1/layer_put.json?layer_id=123&app_id=x2&app_code=x1 The result is Resolving gfe.cit.api.here.com... 52.51.134.116, 54.154.19.134, 52.208.9.155Connecting to gfe.cit.api.here.com|52.51.134.116|:443... connected.HTTP request sent, awaiting response... 400 Bad Request2016-12-24 13:18:47 ERROR 400: Bad Request. But when I used ping ping https://gfe.cit.api.here.com/1/layer_put.json?layer_id=123&app_id=x2&app_code=x1 The result is cannot resolve https://gfe.cit.api.here.com/1/layer_put.json?layer_id=123: Unknown host The hostname resolution failed, what's the difference between wget and ping ? | Answering to: What is the difference between 'ping' and 'wget' in relation to hostname resolution Ping expects either an IP address or a hostname as parameter. You are giving it a full URL which it tries to resolve as a hostname and fails. With everything but the fully qualified named stripped, the ping command is able to check the connection (and fails in my following test, maybe because the ICMP request is blocked or because the server is down): $ ping gfe.cit.api.here.comPinging cle2-cit.eu-west-1.elasticbeanstalk.com [54.154.19.134] with 32 bytes of data:Request timed out. For the general difference between ping and wget , see Alec's answer. For a probable reason explaining the Error 400, see roaima's one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167600/"
]
} |
332,531 | Bash Manual says: Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd , or the secure shell daemon sshd . If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc , if that file exists and is readable. This Bash sources ~/.bashrc : ssh user@host : But this Bash sources ~/.bash_profile : ssh user@host I don't see a difference in these two commands according to the spec. Isn't stdin connected to a network connection in both cases? | A login shell first reads /etc/profile and then ~/.bash_profile . A non-login shell reads from /etc/bash.bashrc and then ~/.bashrc . Why is that important? Because of this line in man ssh : If command is specified, it is executed on the remote host instead of a login shell. In other words, if the ssh command only has options (not a command), like: ssh user@host It will start a login shell, a login shell reads ~/.bash_profile . An ssh command which does have a command , like: ssh user@host : Where the command is : (or do nothing). It will not start a login shell, therefore ~/.bashrc is what will be read. Remote stdin The supplied tty connection for /dev/stdin in the remote computer may be an actual tty or something else. For: $ ssh isaac@localhost/etc/profile sourced$ ls -la /dev/stdinlrwxrwxrwx 1 root root 15 Dec 24 03:35 /dev/stdin -> /proc/self/fd/0$ ls -la /proc/self/fd/0lrwx------ 1 isaac isaac 64 Dec 24 19:34 /proc/self/fd/0 -> /dev/pts/3$ ls -la /dev/pts/3crw--w---- 1 isaac tty 136, 3 Dec 24 19:35 /dev/pts/3 Which ends in a TTY (not a network connection) as the started bash sees it. For a ssh connection with a command: $ ssh isaac@localhost 'ls -la /dev/stdin'isaac@localhost's password: lrwxrwxrwx 1 root root 15 Dec 24 03:35 /dev/stdin -> /proc/self/fd/0 The list of TTY's start the same, but note that /etc/profile was not sourced. $ ssh isaac@localhost 'ls -la /proc/self/fd/0'isaac@localhost's password:lr-x------ 1 isaac isaac 64 Dec 24 19:39 /proc/self/fd/0 -> pipe:[6579259] Which tells the shell that the connection is a pipe (not a network connection). So, in both the test cases, the shell is unable to know that the connection is from a network and therefore does not read ~/.bashrc (if we only talk about the connection to a network). It does read ~/.bashrc, but for a different reason. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/332531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7157/"
]
} |
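One way to see the difference directly, assuming the remote account's login shell is bash; host is a placeholder for the real machine:

```sh
# With a remote command: not a login shell, so ~/.bashrc (not ~/.bash_profile) applies
ssh host 'shopt -q login_shell && echo login || echo non-login'   # -> non-login

# Without a command: you get a login shell
ssh host
# then, at the remote prompt:
#   shopt -q login_shell && echo login || echo non-login           # -> login
```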
332,532 | Assuming user has /bin/bash as the shell in /etc/passwd . Then ssh user@host command runs the command using Bash. However, that shell is neither login nor interactive, which means neither ~/.bash_profile nor ~/.bashrc is sourced. In that case how to set the PATH environment variable so that executables can be found and executed? Is it recommended to prefix the actual command with source ~/.bashrc ? Edit. This question is trivial for Bash, because (as people pointed out) ~/.bashrc is sourced in such case. The definitive answer comes from this paragraph in man bash : Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc , if that file exists and is readable. It will not do this if invoked as sh . The --norc option may be used to inhibit this behavior, and the --rcfile option may be used to force another file to be read, but neither rshd nor sshd generally invoke the shell with those options or allow them to be specified. | You have few possibilities: Set the PATH on the server in ~/.ssh/environment (needs to be enabled by PermitUserEnvironment yes in sshd_config ). Use full path to the binary As you mentioned, manually source .bashrc : prefix the command with . ~/.bashrc (or source ) It pretty much depends on the use case, which way you will go. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7157/"
]
} |
332,537 | The process I'm running sometimes generates core file, and that file has following file permissions: server:~ # ls -l /mnt/process/core/core_segfault -rw------- 1 root root 245760 Dec 2 11:29 /mnt/process/core/core_segfault The issue is that only root user can open it for investigation, while I'd like everyone with access to it to be able to read it without me always setting permissions manually. How could I set default permissions to something like -rw-rw-rw- ? | Since core files contain the complete memory layout of the process at the time it crashed, they may contain sensitive information. For this reason, core files are created with ownership set to the uid of the process at the time of its crash, and permissions set rather restrictive. There is no setting to change that easily. However, what you can do is to set the kernel.core_pattern sysctl setting to a program (which must start with a pipe character, | ). The kernel will then call that program when a core file is generated, instead of dumping it to disk. This program should be able to generate the core file with the permissions you want. Examples of programs that do so are systemd-coredump and apport . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/332537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207031/"
]
} |
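For illustration, the shape of such a hook. The helper path and its contents are hypothetical; systemd-coredump and apport are real implementations of the same idea. Changing the sysctl requires root.

```sh
# Point the kernel at a pipe handler instead of a plain file
# (%p = PID, %e = executable name; see core(5) for the full list):
sysctl -w kernel.core_pattern='|/usr/local/bin/core-helper %p %e'

# A minimal hypothetical helper that saves the dump group-readable:
#   #!/bin/sh
#   umask 027
#   cat > "/var/crash/core.$2.$1"
```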
332,556 | I'm installing arch Linux and I typed in the command grub-mkconfig -o /boot/grub/grub.cfg And it responded with WARNING: Failed to connect to lvmetad. Falling back to device scanning. What do I do? | This worried me as well. From some digging about on the GRUB Arch Wiki page : Warning when installing in chroot When installing GRUB on a LVM system in a chroot environment (e.g. during system installation), you may receive warnings like /run/lvm/lvmetad.socket: connect failed: No such file or directory or WARNING: failed to connect to lvmetad: No such file or directory. Falling back to internal scanning. This is because /run is not available inside the chroot. These warnings will not prevent the system from booting, provided that everything has been done correctly, so you may continue with the installation. So looks like there's no need to worry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206695/"
]
} |
332,559 | I was reading the man page of resolv.conf and came across sortlist . What is the use of it? The man page shows only a list of network/IP addresses after the sortlist keyword, not the sorting criterion. How do those addresses map to sorting? I searched for material about this question but did not find an answer. | sortlist is used to move matching IP addresses in DNS responses to the front of the result list with the intention that applications will use them preferentially. It's a bit obsolete though. Nowadays we have a better standard for that, in the form of RFC 3484 (see section 6). RFC 3484 is much better than the sortlist hack because: It supports IPv6 [better]. It takes source address selection into account. It's not specific to DNS (it's hooked into the libc name service, a layer above). It's a standard. RFC 3484 style destination address selection is configured in /etc/gai.conf . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332559",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191967/"
]
} |
332,579 | Using eval is often discouraged because it allows execution of arbitrary code. However, if we use eval echo , then it looks like the rest of the string will become arguments of echo so it should be safe. Am I correct on this? | Counterexample: DANGEROUS=">foo"eval echo $DANGEROUS The arbitrary arguments to echo could have done something more nefarious than creating a file called "foo". | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7157/"
]
} |
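The counterexample above, runnable in a throwaway directory so the side effect is easy to see:

```sh
cd "$(mktemp -d)"
DANGEROUS=">foo"
eval echo $DANGEROUS   # prints nothing: the "argument" became a redirection
ls                     # a file named foo now exists
```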
332,641 | I'd like to install the latest Python, which is 3.6 at the time of this post. However, the repository is saying that Python 3.4.2 is the newest version. I've tried: $ sudo apt-get update$ sudo apt-get install python3python3 is already the newest version.$ python -VPython 3.4.2 To upgrade to Python 3.6 on my Windows workstation, I simply downloaded an exe, clicked "next" a few times, and it's done. What's the proper and officially accepted procedure to install Python 3.6 on Debian Jessie? | You can install Python-3.6 on Debian 8 as follows: wget https://www.python.org/ftp/python/3.6.9/Python-3.6.9.tgztar xvf Python-3.6.9.tgzcd Python-3.6.9./configure --enable-optimizations --enable-sharedmake -j8sudo make altinstallpython3.6 It is recommended to use make altinstall according to the official website . If you want pip to be included, you need to add --with-ensurepip=install to your configure call. For more details see ./configure --help . Warning: make install can overwrite or masquerade the python binary. make altinstall is therefore recommended instead of make install since it only installs exec_prefix/bin/pythonversion . Some packages need to be installed to avoid some known problems, see: Common build problems (updated) Ubuntu/Debian: sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \xz-utils tk-dev libffi-dev liblzma-dev Alternative of libreadline-dev: sudo apt install libedit-dev Fedora/CentOS/RHEL(aws ec2): sudo yum install zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel \openssl-devel xz xz-devel libffi-devel Alternative of openssl-devel: sudo yum install compat-openssl10-devel --allowerasing Update You can download the latest python-x.y.z.tar.gz from here . To set a default python version and easily switch between them , you need to update your update-alternatives with the multiple python version. Let's say you have installed the python3.7 on debian stretch , use the command whereis python to locate the binary ( */bin/python ). e,g: /usr/local/bin/python3.7/usr/bin/python2.7/usr/bin/python3.5 Add the python versions: update-alternatives --install /usr/bin/python python /usr/local/bin/python3.7 50update-alternatives --install /usr/bin/python python /usr/bin/python2.7 40update-alternatives --install /usr/bin/python python /usr/bin/python3.5 30 The python3.7 with the 50 priority is now your default python , the python -V will print: Python 3.7.0b2 To switch between them, use: update-alternatives --config python Sample output: There are 3 choices for the alternative python (providing /usr/bin/python). Selection Path Priority Status------------------------------------------------------------* 0 /usr/local/bin/python3.7 50 auto mode 1 /usr/bin/python2.7 40 manual mode 2 /usr/bin/python3.5 30 manual mode 3 /usr/local/bin/python3.7 50 manual modePress <enter> to keep the current choice[*], or type selection number: | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/332641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207108/"
]
} |
332,664 | I have a file like: http://example.mxhttps://test.comhttp://4.3.4.4http://dev.somedomain.comhttp://1.3.4.2 I want to get rid of the ones with IPs so that the result should be: http://example.mxhttps://test.comhttp://dev.somedomain.com What I did is: cat in.log | sed '/^http:\/\/[0-9]/d' > out.log But it does not work. | But it does work. The correct answer appears to be already in your question. $ cat in.loghttp://example.mxhttps://test.comhttp://4.3.4.4http://dev.somedomain.comhttp://1.3.4.2$ cat in.log | sed '/^http:\/\/[0-9]/d' > out.log$ cat out.loghttp://example.mxhttps://test.comhttp://dev.somedomain.com$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31464/"
]
} |
332,672 | I looked at another similar question about adding third-party repos. I am trying to add a third-party desktop IM client called riot . While the site gives link to the third-party it gives no instructions as how to add third-party sources or keyring in Debian. I went through https://riot.im/packages/debian/pool/main/ and made the following additions in my /etc/apt/sources.list - ######## Third party repos #######deb https://riot.im/packages/debian/ stretch main Now I have two questions :- a. Is the third-party repo. I have entered is correct or should I ask for more information from upstream. b. How do I add the secure key as all packages are usually signed in the Debian Universe. The public key is given at https://riot.im/packages/debian/repo-key.asc I am on Debian stretch/testing. | You must NEVER install any 3rd party key with apt-key add , as suggested in other posts, because it would cause the system to accept signatures from the third-party keyholder on all other repositories configured on the system.You should set up the repository and install the key as follows: Create directory for manually installed OpenPGP keys: $ sudo mkdir /usr/local/share/keyrings Download the key into the directory. Since your key’s extension is .asc , it is probably "ascii-armored" (you can check this by downloading they key and opening it in a text editor: if it starts with something like -----BEGIN PGP PUBLIC KEY BLOCK----- then it is armored; if it looks like a set of some binary data, then it is not armored and you can use it as it is): for an armored key: $ curl https://riot.im/packages/debian/repo-key.asc | gpg --dearmor | sudo dd of=/usr/local/share/keyrings/riot-archive-keyring.gpg If the key is not armored, then use this command instead: $ sudo wget -O /usr/local/share/keyrings/riot-archive-keyring.gpg https://riot.im/packages/debian/repo-key.asc Add the desired 3rd party repository into the list of sources (pay attention to the signed-by option, it tells APT that the repo is signed with the specific key): It is recommended to use the new deb822 multiline format for sources now. So create new .sources file with the respective content below: $ sudoedit /etc/apt/sources.list.d/riot.sources Types: debURIs: https://riot.im/packages/debian/Suites: stretchComponents: mainSigned-By: /usr/local/share/keyrings/riot-archive-keyring.gpg Or if you prefer the legacy style (one line per source), use this command instead:: $ echo "deb [signed-by=/usr/local/share/keyrings/riot-archive-keyring.gpg] https://riot.im/packages/debian/ stretch main" | sudo tee -a /etc/apt/sources.list.d/riot.list Restrict the 3rd party repository to some specific software package only. Create preference control file for APT: $ sudoedit /etc/apt/preferences.d/riot.pref Put the following content into the file (if necessary, you can append the package name with asterisk ( * ) as a wildcard or list multiple package names separated by space ( ): Package: *Pin: origin riot.imPin-Priority: 1Package: riot-webPin: origin riot.imPin-Priority: 500 You can find official information from Debian here: https://wiki.debian.org/DebianRepository/UseThirdParty | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/332672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
332,678 | I realise that there are many similar questions like this but I have not found one that answers my explicit query. I am still using Linux Fedora 20, and it is well past the time when I should upgrade to the latest version. I have started using Deja Dup for backup of my /home directory on to an external one terabyte hard drive; my question is, please, what other directories should I backup as well before I start the installation? | For /etc , use etckeeper . It stores /etc under version control, taking care of preserving permissions and ownership. Before an upgrade, make sure that you've committed the latest changes, and set a tag (e.g. git tag fedora20-before-upgrade ). Also make a list of all the packages you currently have installed ( rpm -ql >/var/tmp/fedora20-package-list.txt ). That could be useful if the upgrade ends up removing some package to make dependencies work. Other than that, there isn't anything that's especially at risk during upgrades. Home directories and local installations (e.g. under /usr/local ) should not be touched, and the rest of the system should be managed by the upgrade. Of course, like any other time, you should have up-to-date backups in case something unexpected happens. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18182/"
]
} |
332,691 | I want to construct an xml string by inserting variables: str1="Hello"str2="world"xml='<?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2>'echo $xml The result should be <?xml version="1.0" encoding="iso-8859-1"?><tag1>Hello</tag1><tag2>world</tag2> But what I get is: <?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2> I also tried xml="<?xml version="1.0" encoding="iso-8859-1"?><tag1>$str1</tag1><tag2>$str2</tag2>" But that removes the inner double quotes and gives: <?xml version=1.0 encoding=iso-8859-1?><tag1>hello</tag1><tag2>world</tag2> | You can embed variables only in double-quoted strings. An easy and safe way to make this work is to break out of the single-quoted string like this: xml='<?xml version="1.0" encoding="iso-8859-1"?><tag1>'"$str1"'</tag1><tag2>'"$str2"'</tag2>' Notice that after breaking out of the single-quoted string, I enclosed the variables within double-quotes. This is to make it safe to have special characters inside the variables. Since you asked for another way, here's an inferior alternative using printf : xml=$(printf '<?xml version="1.0" encoding="iso-8859-1"?><tag1>%s</tag1><tag2>%s</tag2>' "$str1" "$str2") This is inferior because it uses a sub-shell to achieve the same effect, which is an unnecessary extra process. As @steeldriver wrote in a comment, in modern versions of bash, you can write like this to avoid the sub-shell: printf -v xml ' ... ' "$str1" "$str2" Since printf is a shell builtin, this alternative is probably on par with my first suggestion at the top. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31464/"
]
} |
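The printf -v form mentioned at the end of the entry above, spelled out (bash-specific, since -v is a bash extension):

```sh
str1="Hello" str2="world"
printf -v xml '<?xml version="1.0" encoding="iso-8859-1"?><tag1>%s</tag1><tag2>%s</tag2>' \
    "$str1" "$str2"
printf '%s\n' "$xml"
# <?xml version="1.0" encoding="iso-8859-1"?><tag1>Hello</tag1><tag2>world</tag2>
```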
332,699 | Today pretty much all kernels use virtual memory provided by the MMU. They do that with the global page table, the address of which is located in a CPU register, and a page supervisor/mapper of pages to processes. The "vm" in vmlinuz , for example, means that the linux kernel supports virtual memory. All that is possible because the MMU maps continuous addresses of memory to the memory segments understood by the x86 architecture. The original UNIX kernel did have a vmunix version, which, I believe, must have used a similar technique. Yet, the original UNIX kernel was written before MMUs were available. If I'm not mistaken the original UNIX kernel (called simply unix ), was written before the existence of the x86 architecture. Historically it did run on the PDP-9 and PDP-11. How that kernel performed memory addressing and management? Was it a segment based addressing (two numbers) or full memory addressing (a single number)? How it separated memory between processed? | Virtual memory is almost a decade older than Unix: there was one in the Burroughs B5000 in 1961. It didn't have an MMU in the modern sense (i.e. based on pages) but provided the same basic functions. IBM System/360 Model 67 in 1965 (still older than Unix) had an MMU. Intel x86 processors didn't get an MMU until the 80386 in 1986. Implementing a Unix system doesn't actually require an MMU. It does require some form of virtual memory, otherwise implementing the fork system call is prohibitively difficult. The fork system call, to create processes by copying an existing process, was a fundamental part of Unix ever since the very first version, so it did require virtual memory. See D. M. Ritchie and K. Thompson, The UNIX Time-Sharing System , CACM, 1974 , §V “Processes and images”. I don't know the details of the hardware that the first Unix versions ran on, but they did have virtual memory in the form of a segmented architecture . The CPU translated between pointers dereferenced by a program (virtual addresses) and actual locations in memory (physical addresses). The mapping was performed by adding an offset to the virtual address. On each context switch between processes, the register containing the offset was adjusted. Although virtually all Unix implementations provide process isolation, this was not the case of some historical implementations on hardware that didn't have memory protection (both in the 1970s, and also in the 1980s with MINIX on 8088 and 80286). Memory protection is somewhat orthogonal to address virtualization; an MMU provides both, a simple segmented architecture doesn't, an MPU¹ provides protection without virtualization. There is a Linux implementation for systems without an MMU, uCLinux , but due to the lack of fork many programs can't run (the only supported of fork is vfork which requires an execve call in the child immediately afterwards). ¹ An MPU (memory protection unit) records access rights for each page of memory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172635/"
]
} |
332,712 | when I run mount , I can see my hard drive mount as fuseblk . /dev/sdb1 on /media/ecarroll/hd type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2) However, fuseblk doesn't tell me what filesystem is on my device. I found it using gparted but I want to know how to find the fs using the command line utilities. | I found the answer provided by in the comments by Don Crissti to be the best lsblk -no name,fstype This shows me exactly what I want and I don't have to unmount the device, mmcblk0 └─mmcblk0p1 exfat See also, man page on lsblk | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
332,764 | Contrary to the Linuxes I used, in FreeBSD the /usr/local directory is heavily populated by a normal installation, even without using any ports. In fact, non-basic shells (Bash and Z-Shell) are put there (in /usr/local/bin ). Under Linux it was nice to have custom-built scripts or software in the /usr/local tree for them to be clearly separated from the distribution software (e.g. to easily “deactivate” these “modifications”, by taking /usr/local out of the $PATH ). What is the reasoning behind this? And since I doubt that there is a way to make FreeBSD behave like Linux, what would be the best practices to install custom-build software and files accessible for all users? | Under Linux it was nice to have custom-built scripts or software in the /usr/local tree for them to be clearly separated from the distribution software And that is exactly what you are getting on FreeBSD . Shells like the Z shell and the Bourne Again shell are not part of FreeBSD . They are third-party additions. The operating system is sometimes referred to by the slang name "base". In the BSD world in general, third party additions on top of "base" do not live in /usr . They live in /usr/local . In the BSD world — and this is true of other BSD operating systems like OpenBSD — you get the operating itself in / and /usr , and the stuff that isn't the operating system in /usr/local . If one wants just the operating system functionality without the additions, one takes /usr/local out of consideration in what one is doing. The slight twist to this is that FreeBSD derivatives like TrueOS Server and TrueOS Desktop modestly consider their additions on top of FreeBSD to be not part of the operating system. So there's a whole load of TrueOS out-of-the-box stuff that lives in /usr/local with the non-operating-system stuff. For example: It's where you'll find PCDM, the TrueOS display manager, living. Conversely, /usr/local is where all custom-built softwares that are not parts of the operating system go. To show how strong this division is: The Mewburn rc scripts for non-operating-system stuff go in /usr/local/etc/rc.d/ and do not get added to /etc/rc.d/ . It's where you'll find /usr/local/etc/rc.d/nginx . Non-operating-system configuration files go in /usr/local/etc/ not /etc/ . It's where you'll find /usr/local/etc/cups . There's a distinction between /usr/share/man where the operating system manuals are and /usr/local/man where the non-operating-system manuals are. Even the package manager itself is (currently) not part of the operating system proper. There's a "bootstrap" package manager, pkg-static . This installs pkg , the actual package manager that has configuration files in /usr/local/etc/pkg and that is itself an add-on. The conceptual leap that you have to make in coming from the Linux "distributions" world is that you don't get an operating system built by selecting from a mish-mash of packages supplied by a "distributor". You get a full operating system as one coherent unit (installed by an installer, upgraded with freebsd-update , and maintained as a single "boot environment" using ZFS), and all of the third-party stuff as ports and packages separate from that. If you yourself are supplying third-party stuff, be you a developer or a system administrator, then you do ports and packages too, or you just put it directly in /usr/local somehow. On the other hand, custom-built softwares that are parts of the operating system go in / and /usr where the operating system lives. 
The source and build system for the entire operating system come supplied in /usr/src as part of this one self-sufficient system. You make local modifications there, share them with other people using Subversion (FreeBSD) and git (TrueOS) if you want, and rebuild either the "userland" alone or the entire operating system (both "shell" and "kernel" ) from that. Interesting side note If you nonetheless make your own structure for your own machines, you are, according to the operating system manual itself, expected to provide a local hier manual page superseding the operating system one. ☺ Further reading " Chapter 4. Installing Applications: Packages and Ports ". FreeBSD Handbook . 2016. The FreeBSD Documentation Project. FreeBSD Porter's Handbook . 2016. The FreeBSD Documentation Project. hier . §7. FreeBSD manual . " TrueOS for Linux Users ". TrueOS User Guide . 2016. https://unix.stackexchange.com/a/332441/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150422/"
]
} |
332,791 | From here I understand that to disable Ctrl + S the stty -ixon command can be used and it works, but as soon as I close the terminal and open another I have to re-enter the command. To permanently disable Ctrl + S I have made a startup.sh that contains the stty -ixon command and run it with crontab at @reboot but it does not work. So what will be the solution to permanently disable Ctrl + S ? | To disable Ctrl - s permanently in terminal just add this line at the end of your .bashrc script (generally in your home directory) stty -ixon An explanation of why this exists and what it relates to can be found in this answer: https://retrocomputing.stackexchange.com/a/7266 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/332791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95588/"
]
} |
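One small refinement to that .bashrc line (a sketch, not part of the original answer): stty prints an error when .bashrc is sourced without a terminal attached (scp, some cron setups), so the call is often guarded to interactive shells only:

# in ~/.bashrc: only touch terminal settings in interactive shells
[[ $- == *i* ]] && stty -ixon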
332,862 | I am attempting to install git on Debian 8.6 Jessie and have run into some dependency issues. What's odd is that I didn't have any issues the few times I recently installed Git in a VM while I was getting used to Linux.

apt-get install git

Results in:

The following packages have unmet dependencies:
 git : Depends: liberror-perl but is not installable
       Recommends: rsync but it is not installable
E: Unable to correct problems, you have held broken packages.

UPDATE: my sources.list. Seems to be an issue with my system. I can no longer properly install anything. I'm getting dependency issues installing things like Pulseaudio which I've previously installed successfully a few days ago. | You should edit your sources.list by adding the following line:

deb http://ftp.ca.debian.org/debian/ jessie main contrib

Then update and upgrade your packages and install git:

apt-get update && apt-get upgrade && apt-get dist-upgrade
apt-get -f install
apt-get install git

Edit: the packages git, liberror-perl and rsync can be downloaded from the main repo; because you don't have the main repo in your sources.list you cannot install git and its dependencies. Your sources.list should be (with non-free packages):

deb http://ftp.ca.debian.org/debian/ jessie main contrib non-free
deb-src http://ftp.ca.debian.org/debian/ jessie main contrib non-free
deb http://security.debian.org/ jessie/updates main contrib non-free
deb-src http://security.debian.org/ jessie/updates main contrib non-free
deb http://ftp.ca.debian.org/debian/ jessie-updates main contrib non-free
deb-src http://ftp.ca.debian.org/debian/ jessie-updates main contrib non-free
deb http://ftp.ca.debian.org/debian/ jessie-backports main contrib non-free

On Debian Stretch your /etc/apt/sources.list should be (at least):

deb http://deb.debian.org/debian stretch main
deb http://security.debian.org/ stretch/updates main
deb http://deb.debian.org/debian/ stretch-updates main | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/332862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149682/"
]
} |
332,885 | I want a bash script which does the following: Find pictures (jpg,jpeg,JPG,JPEG) recursively from current directory downwards Generate a thumbnail with imagemagick's convert Move thumbnail to other directory My current script looks like this: for f in `find . -type f -iname "*.jpg"` do convert ./"$f" -resize 800x800\> ./"${f%.jpg}_thumb.jpg" mv ./"${f%.jpg}_thumb.jpg" /home/user/thumbs/done It doesn't convert files (or folders with all content) which have spaces/special characters. I tried with print0 but it didn't help. | You could use more advanced options like -set combined with percent escapes (namely %t to extract the filename without directory or extension) to do the resize, rename and move of each file with a single convert invocation: find . -type f -iname \*.jpg -exec convert {} -resize 800x800\> \-set filename:name '%t' '/home/user/thumbs/%[filename:name]_thumb.jpg' \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207277/"
]
} |
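If you prefer to keep a loop like the one in the question, a whitespace-safe sketch (assuming GNU find and bash; the thumbnail directory is the one from the question) would be:

find . -type f -iname '*.jpg' -print0 |
while IFS= read -r -d '' f; do
    thumb="${f%.*}_thumb.jpg"
    convert "$f" -resize '800x800>' "$thumb" &&
    mv "$thumb" /home/user/thumbs/
done

Using -print0 together with read -d '' keeps filenames containing spaces or special characters intact, which is what broke the original for-loop over find output.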
332,886 | I've created a program that intentionally has a divide by zero error. If I run it in the command line it returns: "Floating point exception" But if I run this as a systemd service I can not see this error message. In my systemd script I have added: StandardError=journal But the error message is nowhere to be seen when using journalctl . How can this error message be added to the log seen with journalctl ? | To get all errors for running services using journalctl : $ journalctl -p 3 -xb where -p 3 means priority err , -x provides extra message information, and -b means since last boot | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/332886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/203765/"
]
} |
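To follow the errors of one specific unit as they happen, the same priority filter can be narrowed down (sketch; myservice stands for the actual unit name):

journalctl -p err -u myservice.service -f

Here -u limits the output to that unit and -f keeps following new entries; drop the -p filter if the message you are looking for is logged below the err priority.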
332,909 | I'm running a tiny script to update and upgrade some Debian machines but since some weeks it always stopped due to some "news" the terminal is showing up. When manually upgrading I see a "fullscreen" (find screenshot below) from some software, which forces to press "q". I don't want to change any software so I'd like to find a solution, which allows to just skip every interactive screen, while upgrading. Usually I was fine using: sudo apt-get update -y sudo apt-get upgrade -y After I realised that the upgrade process is interrupted without any timeout, I also tried to use the solution of this post : sudo DEBIAN_FRONTEND=noninteractive apt-get -y upgrade but unfortunately with the same result. Does anybody have a solution to just upgrade a machine without any interruptions? UPDATE : First I just executed: DEBIAN_FRONTEND=noninteractive Secondary edited the /etc/dpkg/dpkg.cfg file to: # dpkg configuration file## This file can contain default options for dpkg. All command-line# options are allowed. Values can be specified by putting them after# the option, separated by whitespace and/or an `=' sign.## Do not enable debsig-verify by default; since the distribution is not using# embedded signatures, debsig-verify would reject all packages.no-debsig# Log status changes and actions to a file.log /var/log/dpkg.logforce-confoldforce-confdef Finally I executed: sudo apt-get upgrade -yq This did the trick regarding "press q to quit" - great! I think it's also working to combine the commands executing: DEBIAN_FRONTEND=noninteractivesudo apt-get -o Dpkg::Options::="--force-confnew --force-confdef" --force-yes -yq upgrade Unfortunately another similar problem shows up now: Also trying to edit /etc/apt/listchanges.conf didn't work out unfortunately: [apt]frontend=noneemail_address=rootconfirm=0save_seen=/var/lib/apt/listchanges.dbwhich=news SOLUTION : I noticed (sorry if this is obvious for an advanced linux user) that bash acts different, when you execute a command via script than directly entering the command into the console. All in all it was enough for my script solution to add the -yp parameter and set the DEBIAN_FRONTEND . In order to be safe, I'd edit the /etc/dpkg/dpkg.cfg file too. #!/bin/bashDEBIAN_FRONTEND=noninteractiveexport DEBIAN_FRONTENDapt-get -yq updateapt-get -yq upgrade | you should set DEBIAN_FRONTEND=noninteractive , this will stop debconf prompts from appearing. After that, add force-confold and force-confdef to your /etc/dpkg/dpkg.cfg file.then use the -y option sudo apt-get -y update && sudo apt-get -y upgrade or use this command apt-get -o Dpkg::Options::="--force-confnew --force-confdef" --force-yes -y upgrade and if it doesn't work try apt-get -o Dpkg::Options::="--force-confnew" --force-yes -y upgrade | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/157946/"
]
} |
332,914 | I have setup a Centos Linux Server. I can connect to my server with command: ssh My_Name @ Ip_Address How can I set a name for my server so that I can connect with the command: ssh My_Name @ My_Server_Name.com The .com or '.net' does not matter, I just want to have a name for my server. | If you just want to do this for convenience, the simplest method is to add a stanza to your ssh configuration file ~/.ssh/config : Host my_server_name HostName some_ip_address User my_name ... any other options Then you can ssh my_server_name to connect. Other options include: using mDNS and connecting to the .local name your machine advertises (mDNS is called bonjour in the Apple world; a common linux implementation is avahi ). editing the /etc/hosts file on each client machine to provide a mapping from the IP address to the chosen server name. installing and configuring a DNS server on a machine in your local network, and setting it as the preferred DNS server for all other local machines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/332914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145309/"
]
} |
333,050 | Yes, I'm aware that I wrote "SSH shell" in the title. TL;DR: The first paragraph, the one with the link, and the one with the error message are most important. I have my Raspberry Pi at home which I can access over the internet but only via IPv6. I'm currently in a location where I don't have IPv6. I can execute commands on it by first logging in to a server which has both IPv4 and IPv6 and then logging into my pi from there. However, I use SSH on it for more than executing commands on it: git backups (Deja Dup) accessing files (SFTP) VNC (I tunnel through SSH and can then connect to localhost via VNC) These are in decreasing order of importance. I want to access my git repos. A few more details: I can't simply make my Pi accessible via IPv4. The modem it's behind has an IPv4 address and an IPv6 subnet but I have to use hardware I didn't choose running software I can't change. That software is not only buggy and I can't even take a look at it, but furthermore, it doesn't allow IPv4 port forwarding it all. I don't control the server with both IPv4 and IPv6 on it. I only have a normal user account on it and can't – for example – install new software if more than standard user rights are required for it. Googling for a solution brought up this rather promising page , and it actually works for git. I set up new remotes for the repos I'm using, simply replacing the pi's domain name by localhost:3333 . But it looks much more promising than that. It looks like the solution for all of the above. And it kind of started to work out! SFTP works and I can't really determine whether backups via Deja Dup work, yet, because my connection is too slow, but it hasn't failed yet, and something's causing network traffic, so that's good and promising. But why can't I just do ssh localhost:3333 to connect to my laptop to get a shell on my pi? The command results in this error message: ssh: Could not resolve hostname localhost:3333: Name or service not known I'm mainly interested in why I can't get a shell the way I'd expect it to work. | You might want to look into ssh 's ProxyCommand configuration, which allows for this to work more seamlessly, and will work for shells, SFTP, tunnels, and anything else you might want to proxy via ssh. Let's say you have the following three hosts: workstation.example.com - This is the machine you're physically working on proxy.example.com - This is the machine you're routing your SSH traffic through endpoint.example.com - This is where you want the traffic to ultimately end up In ~/.ssh/config on workstation , add the following: Host endpoint User EndpointUser # set this to the username on the destination host HostName endpoint.example.com ProxyCommand ssh [email protected] nc %h %p 2> /dev/null On the proxy host, make sure nc (netcat) is installed. Then, on workstation , you can ssh endpoint or sftp endpoint and you will be transparently proxied to the machine by way of your proxy host. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147785/"
]
} |
333,057 | I am having issues with Seagate Laptop SSHD 1TB, PN: ST1000LM014-1EJ164-SSHD-8GB . dmesg | grep ata1: says this: [ 1.197516] ata1: SATA max UDMA/133 abar m2048@0xf7d36000 port 0xf7d36100 irq 31[ 6.548436] ata1: link is slow to respond, please be patient (ready=0)[ 11.232622] ata1: COMRESET failed (errno=-16)[ 16.588832] ata1: link is slow to respond, please be patient (ready=0)[ 21.269019] ata1: COMRESET failed (errno=-16)[ 26.621223] ata1: link is slow to respond, please be patient (ready=0)[ 56.322386] ata1: COMRESET failed (errno=-16)[ 56.322449] ata1: limiting SATA link speed to 3.0 Gbps[ 61.374591] ata1: COMRESET failed (errno=-16)[ 61.374651] ata1: reset failed, giving up Further, I don't see the drive in GParted. Does this mean this drive is dead or semi-dead? | Since the issue is with the link, rather than an actual error reported by the drive itself, technically it means that either the SATA port, or the SATA cable, or the drive is having issues. In all likelihood though the drive is dead. (But try another cable if you have one!) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
333,061 | I just finished installing Debian 8 (Jessie) and tried to make a directory in lib/firmware , because there was a file missing ( rtl8723befw.bin ) in the installation, and it says mkdir: cannot create directory `rtlwifi`: Permission denied I tried putting sudo on the front, but then it returns: bash: sudo: command not found When trying to install sudo with apt-get install sudo or even apt-get update it returns: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)E: Unable to lock the administration directory (/var/lib/dpkg/), are you root? I am really at a loss of what to do. All the solutions that I seem to find for the latest error is to use sudo, but I don't even have that. | If you do not have sudo installed, you will need to actually become root. Use su - and provide the root user's password (not your password) when asked. Once you have become root, you can then apt-get install sudo , log out of the root shell, and actually use sudo as you are trying to, now that it will have been installed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/333061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207390/"
]
} |
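Put together, the whole sequence on a stock Debian install usually looks like this (a sketch; youruser stands for the regular account name, and on Debian the admin group used by the default sudoers file is called sudo):

su -                      # become root with the root password
apt-get update
apt-get install sudo
adduser youruser sudo     # or: usermod -aG sudo youruser
exit

The group change only takes effect after youruser logs out and back in.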
333,097 | I first thought about SED ( sed "s/^/COUNTER \&/" /tmp/1.tex ) but it is designed for a single line, and I cannot increment the counter itself by sed so thinking now awk because I have great experiences with gawk in integrated approaches. Data What & South Dragon & North Dragon & 5 \\ \hlineWhat & South Dragon & North Dragon & 5 \\ \hlineWhat & South Dragon & North Dragon & 5 \\ \hline Expected output 1 & What & South Dragon & North Dragon & 5 \\ \hline2 & What & South Dragon & North Dragon & 5 \\ \hline3 & What & South Dragon & North Dragon & 5 \\ \hline OS: Debian 8.5 | nl is a utility to number the lines of a file.Usage: nl /path/to/file In your specific case: $ nl -s ' & ' input.txt 1 & What & South Dragon & North Dragon & 5 \\ \hline 2 & What & South Dragon & North Dragon & 5 \\ \hline 3 & What & South Dragon & North Dragon & 5 \\ \hline | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
333,107 | I have been using zsh 5.3 for sometime now. I finally managed to have a prompt that I like (see my own answer) - Now I want to add battery status to my zsh prompt on the right-corner so I always know how much battery is remaining . How do I do it ? Update - I saw Paul H.'s comment and saw the stackoverflow answers and saw https://stackoverflow.com/a/34913418 . I like that one EXCEPT that one shows the battery in left and I want it to show on the right. Any ideas ? | nl is a utility to number the lines of a file.Usage: nl /path/to/file In your specific case: $ nl -s ' & ' input.txt 1 & What & South Dragon & North Dragon & 5 \\ \hline 2 & What & South Dragon & North Dragon & 5 \\ \hline 3 & What & South Dragon & North Dragon & 5 \\ \hline | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
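For the right-hand side specifically, zsh's RPROMPT is the usual place. A minimal sketch, assuming a Linux laptop that exposes /sys/class/power_supply/BAT0 (the battery name can differ) and that prompt_subst is enabled:

battery_pct() {
    local f=/sys/class/power_supply/BAT0/capacity
    [[ -r $f ]] && print -n "$(<$f)%%"
}
setopt prompt_subst
RPROMPT='$(battery_pct)'

The doubled %% is needed because the function's output goes through prompt expansion, where a single % introduces an escape.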
333,121 | I use the following command to recursively search multiple files and find the line number in each file in which the string is found. grep -nr "the_string" /media/slowly/DATA/lots_of_files > output.txt The output is as follows: /media/slowly/DATA/lots_of_files/lots_of_files/file_3.txt:3:the_string /media/slowly/DATA/lots_of_files/lots_of_files/file_7.txt:6:the_string is in this sentence. /media/slowly/DATA/lots_of_files/lots_of_files/file_7.txt:9:the_string is in this sentence too. As shown above, the output includes the filename, line number and all the text in that line including the string. I have also figured out how to print just the specific lines of a files containing the string using the following command: sed '3!d' /media/slowly/DATA/lots_of_files/lots_of_files/file_3.txt > print.txt sed '6!d' /media/slowly/DATA/lots_of_files/lots_of_files/file_7.txt >> print.txt sed '9!d' /media/slowly/DATA/lots_of_files/lots_of_files/file_7.txt >> print.txt I created the above commands manually by reading the line numbers and filenames Here's my question. Q1a Is there a way to combine both steps into one command? I'm thinking piping the line number and the filename into sed and printing the line. I'm having a problem with the order in which the grep output is generated. Q1b Same as above but also print the 2 lines before and 2 lines after the line containing the string (total of 5 lines)? I'm thinking piping the line number and the filename into sed and printing all the required lines somehow. Big thanks. | If I am understanding the question correctly, you can accomplish this with one grep command. For Q1a, your grep output can suppress the filename using -h , e.g.: grep -hnr "the_string" /media/slowly/DATA/lots_of_files > output.txt For Q1b, your grep output can include lines preceding and following matched lines using -A and -B , e.g.: grep -hnr -A2 -B2 "the_string" /media/slowly/DATA/lots_of_files > output.txt The output will contain a separator between matches, which you can suppress with --no-group-separator , e.g.: grep -hnr -A2 -B2 --no-group-separator "the_string" /media/slowly/DATA/lots_of_files > output.txt Note that the output uses a different delimiter for matching lines ( : ) and context lines ( - ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148402/"
]
} |
333,144 | I'm trying to copy the contents of a failing USB thumb drive. If I read the data too fast, the drive's controller chip overheats and the drive vanishes from the system. When that happens, I need to unplug the drive, wait a minute or so for it to cool, plug it back in, and re-start the copy. I've got an old backup of the contents of the drive, so the obvious way to get the rest of the data is to use rsync to bring the backup up to date, but this runs into the whole "read too fast, the drive vanishes, and I need to start over" issue. Is there a way to tell rsync to only read X megabytes of data per minute? Alternatively, is it possible to tell it to suspend operations when the drive vanishes, and resume when it gets plugged back in? | Unlike DopeGhoti's experience, the --bwlimit flag does limit data transfer, with my rsync (v3.1.2). test: $ dd if=/dev/urandom bs=1M count=10 of=data10+0 records in10+0 records out10485760 bytes (10 MB, 10 MiB) copied, 0.0871822 s, 120 MB/s$ du -h data10M data$ time rsync -q data fast0.065 seconds$ time rsync -q --bwlimit=1M data slow10.004 seconds (note: my time output looks different to most time invocations ( zsh feature), those times weren't edited by me) Else, perhaps something along the lines of a double -exec in find . I believe that rsync -R should create & copy the parent folders, but if it doesn't, then cp --parents should. $ find /failing/usb -exec rsync -R {} /somewhere/safe/ \; -exec sleep 1 \; Note : also check out ddrescue , it might be right what you're looking for :) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/333144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62988/"
]
} |
333,186 | I am running in an interactive bash session. I have created some file descriptors, using exec, and I would like to list what is the current status of my bash session. Is there a way to list the currently open file descriptors? | Yes, this will list all open file descriptors:

$ ls -l /proc/$$/fd
total 0
lrwx------ 1 isaac isaac 64 Dec 28 00:56 0 -> /dev/pts/6
lrwx------ 1 isaac isaac 64 Dec 28 00:56 1 -> /dev/pts/6
lrwx------ 1 isaac isaac 64 Dec 28 00:56 2 -> /dev/pts/6
lrwx------ 1 isaac isaac 64 Dec 28 00:56 255 -> /dev/pts/6
l-wx------ 1 isaac isaac 64 Dec 28 00:56 4 -> /home/isaac/testfile.txt

Of course, as usual: 0 is stdin, 1 is stdout and 2 is stderr. The 4th is an open file (to write) in this case. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/333186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39807/"
]
} |
333,190 | I am using the following line at the beginning of a bash shell script: IFS=':#:' But it is not separating fields with :#:, only with colon. What is the issue? EDIT: This is my data in text file: f:#:0c:#:Test Cs:#:test Sctype:#:0a:#:test At:#:10:02:03r:#:test rf:#:0c:#:Test C1s:#:test S1ctype:#:1a:#:test A1t:#:00:02:22r:#:test rf:#:20c:#:Test Cs:#:test Sctype:#:2a:#:test A1t:#:00:02:03r:#:test r ... and I am reading it using the following code: IFS=':#:' while read -r key value; do .....done < "$FileName" | Yes, this will list all open file descriptors: $ ls -l /proc/$$/fdtotal 0lrwx------ 1 isaac isaac 64 Dec 28 00:56 0 -> /dev/pts/6lrwx------ 1 isaac isaac 64 Dec 28 00:56 1 -> /dev/pts/6lrwx------ 1 isaac isaac 64 Dec 28 00:56 2 -> /dev/pts/6lrwx------ 1 isaac isaac 64 Dec 28 00:56 255 -> /dev/pts/6l-wx------ 1 isaac isaac 64 Dec 28 00:56 4 -> /home/isaac/testfile.txt Of course, as usual: 0 is stdin, 1 is stdout and 2 is stderr. The 4th is an open file (to write) in this case. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/333190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125463/"
]
} |
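On the IFS part of this question, the key point is that IFS is a set of single characters, not a multi-character string, so IFS=':#:' splits on ':' and '#' individually. A sketch of a common workaround is to read whole lines and cut them on the literal ':#:' with parameter expansion:

while IFS= read -r line; do
    key=${line%%:#:*}      # everything before the first :#:
    value=${line#*:#:}     # everything after the first :#:
    # use "$key" and "$value" here
done < "$FileName"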
333,198 | I would like to save the stderr stream of a command into a log file but I also want to display the whole output (stdout + stderr) on the screen. How can I do this? I only found the solution to display stdout + stderr to the console and redirect both streams to a file as well: foo | tee output.file ( https://stackoverflow.com/questions/418896/how-to-redirect-output-to-a-file-and-stdout ) But I only want to redirect stderr to the log file. | With a recent bash, you can use process substitution. foo 2> >(tee stderr.txt) This just sends stderr to a program running tee. More portably exec 3>&1 foo 2>&1 >&3 | tee stderr.txt This makes file descriptor 3 be a copy of the current stdout (i.e. the screen), then sets up the pipe and runs foo 2>&1 >&3 . This sends the stderr of foo to the same place as the current stdout, which is the pipe, then sends the stdout to fd 3, the original output. The pipe feeds the original stderr of foo to tee, which saves it in a file and sends it to the screen. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/203835/"
]
} |
333,225 | https://www.centos.org/docs/5/html/5.2/Deployment_Guide/s3-proc-self.html says The /proc/self/ directory is a link to the currently running process. There are always multiple processes running concurrently, so which process is "the currently running process"? Does "the currently running process" have anything to do with which process is currently running on the CPU, considering context switching? Does "the currently running process" have nothing to do with foreground and background processes? | This has nothing to do with foreground and background processes; it only has to do with the currently running process. When the kernel has to answer the question “What does /proc/self point to?”, it simply picks the currently-scheduled pid , i.e. the currently running process (on the current logical CPU). The effect is that /proc/self always points to the asking program's pid; if you run ls -l /proc/self you'll see ls 's pid, if you write code which uses /proc/self that code will see its own pid, etc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/333225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
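A quick way to see this from a shell (sketch):

readlink /proc/self   # prints the pid of this readlink process
echo $$               # prints the shell's own pid, a different number

Each invocation reports a different pid because each command is its own "currently running process" at the moment it asks.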
333,241 | I'd like to define a function that is called, whenever a shell-user types a command that does not exist. In my case I'd like to log the errors and try alternative commands. currently, when typing e.g. dgfgsdjagfghsdg the error zsh: command not found: dgfgsdjagfghsdg is shown. Is there a way to define a function, that get the typed command (+ arguments) as a parameter? | Yes. In the Z shell it is a function named command_not_found_handler . In the Bourne Again shell it is a function named command_not_found_handle . Further reading Intercept "command not found" error in zsh how to locally redefine 'command_not_found_handle'? (2x) zsh: command not found No command 'bla' found, did you mean:? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103727/"
]
} |
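A minimal zsh sketch matching the use case in the question (log the failed command, then fail the usual way); the log path is arbitrary:

command_not_found_handler() {
    # $1 is the missing command, the remaining arguments follow it
    print -r -- "$(date '+%F %T') $*" >> ~/.missing-commands.log
    print -u2 "zsh: command not found: $1"
    return 127
}

Returning 127 keeps the conventional "command not found" exit status; the bash equivalent is the same body under the name command_not_found_handle.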
333,254 | Since I installed EasyTag on my Arch Linux several other programs use EasyTag instead of Nautilus as filebrowser. For example, Firefox starts EasyTag if I click on "open containing folder". Where can I set Nautilus as my "standard file browser"? | You can define the default file browser by editing the file ~/.local/share/applications/mimeapps.list . Open this file and change the line inode/directory as follow inode/directory=nautilus.desktop; If this doesn't work, you should change the filemanager in the file /usr/share/applications/mimeinfo.cache by adding (or updating) this line inode/directory=nautilus.desktop | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148745/"
]
} |
333,296 | I am debugging some printing issues on a small LAN, and although I'm fairly sure the issues I'm facing are not related to cups itself, I have been tinkering with the printing protocols that both CUPS and my printers (Konica Minolta Bizhub C224E and C3350) understand. That made me wonder: is it just a matter of knowing which protocols your printers support, or is there any hierarchy between them? From the extensive reading I did, I seem to be able to deduce that LPD is fairly old and IPP(14) the 'new kid on the block', but does this new protocol offer real benefits or not? | Thx to @RuiFRibeiro I found some resources on the AskUbuntu site and one of them was pointing to an obsolete cups.org FAQ, which led me to a link that I had missed before: https://www.cups.org/doc/network.html . This page lists the most important differences: AppSocket Protocol The AppSocket protocol (sometimes also called the JetDirect protocol, owing to its origins with the HP JetDirect network interfaces) is the simplest, fastest, and generally the most reliable network protocol used for printers. AppSocket printing normally happens over port 9100 and uses the socket URI scheme: socket://ip-address-or-hostname Internet Printing Protocol (IPP) IPP is the only protocol that CUPS supports natively and is supported by most network printers and print servers. IPP printing normally happens over port 631 and uses the http (Windows), ipp, and ipps URI schemes: http://ip-address-or-hostname:port-number/resourceipp://ip-address-or-hostname:port-number/resourceipps://ip-address-or-hostname:port-number/resource Line Printer Daemon (LPD) Protocol LPD is the original network printing protocol and is supported by many network printers. Due to limitations in the LPD protocol, we do not recommend using it if the printer or server supports one of the other protocols. LPD printing normally happens over port 515 and uses the lpd URI scheme: lpd://ip-address-or-hostname/queue | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58176/"
]
} |
333,348 | I have my SSH client configured to multiplex my sessions: Host *ControlMaster autoControlPath ~/.ssh/sockets/%r@%h-%pControlPersist 600 I'm occasionally bumping into the default OpenSSH server side MaxSessions limit of 10. The obvious answer is to increase MaxSessions to a number significantly larger than I'd ever need. Is there a reason to not just set it to 1000000? The default of 10 suggests this is some reason not to. All I can come up with is perhaps past 10 or so, busy connections might be less efficient, but seeing as the harm would be limited to myself, I'm not sure this is the reason. | There is always a reason why to limit anything. The 10 is "sane default". The less is for more restrictive use cases (preventing shell access or allowing only single channel), bumping it to more can also make a sense, if you really know, you will be issuing millions of sessions. I rarely open more than 4. To the question: Is there a reason to not just set it to 1000000? max_sessions variable has int type, so the maximum possible value is 2147483647 . Nothing prevents you setting up your ideal million. ... but as already mentioned, there is no good reason to do that. There is no significant security effect in using more sessions (once single session of attacker is opened, you are screwed), but there might be performance penalty when using more of them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56657/"
]
} |
333,368 | After the latest upgrade on Debian stretch, hitting alt+shift on my keyboard make it change layout, which breaks all my alt+shift+<anything> xbindkeys shortcuts. I have disabled all shortcuts in Settings -> Keyboard -> Input. Still the same. In Settings -> Languages, it is said that this alt+shift behaviour can be tweaked in.. Settings -> Keyboard. But alt+shift seems to be set nowhere there. Is it hardcoded? Is there a way xbindkeys can work around this? | Okay, got it: this line in my /etc/default/keyboard XKBOPTIONS="grp:alt_shift_toggle,grp_led:scroll" .. should not contain grp:alt_shift_toggle , which is the relevant xkb option according to this post . In addition, Gnome overrides xkb options according to this other post . As a consequence, this output: $ dconf read /org/gnome/desktop/input-sources/xkb-options['grp:alt_shift_toggle','grp_led:scroll'] .. should not read grp:alt_shift_toggle on my machine either. So after I ran: dconf write /org/gnome/desktop/input-sources/xkb-options "['grp_led:scroll']" I got my good'ol behaviour back ;) I have filed this as a bug to Gnome. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/333368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87656/"
]
} |
333,373 | When I issue ps aux | grep mtp I get ubuntu-+ 15934 0.1 0.0 519848 7068 ? Sl 21:13 0:00 /usr/lib/gvfs/gvfsd-mtp --spawner :1.9 /org/gtk/gvfs/exec_spaw/20 So the PID in this case is 15934. But every new time this is run the PID is different. Is there any other way to kill a process other than by PID? | Probably there is a parent process which kills child processes and forks new children. You can use pstree to find the parent process: pgrep mtp | xargs -i pstree -ps {} Or alternatively you can use the ppid option of ps: pgrep mtp | while read line; do ps -p $line -o ppid; done Then you can kill the parent process | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/203712/"
]
} |
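If the aim is simply "kill by name instead of by pid", pgrep's companion pkill does that in one step (sketch; -f matches against the full command line, which matters here because the actual process name is gvfsd-mtp rather than mtp):

pkill -f gvfsd-mtp        # sends SIGTERM
pkill -KILL -f gvfsd-mtp  # only if it will not exit otherwise

If a parent keeps respawning it, the parent found with the pstree command above is still the one to stop.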
333,385 | I am preparing a old HDD for use with my Pi. But it has >36 bad sectors (avoce threshold). So I ran badblocks to investigate which sectors are affected and saved them to a file. The file now contains basically a list of all affected sectors separated by linebreaks. How can I use this information now with mkfs.ext4 so it won't allocate data blocks at those addresses? | Probably there is a parent process which kills child processes and forks new children. You can use pstree to find the parent process: pgrep mtp | xargs -i pstree -ps {} Or alternatively you can use the ppid option of ps: pgrep mtp | while read line; do ps -p $line -o ppid; done Then you can kill the parent process | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132968/"
]
} |
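On the mkfs.ext4 side of the question, e2fsprogs can take a bad-block list from a file with -l, or scan the device itself with -c (sketch; /dev/sdX1 stands for the partition being formatted, and a list made by badblocks is only valid if badblocks was run with the same block size the filesystem will use):

mkfs.ext4 -l badblocks.txt /dev/sdX1   # use an existing badblocks list
mkfs.ext4 -c /dev/sdX1                 # or let mkfs run a read-only badblocks scan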
333,548 | I need to pass as a program argument a parameter expansion. The expansion results in a filename with spaces. Therefore, I double-quote it to have the filename as a single word: "$var" . As long as $var contains a filename, the program gets a single-word argument and it works fine. However, at times the expansion results in an empty string, which when passed as argument, breaks the program (which I cannot change). Not removing the empty string is the specified behavior, according to Bash Reference Manual: If a parameter with no value is expanded within double quotes, a null argument results and is retained. But then, how do I manage the case where I need to quote variables, but also need to discard an empty string expansion? EDIT: Thanks to George Vasiliou, I see that a detail is missing in my question (just tried to keep it short :) ). Running the program is a long java call, which abbreviated looks like this: java -cp /etc/etc MyClass param1 param2 "$var" param4 Indeed, using an if statement like that described by George would solve the problem. But it would require one call with "$var" in the then clause and another without "$var" in the else clause. To avoid the repetition, I wanted to see if there is a way to use a single call that discards the expansion of "$var" when it is empty. | The ${parameter:+word} parameter expansion form seems to do the job ( xyz=2; set -- ${xyz:+"$xyz"}; echo $# )1( xyz=; set -- ${xyz:+"$xyz"}; echo $# )0( unset xyz; set -- ${xyz:+"$xyz"}; echo $# )0 So that should translate to program ${var:+"$var"} in your case | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146180/"
]
} |
333,561 | I am trying to compose a grep statement and it is killing me. I am also tired of getting the arguments list too long error. I have a file, let's call it subset.txt . It contains hundreds of lines with specific strings such as MO43312948 . In my object directory I have thousands of files and I need to copy all the files that contain the strings listed in subset.txt into another directory. I was trying to start with this to just return the matching files from the objects directory. grep -F "$(subset.txt)" /objects/* I keep getting `bash: /bin/grep: Argument list too long`` | You can pass a directory as a target to grep with -R and a file of input patterns with -f : -f FILE, --file=FILE Obtain patterns from FILE, one per line. If this option is used multiple times or is combined with the -e (--regexp) option, search for all patterns given. The empty file contains zero patterns, and therefore matches nothing. -R, --dereference-recursive Read all files under each directory, recursively. Follow all symbolic links, unlike -r. So, you're looking for: grep -Ff subset.txt -r objects/ You can get the list of matching files with: grep -Flf subset.txt -r objects/ So, if your final list isn't too long, you can just do: mv $(grep -Flf subset.txt -r objects/) new_dir/ If that returns an argument list too long error, use: grep -Flf subset.txt -r objects/ | xargs -I{} mv {} bar/ And if your file names can contain spaces or other strange characters, use (assuming GNU grep ): grep -FZlf subset.txt -r objects/ | xargs -0I{} mv {} bar/ Finally, if you want to exclude binary files, use: grep -IFZlf subset.txt -r objects/ | xargs -0I{} mv {} bar/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207748/"
]
} |
333,573 | How do I truncate column "test10" to 5 characters from Unix command line? From this test1,test2,test3,test4,test10,test11,test12,test17rh,mbn,ccc,khj,ee3 eeeeeEeee ee$eeee e.eeeee2eeeee5eeeeeeee,a2,3,uhyt,bb,mb,khj,R ee3ee eeEeee ee$eeee e.eeeee2eeeee5eeeeeeee,a,5,rmbn,htr,ccc,fdf,F1ee eeeeEeee ee$eeee e.eeeee2eeeee5eeeeeeee,a,e,r To this test1,test2,test3,test4,test10,test11,test12,test17rh,mbn,ccc,khj,ee3 e,a2,3,uhyt,bb,mb,khj,R ee3,a,5,rmbn,htr,ccc,fdf,F1ee ,a,e,r | If your file really is as simple as your example, you can do one of: awk $ awk -F, -vOFS=, 'NR>1{$5=substr($5,1,5)}1' file test1,test2,test3,test4,test10,test11,test12,test17rh,mbn,ccc,khj,ee3 e,a2,3,uhyt,bb,mb,khj,R ee3,a,5,rmbn,htr,ccc,fdf,F1ee ,a,e,r Explanation The -F, sets the input field separator to , and the -vOFS=, sets the variable OFS (the output field separator) to , . NR is the current line number, so the script above will change the 5th field to a 5-character substring of itself. The lone 1 is awk shorthand for "print this line". perl $ perl -F, -lane '$F[4]=~s/(.{5}).*/$1/ if $.>1; print join ",", @F' file test1,test2,test3,test4,test10,test11,test12,test17rh,mbn,ccc,khj,ee3 e,a2,3,uhyt,bb,mb,khj,R ee3,a,5,rmbn,htr,ccc,fdf,F1ee ,a,e,r Explanation The -a makes perl act like awk and split its input lines on the character given by -F and saves them as elements of the array @F . We then remove all but the 1st 5 characters of the 5th field (they start counting at 0 ) and then print the resulting @F array joined with commas. sed $ sed -E '1!s/(([^,]+,){4}[^,]{5,5})[^,]*,/\1,/' filetest1,test2,test3,test4,test10,test11,test12,test17rh,mbn,ccc,khj,ee3 e,a2,3,uhyt,bb,mb,khj,R ee3,a,5,rmbn,htr,ccc,fdf,F1ee ,a,e,r Explanation This is the substitution operator whose general format is s/original.replacement/ . The 1! means "don't do this for the 1st line". The regular expression matches a set of non- , followed by a , 4 times ( ([^,]+,){4} ), then any 5 non- , characters ( [^,]{5} )—these are the 1st 5 of the 5th field—and then anything else until the end of the field ( [^,]+, ). All this is replaced with the first part of the line, effectively truncating the field. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333573",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207756/"
]
} |
333,598 | Is there any way to retrieve the UID/GID of a running process? Currently, the only way I know is looking it up in htop. But I don't want to depend on a third-party tool, and prefer to use builtin unix commands. Could you suggest a few useful oneliners? This didn't satisfy my curiosity: How to programmatically retrieve the GID of a running process. top shows only the user but not the group. | $ stat -c "%u %g" /proc/$pid/
1000 1000

or

$ egrep "^(U|G)id" /proc/$pid/status
Uid:    1000    1000    1000    1000
Gid:    1000    1000    1000    1000

or with only bash builtins:

$ while read -r line; do [ "${line:1:2}" = "id" ] && echo $line; done < /proc/17359/status
Pid: 17359
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191967/"
]
} |
333,636 | This bash file running on Mac terminal failed to change the directory. Rather reporting it does not exist when it actually does. Any thing I did wrong? #!/usr/bin/env bashset -eread nameAPPLICATION_PATH="~/Documents/meteor/apps/$name"cd "${APPLICATION_PATH}" | There are two points: Problems with tilde expansion Problems with sourcing vs. executing. For the tilde part, a very recent question at superuser was about the same issue ( https://superuser.com/questions/1161493/why-bash-script-wont-extend-bashrc/1161496#1161496 ) The tilde is expanded before the variable, so the cd cannot find the path. To overcome this, lead the command with eval as such: eval cd "${APPLICATION_PATH}" Unfortunately, when you execute the script (I mean, if it is chmod'ed to "+x", calling the path), you will see that the $PWD does not change in the "current shell". However if you add such a line at the end of the script ls You will see that, ls is executed at the new working directory. How come? The answer is here ( https://superuser.com/questions/176783/what-is-the-difference-between-executing-a-bash-script-and-sourcing-a-bash-scrip#176788 ) Short answer: sourcing will run the commands in the current shell process. executing will run the commands in a new shell process. still confused? then please continue reading the long answer. Shortly, to change the $PWD at current shell, you should "source" the script as such: source /path/to/script or . /path/to/script A third point: If you don't want to mess with source or . , you can define an alias in your ~/.bashrc ( https://stackoverflow.com/questions/752525/run-bash-script-as-source-without-source-command ): alias mycmd="source mycmd.sh" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155011/"
]
} |
333,640 | I'm using an iptables firewall on my local Linux server. The log tells me that there is a continuous multicast on the network by my router (Fritz box). Is this a normal behaviour. Should I allow this traffic? [633912.348130] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2[634912.348130] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2[635037.322691] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2[635287.169456] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 | This is IGMP traffic, which unless you know why you'd want it, can be safely ignored. The 224.0.0.1 multicast subnet is defined as being for all hosts on the network segment. (See Notable IPv4 multicast addresses (Wikipedia) .) Protocol 2 (See List of IP protocol numbers (Wikipedia) ) is IGMP, the Internet Group Management Protocol (Wikipedia) . Essentially, your router is asking if there are any other multicast-capable routers on the subnet. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149305/"
]
} |
333,709 | I want to copy an entire file structure (with thousands of files and hundreds of directories), it's a hierarchy of directories and there are those node_modules directory that I want to exclude from the copying process. Is there a Unix command to copy from a directory and all of its files and sub-directories recursively with an option to say don't include the directories with the name <name> ? Something like : cp root/ rootCopy/ --except node_modules ? If not, is there a simple way to do that from the command line without writing a bash or something ? | You can try with rsync or tar command . See this or this post. From rsync man page --exclude=PATTERN exclude files matching PATTERN --exclude-from=FILE read exclude patterns from FILE rsync -avz --exclude 'dir*' source/ destination/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32990/"
]
} |
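Applied to the node_modules case from the question, that looks like (sketch):

rsync -a --exclude='node_modules' root/ rootCopy/

A bare pattern with no slash matches a directory of that name at any depth, so every node_modules subtree is skipped while the rest of the hierarchy is copied.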
333,728 | Security team of my organization told us to disable weak ciphers due to they issue weak keys. arcfour arcfour128 arcfour256 But I tried looking for these ciphers in ssh_config and sshd_config file but found them commented. grep arcfour *ssh_config:# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc Where else I should check to disable these ciphers from SSH? | If you have no explicit list of ciphers set in ssh_config using the Ciphers keyword, then the default value, according to man 5 ssh_config (client-side) and man 5 sshd_config (server-side), is: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128, [email protected],[email protected], [email protected], aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc, aes256-cbc,arcfour Note the presence of the arcfour ciphers. So you may have to explicitly set a more restrictive value for Ciphers . ssh -Q cipher from the client will tell you which schemes your client can support. Note that this list is not affected by the list of ciphers specified in ssh_config . Removing a cipher from ssh_config will not remove it from the output of ssh -Q cipher . Furthermore, using ssh with the -c option to explicitly specify a cipher will override the restricted list of ciphers that you set in ssh_config and possibly allow you to use a weak cipher. This is a feature that allows you to use your ssh client to communicate with obsolete SSH servers that do not support the newer stronger ciphers. nmap --script ssh2-enum-algos -sV -p <port> <host> will tell you which schemes your server supports. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/333728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19072/"
]
} |
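In practice that means setting an explicit Ciphers line. A sketch for /etc/ssh/sshd_config (and the matching client-side files) that keeps only CTR and GCM ciphers and drops the arcfour and CBC ones; adjust the list to what your clients support and reload sshd afterwards:

# /etc/ssh/sshd_config
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com

After restarting sshd, the nmap ssh2-enum-algos script from the answer is a convenient way to verify that the arcfour ciphers are no longer offered.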
333,743 | When I use less file1 file2 I get both files shown in the "less buffer viewer", but less file1 file2 | cat prints the content of both files appended to stdout. How does less know if it should show the "less buffer viewer" or produce output to stdout for a next command? What mechanism is used for doing this? | less prints text to stdout. stdout goes

to a terminal (/dev/tty?) and opens the default buffer viewer
through a pipe when piping it to another program using | ( less text | cut -d: -f1 )
to a file when redirecting it with > ( less text > tmp )

There is a C function called "isatty" which checks if the output is going to a tty (less 4.81, main.c, line 112). If so, it uses the buffer viewer, otherwise it behaves like cat. In bash you can use test (see man test):

-t FD    file descriptor FD is opened on a terminal
-p FILE  FILE exists and is a named pipe

Example:

[[ -t 1 ]] && \
    echo 'STDOUT is attached to TTY'
[[ -p /dev/stdout ]] && \
    echo 'STDOUT is attached to a pipe'
[[ ! -t 1 && ! -p /dev/stdout ]] && \
    echo 'STDOUT is attached to a redirection' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/333743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207872/"
]
} |
333,757 | I am currently trying to make an autonomous drone using the Robot Operating System ( ROS ). To do this, I have installed Raspbian Lite ( Jessie ) on a Raspberry Pi 3 and am currently using ROS Kinetic on it. Because it is Raspbian Lite , there were no window managers or desktop environments that came along with the installation. I decided to go with Openbox Window Manager and installed a terminal onto it for convenience. I can just call sudo startx , and the window manager opens up, which can be accessed by Ctrl + alt + F2 `. Now my question lies in the fact that I do not understand the process of creating new sessions within the system wide terminal. Is it called the system wide terminal to start with? What are these sessions, that I am invoking with the use of Ctrl + Shift + F ? Some of them accommodate display managers and some of them accommodate terminals , while I imagine, that a whole desktop environment can be accommodated too. Is there a man page that I can look into? | They are kernel virtual terminal devices , multiplexed onto the physical framebuffer and human-input devices by a terminal emulator program that is built into the kernel itself. To applications programs running on top of the kernel, they look like any other terminal devices, such as a serial terminal device . (They have a line discipline, but no modem control.) The system implements terminal login by dint of running a getty program (or equivalent) and a login program that accept user credentials and invoke login sessions . The X server program also needs to use the physical framebuffer and human-input devices. It needs to negotiate sharing them with the kernel terminal emulator. It does so by allocating one virtual terminal and telling the kernel to disconnect that from the kernel terminal emulator. Hence why it appears that the X server "runs" on a particular terminal. When the kernel terminal emulator sees the hotkey chord for switching to the allocated virtual terminal, it cedes control of the framebuffer and human-input devices to to the X server. When the X server sees the hotkey chord for switching to another virtual terminal, the X server cedes control back. These hotkey chords are not necessarily symmetrical. On one of my systems the hotkey chord implemented by the kernel terminal emulation program for switching to virtual terminal #2 is Alt + F2 whereas the hotkey chord implemented by the X server for the same action is Ctrl + Alt + F2 . When it comes to graphical login , a display manager handles starting up X servers with greeter programs. You're just starting an X server directly and not using a display manager, of course. Once the user credentials have been authenticated, a desktop manager displays a desktop environment , which comprises a set of X client applications of varying degrees of complexity. For complex desktop environments, there is a whole bunch of server programs interconnected via a desktop bus . (On one of my systems, the so-called "small and lightweight" GNOME Editor requires a D-BUS broker and nine other server programs to be running.) Some of those X client programs can be other terminal emulators, userspace ones, such as LXTerminal, Unicode RXVT, GNOME Terminal, Terminate, roxterm, evilvte, xterm, and so forth. These do not directly use physical framebuffer and human-input devices, and they make use of pseudo-terminal devices. 
Further reading https://superuser.com/a/723442/38062 https://unix.stackexchange.com/a/316279/5132 https://unix.stackexchange.com/a/194218/5132 https://unix.stackexchange.com/a/178807/5132 https://stackoverflow.com/a/39302351/340790 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166916/"
]
} |
333,766 | Is there any way to remove ^C when you hit CTRL + C in the shell included with Red Hat Enterprise Linux 6 ("Santiago")? I have permission to edit my own .bash_profile . | Edit (or create) your ~/.inputrc file. Add a line saying

set echo-control-characters Off

This will instruct the GNU Readline library (which Bash uses) to not output (echo) any control characters to the screen. The setting will be active in all new Bash sessions thereafter (and in any other utility that uses the Readline library). Notice that if your Unix system comes with a system-wide configuration file for the Readline library (usually /etc/inputrc), then your personal configuration file will need to include that file:

$include /etc/inputrc
set echo-control-characters Off

Another alternative is to make a personal copy of the system-wide configuration file and then modify that. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/333766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207888/"
]
} |
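The same Readline setting can also be tried out in an already-running bash session, without editing any file, using the bind builtin (this only affects the current shell):
# turn off echoing of control characters for this session
bind 'set echo-control-characters off'
# confirm the current value
bind -v | grep echo-control-characters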
333,853 | I have a Samsung 840 PRO Series SSD and want to update its firmware in order to find an alternative solution for this problem . I downloaded "Samsung Magician Software for Enterprise SSD" from this page because Samsung only offers magician for consumer SSDs for Windows. According to the top reply on this reddit post , it should work. However, upon trying to determine my SSD's ID, magician claims to have found no Samsung SSDs, even though the one and only storage medium currently attached to my laptop is the SSD mentioned in the first paragraph. # ./magician -L================================================================================================Samsung(R) SSD Magician DC Version 2.0Copyright (C) 2015 SAMSUNG Electronics Co. Ltd. All rights reserved.================================================================================================Magician is now configuring the environment for LSI MegaRAID SAS.Magician is now configuring the environment for LSI SAS IT/IR Controller.Magician is now configuring the environment for LSI SAS IT/IR2 Controller.Magician is now configuring the environment for LSI SAS IT/IR3 Controller.----------------------------------------------------------------------------| Disk | Model | Serial | Firmware | Capacity | Drive | Total Bytes || Number | | Number | | | Health | Written |----------------------------------------------------------------------------No Samsung SSD found! | Samsung is really, really weird and it took me many hours to figure this one out because it's absolutely counterintuitive. It turned out that I was right with my skepticism of an image provided by Samsung probably actually being suitable to boot from it. Putting the image they offer you on a thumb drive doesn't work. It's not that it's super fast and you don't notice the update happening like I first thought, it's just that that image isn't bootable which means that nothing happens. You have to mount that image, find a different image in it, and put that image onto your thumb drive. Because reasons, I guess. Step-by-Step Guide Check which firmware your SSD currently has via # hdparm -I /dev/sda . In my case it was Firmware Revision: DXM05B0Q . Visit this site and under "Firmware" → "Samsung SSD Firmware for Windows Users" download "840 PRO Firmware" which currently has the description "ISO DXM06B0Q". Mount the ISO file you just downloaded. From the mountpoint, copy isolinux/btdsk.img to a different location. I'll assume /tmp/btdsk.img for it. This step is actually necessary because root can't read that file but your normal user account can. Run sudo dd if=/tmp/btdsk.img of=/dev/sdb where /dev/sdb is your thumb drive. Go check whether it's /dev/sdb and make sure it's not mounted before you run the command! You will obviously lose the data stored on your thumb drive with this. Shut your computer down. Boot from the thumb drive. It takes a few seconds, then you'll see the slightly confusing message "Firmware is already updated onto this SSD!". The firmware version is printed above it. Press a key to continue. You're shown some kind of shell. I didn't figure out how to reboot the computer from there so I simply killed it via a hard reset. So if you can't figure it out either, just press the power button for 6 seconds. If you figured it out, leave a comment or edit this answer. Remove the thumb drive. Boot your OS. Run # hdparm -I /dev/sda , again, to verify the firmware has been updated. At the time of this writing, it says Firmware Revision: DXM06B0Q . 
Burn your thumb drive to get rid of that software. Alternatively, delete its contents and reuse it. Newer Firmware Versions ens mentioned in the comments that newer firmware images can directly be copied to the thumb drive via dd without prior extraction from a different image. I have not tested this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147785/"
]
} |
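Before the dd step in the guide above, it is worth double-checking which device node really is the thumb drive; a short sketch (the device names are examples only, and status=progress needs a reasonably recent GNU dd):
# identify the USB stick by size, model and transport
lsblk -o NAME,SIZE,MODEL,TRAN
# make sure none of its partitions are mounted
sudo umount /dev/sdb?* 2>/dev/null
# write the extracted image and flush caches
sudo dd if=/tmp/btdsk.img of=/dev/sdb bs=4M status=progress && sync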
333,862 | The naive approach is find dir1 dir2 dir3 -type d -name .git | xargs -I {} dirname {} , but it's too slow for me, because I have a lot of deep folder structures inside git repositories (at least I think that this is the reason). I've read that I can use prune to prevent find from recursing into directories once it has found something, but there are two things. I'm not sure how this works (I mean I don't understand what prune does although I've read the man page) and, second, it wouldn't work in my case, because it would prevent find from recursing into the .git folder but not into all other folders. So what I actually need is: for all subdirectories, check if they contain a .git folder, and if they do, stop searching in this filesystem branch and report the result. It would be perfect if this would also exclude any hidden directories from the search. | Okay, I'm still not totally sure how this works, but I've tested it and it works.
.
├── a
│   ├── .git
│   └── a
│       └── .git
└── b
    └── .git

6 directories, 0 files
% find . -type d -exec test -e '{}/.git' ';' -print -prune
./a
./b
I'm looking forward to making this faster. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/333862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53385/"
]
} |
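The question also asked to skip hidden directories during the search; one way to combine that with the prune trick from the answer above (a sketch, written with GNU find in mind):
find . \( -name '.?*' -prune \) -o \( -type d -exec test -e '{}/.git' ';' -print -prune \)
The first branch prunes any hidden directory (including .git itself), the second prints a directory and stops descending as soon as it contains a .git entry.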
333,867 | In the book "A guide to aix 3.2", it says that one may use the Korn Shell command set -f to "disable filename generation", but what does it mean? What happens with set -f ? | set -f is the portable (i.e. POSIX) way to disable filename expansion. When enabled (by default or with set +f ), filename expansion is an operation performed by the shell that replaces, when possible, command line arguments containing: occurrences of wildcards ( ? = any single character and * = any number of characters) ranges enclosed in square brackets (e.g. [a-z12] = any character from a to z , or 1 or 2 ) non matching lists (e.g. [^a-z] = any character not in the range a to z ) and character classes (e.g. [[:xdigit:]] = any character that can be used to represent an hexadecimal number) by the file names that match them. When disabled, these arguments are left unchanged. $ pwd/etc/samba$ echo *lmhosts smb.conf$ echo *o??smb.conf$ set -f$ echo **$ echo *o??*o?? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
333,961 | I have a text file in this format:
####################################
KEY2
VAL21
VAL22
VAL23
VAL24
####################################
KEY1
VAL11
VAL12
VAL13
VAL14
####################################
KEY3
VAL31
VAL32
VAL33
VAL34
I want to sort this file by the KEY line and keep the next 4 lines with it in the result, so the sorted result should be:
####################################
KEY1
VAL11
VAL12
VAL13
VAL14
####################################
KEY2
VAL21
VAL22
VAL23
VAL24
####################################
KEY3
VAL31
VAL32
VAL33
VAL34
Is there a way to do this? | msort(1) was designed to be able to sort files with multi-line records. It has an optional gui, as well as a normal and usable-for-humans command line version. (At least, humans that like to read manuals carefully and look for examples...) AFAICT, you can't use an arbitrary pattern for records, so this won't work directly unless your records are fixed-size (in bytes, not characters or lines). msort does have a -b option for records that are blocks of lines separated by blank lines. You can transform your input into a format that will work with -b pretty easily, by putting a blank line before every ###... (except the first one). By default, it prints statistics on stderr, so at least it's easy to tell when it didn't sort because it thought the entire input was a single record. msort works on your data. The sed command prepends a newline to every #+ line except for line 1. -w sorts the whole record (lexicographically). There are options for picking what part of a record to use as a key, but I didn't need them. I also left out stripping the extra newlines.
$ sed '2,$ s/^#\+/\n&/' unsorted.records | msort -b -w 2>/dev/null
####################################
KEY1
VAL11
VAL12
VAL13
VAL14
####################################
KEY2
VAL21
VAL22
VAL23
VAL24
####################################
KEY3
VAL31
VAL32
VAL33
VAL34
I didn't have any luck with -r '#' to use that as the record separator. It thought the whole file was one record. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11920/"
]
} |
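The answer above notes that it left the inserted blank lines in place; if the original record format is wanted back, any blank lines can be dropped again after sorting, e.g. (a small sketch reusing the same pipeline):
sed '2,$ s/^#\+/\n&/' unsorted.records | msort -b -w 2>/dev/null | sed '/^$/d'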
333,969 | When I request X forwarding from SSH server, then SSH server sets a $DISPLAY variable with value localhost:10.0 . In addition, it starts to listen on 127.0.0.1 port 6010 (and also ::1 port 6010 for IPv6): Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 127.0.0.1:6010 *:* users:(("sshd",pid=11405,fd=10)) How do X clients know that they will need to connect to TCP port 6010? Does this work in a way that by default they connect to TCP port 6000 + <display number> and as display number is in this example 10, then they will connect to TCP port 6010 ? | It’s part of the X11 protocol (search for "6000") and is documented e.g. in Xorg(1) : Xorg listens on port 6000+ n , where n is the display number. This connection type can be disabled with the -nolisten option (see the Xserver(1) man page for details). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
333,975 | I have to ask question similar to this . In scenarios where you're backing up directory with tar and new files/dirs are being added current files/dirs are being edited and deleted can you expect safe result? By safe result I mean something like: tar will not screw up something on the source dir/subdirs tar will add to archive as it found in the moment of building archieve success signal will be emitted even if described changes occurred | It’s part of the X11 protocol (search for "6000") and is documented e.g. in Xorg(1) : Xorg listens on port 6000+ n , where n is the display number. This connection type can be disabled with the -nolisten option (see the Xserver(1) man page for details). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/333975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68350/"
]
} |
334,083 | How can I configure i3 window manager to open new program (window) started in terminal on a specific workspace? | This is what you have to put in your ~/.i3/config file: For example you want Emacs always opened up in work-space 4 . assign [class="Emacs"] 4 How do you get the class info? Run xprop and click on the window you want to capture. For example while Emacs is running, using another terminal execute xprop and then click on the Emacs window. In the output you will find : WM_CLASS(STRING) = "emacs", "Emacs" The first string is the instance and the second one is the class . Finally restart i3 ( $mod+Shift+r ) for the changes to take place. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208105/"
]
} |
334,170 | On Windows I have frequently changed the priority of a games process to 'high' or 'realtime' to get a performance boost. This has never resulted in any problems with my hardware. I was thinking that maybe I could do this on Linux using the chrt command to change the realtime priority of the games process, as renice ing, even to -20 (the highest priority) doesn't seem to provide any noticeable boost. However, I am wary of doing this without knowing whether it might be bad for my CPU. Can anyone inform me on the risks? | Changing the priority of a process only determines how often this process will run when other processes are competing for CPU time. It has no impact when the process is the only one using CPU time. A minimum-priority process on an otherwise idle system gets 100% CPU time, same as a maximum-priority process. So you can run your game with a higher priority, but that won't make it run faster unless something else on the system is using a significant amount of CPU time. I recommend keeping the priority lower than the X server, because if the X server wants CPU time, it's likely to be because the game is asking it to display something complex, and display is usually a CPU-demanding task (but it depends how much of the work is done in the GPU — CPU priorities have no influence on the GPU). CPUs are designed to execute code. Changing process priorities won't affect how much work the CPU does, but even if it did, that wouldn't damage the CPU, it would only make it run hotter and so make the fans in the computer blow harder. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/334170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92090/"
]
} |
334,171 | With the risk of raising a question that might already have an answer, i would like to ask if anybody knows if and how is it possible to read man pages in my terminal for programs/apps that are not installed in my system using online sources/online tools. I made a stackexchange and google search but found nothing about this issue. For example man grep will raise the grep manual as expected.On the other hand man agrep will give an error since agrep is not installed.In order to read agrep manual i have to google agrep man pages , getting results like this : https://linux.die.net/man/1/agrep PS: BTW it seems strange to me that http://man7.org/linux/man-pages/dir_all_alphabetic.html do not provide agrep... I wonder if it is possible to run man agrep or similar command from my terminal (without installing agrep) and read agrep man pages in terminal as usual man works. I don't expect all man pages of the world to be locally available; i just wonder if there is any tricky way to use man (or even other command) to search and display man pages of not installed progs without having to open browser, type keyword, search in results etc. PS: As noticed by Kusalananda, raising web queries may lead to results not suitable for particular distro version (different versions result will pop up). So the best (in my dream) would be if there was a kind of Distro specific (Debian in my case) internal / built in command (or even a switch) that could retrieve online man pages specifically for my setup (i.e something like : man --online agrep or onlineman agrep ). It seems not. | You can use links : links -dump https://linux.die.net/man/1/agrep | less . Just change the category and name and you're good. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188385/"
]
} |
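The wished-for onlineman command can be approximated by wrapping the links -dump trick from the answer above in a small shell function (a sketch; the linux.die.net URL layout is assumed, and section 1 is the default):
# usage: onlineman <page> [section]
onlineman() {
    local page=$1 section=${2:-1}
    links -dump "https://linux.die.net/man/$section/$page" | less
}
onlineman agrep      # section 1
onlineman proc 5     # explicit section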
334,187 | Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)? I'm thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat memory and be slow, but I'm interested in adverse effects if the limit is configured much larger than necessary (e.g. memory consumed by just the configuration). | I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources. But given how absurdly much RAM modern systems have compared to systems 10 years ago, I think the defaults today are quite low. In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096 . Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000 . I've used an rlimit_nofile of 300,000 for certain applications. As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit. See also some related questions: https://serverfault.com/questions/356962/where-are-the-default-ulimit-values-set-linux-centos https://serverfault.com/questions/773609/how-do-ulimit-settings-impact-linux | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/334187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121284/"
]
} |
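A few commands that go with the answer above, for inspecting and raising the limit from a shell (the values are examples; the limits.conf lines assume pam_limits is in use, as on most distributions):
# current soft and hard limits for open files in this shell
ulimit -Sn
ulimit -Hn
# raise the soft limit up to the hard limit, for this shell only
ulimit -n "$(ulimit -Hn)"
# persistent per-user limits, e.g. in /etc/security/limits.conf:
#   mongodb  soft  nofile  64000
#   mongodb  hard  nofile  64000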
334,228 | I have a btrfs RAID1 system with the following state:
# btrfs filesystem show
Label: none  uuid: 975bdbb3-9a9c-4a72-ad67-6cda545fda5e
    Total devices 2 FS bytes used 1.65TiB
    devid 1 size 1.82TiB used 1.77TiB path /dev/sde1
    *** Some devices missing
The missing device is a disk drive that failed completely and which the OS could not recognize anymore. I removed the faulty disk and sent it for recycling. Now I have a new disk installed under /dev/sdd. Searching the web, I fail to find instructions for such a scenario (bad choice of search terms?). There are many examples of how to save a RAID system when the faulty disk still remains somewhat accessible by the OS. The btrfs replace command requires a source disk. I tried the following:
# btrfs replace start 2 /dev/sdd /mnt/brtfs-raid1-b
# btrfs replace status /mnt/brtfs-raid1-b
Never started
No error message, but the status indicates it never started. I cannot figure out what the problem with my attempt is. I am running Ubuntu 16.04 LTS Xenial Xerus, Linux kernel 4.4.0-57-generic. Update #1 Ok, when running the command in "non background mode (-B)", I see an error that did not show up before:
# btrfs replace start -B 2 /dev/sdd /mnt/brtfs-raid1-b
ERROR: ioctl(DEV_REPLACE_START) failed on "/mnt/brtfs-raid1-b": Read-only file system
/mnt/brtfs-raid1-b is mounted RO (Read Only). I have no choice; Btrfs does not allow me to mount the remaining disk as RW (Read Write). When I try to mount the disk RW, I get the following error in syslog:
BTRFS: missing devices(1) exceeds the limit(0), writeable mount is not allowed
When in RO mode, it seems I cannot do anything; cannot replace, nor add, nor delete a disk. But there is no way for me to mount the disk as RW. What option is left? It shouldn't be this complicated when a simple disk fails. The system should continue running RW and warn me of a failed drive. I should be able to insert a new disk and have the data recopied over it, while the applications remain unaware of the disk issue. That is a proper RAID. | Update: According to @mkudlacek, this problem has been fixed. For posterity, here is my answer as to why, in 2017, I could not rebuild a RAID with a missing drive. It turns out that this is a limitation of btrfs as of the beginning of 2017. To get the filesystem mounted rw again, one needs to patch the kernel. I have not tried it though. I am planning to move away from btrfs because of this; one should not have to patch a kernel to be able to replace a faulty disk. Click on the following links for details: Kernel patch here Full email thread Please leave a comment if you still suffer from this problem as of 2020 . I believe that people would like to know if this has been fixed or not. Update: I moved to good old mdadm and lvm and am very happy with my RAID10 4x4 Tb (8 Tb total space), as of 2020-10-20. It is proven, works well, not resource intensive and I have full trust in it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160335/"
]
} |
334,240 | I have the following bash string and I need to add a line break to it, before the 'Hello' string: bash -c "echo 'Hello' > /location/file" I already tried adding it with different variations of the \n syntax; Before the double quotes, inside the range of the double quotes, and with different variations of escaping. How could I add a line break just before the 'Hello' string, so to make it appear in the second row? | There are (at least) three options here. Use a literal newline: bash -c "echo 'Hello' > /location/file" Use printf (or the non-standard echo -e ), which expands backslash escaped characters as part of the commands themselves (of which both are shell builtins): bash -c "printf '\n%s\n' Hello > /location/file" Use bash's nonstandard $' quoting, which expands backslash escaped characters as part of the shell: bash -c "echo $'\nHello' > /location/file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
334,242 | I want to SSH into a server, start a screen session, cd into path/to/my/script/ , and run test.sh there. As a starter, I tried ssh [email protected] screen -dm bash -c 'cd path/to/my/script/; pwd > ~/output.txt' and expected to see path/to/my/script/ in output.txt , but I see my home directory there. This means the cd command doesn't really work, so bash won't be able to run test.sh . How may I solve this? | The short answer: add some extra quotes around the command, like this: ssh [email protected] "screen -dm bash -c 'cd path/to/my/script/; pwd > ~/output.txt'" To see what's going on, you can specify the -v option to ssh to obtain some debug information. In this case, you'll see a line like the following for the original command debug1: Sending command: screen -dm bash -c cd path/to/my/script/; pwd > ~/output.txt while the extra quotes change this into debug1: Sending command: screen -dm bash -c 'cd path/to/my/script/; pwd > ~/output.txt' So it appears that ssh just takes the arguments that were passed to it, concatenates them all, and lets the remote side split the concatenated argument list again into individual arguments. Calling the argument list argv (like in C), you've got something like the following in the original version: argv[0] = sshargv[1] = [email protected][2] = screenargv[3] = -dmargv[4] = bashargv[5] = -cargv[6] = cd path/to/my/script/; pwd > ~/output.txt Now in principle, it would have been possible for ssh to pass argv[2] to argv[6] as separate arguments to the other side, in which case it would probably have worked as expected. But as the debug line shows (and it also seems like this based on the source code), these arguments are concatenated to the string screen -dm bash -c cd path/to/my/script/; pwd > ~/output.txt which is then interpreted at the remote end. From this it's also clear why it doesn't do what you'd like: now you're executing two things in sequence, first screen -dm bash -c cd path/to/my/script/ (so a screen session is started in which only the directory is changed) is executed from the home directory, and then pwd > ~/output.txt is executed, also from the home directory. For completeness, the arguments for the command with the double quotes are argv[0] = sshargv[1] = [email protected][2] = screen -dm bash -c 'cd path/to/my/script/; pwd > ~/output.txt' causing screen -dm bash -c 'cd path/to/my/script/; pwd > ~/output.txt' to be sent to the other side (as shown by the debug line), which does work as intended. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/71888/"
]
} |
334,364 | Why does ls .* print out the contents of the hidden directories? I want to print just the hidden files, and now see that Show only Hidden Files is a solution to this, yet I sill want to understand why the contents of the directories are shown. The contents of further nested directories are not shown. Below is a partial output of ls .* in my home directory. .bash_history.bash_profile.bashrc.coin_history.emacs.gitconfig.gitignore_global.grasp_jss.ssh:config github_rsa.pub id_rsa.pub known_hosts.oldgithub_rsa id_rsa known_hosts lambda.pem.vim:colors ftdetect syntax This machine is running RHEL. Similar behavior observed on Mac OSX. | Short answer: shell glob expansion. The shell takes your input and expands the .* part before passing it to ls , so effectively you're doing: $ ls .bash_history .bash_profile .bashrc .coin_history .emacs ... So it lists each entry. When it sees a directory entry, it lists the contents of that directory, just as you would expect ls to do. To see only the files/directories in your working directory, use the -d option to ls : $ ls -d .* The -d option tells ls to "list directories themselves, not their contents" (taken from the ls man page). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167765/"
]
} |
334,382 | After starting a bash terminal, I noticed that the PATH variable contains duplicate entries. My terminal starts a login shell , so ~/.bash_profile is sourced, followed by ~/.profile and ~/.bashrc . Only in ~/.profile do I create the paths entries which are duplicated. To be pedantic, this is the order in which the files that SHOULD be sourced are being sourced: Sourced /etc/profileSourced /etc/bash.bashrcSourced .bash_profileSourced .profileSourced .bashrc Before anyone marks this as a duplicate of "PATH variable contains duplicates", keep reading. At first I thought this had to do with ~/.profile being sourced twice, so I had the file write to a log file whenever it was sourced, and surprisingly it only logged one entry, which tells me that it was only sourced once. Even more surprising is the fact that when I comment out the entries which were in ~/.profile , the entries still appear in the PATH variable. This has led me to three conclusions, one of which was quickly ruled out: Bash ignores valid bash comments and still executes the commented code There is a script which reads the ~/.profile and ignores any code that prints an output (the log file for example) There is another copy of my ~/.profile which is being sourced elsewhere The first one, I quickly concluded not to be the case due to some quick testing. The second and third options are where I need help with. How do I gather a list of scripts which are executed when my terminal starts up? I used echo in the files that I checked to know if they are sourced by bash, but I need to find a conclusive method which traces the execution up the point when the terminal is ready for me to start typing into it. If the above is not possible, then can anyone suggest where else I can look to see which scripts are being run . Future reference This is the script I now use for adding to my path: function add_to_path() { for path in ${2//:/ }; do if ! [[ "${!1}" =~ "${path%/}" ]]; then # ignore last / new_path="$path:${!1#:}" export "$1"="${new_path%:}" # remove trailing : fi done} I use it like this: add_to_path 'PATH' "/some/path/bin" The script checks if the path already exists in the variable before prepending it. For zsh users, you can use this equivalent: # prepends the given path(s) to the supplied PATH variable# ex. add_to_path 'PATH' "$(go env GOPATH)/bin"function add_to_path() { # (P)1 path is expanded from $1 # ##: Removes leading : local -x pth="${(P)1##:}" # (s.:.) splits the given variable at : for p in ${(s.:.)2}; do # %%/ Remove trailing / # :P Behaves similar to realpath(3) local p="${${p%%/}:P}" if [[ ! "$pth" =~ "$p" ]]; then pth="$p:$pth" fi done export "$1"="${pth%%:}"} Edit 28/8/2018 One more thing I found I could do with this script is to also fix the path. So at the start of my .bashrc file, I do something like this: _temp_path="$PATH"PATH='/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin'add_to_path 'PATH' "$_temp_path"unset _temp_path It is up to you what the PATH should start with. Examine PATH first to decide. | If your system has strace then you can list the files opened by the shell, for example using echo exit | strace bash -li |& grep '^open' ( -li means login shell interactive; use only -i for an interactive non-login shell.) This will show a list of files which the shell opened or tried to open. 
On my system, they are as follows: /etc/profile /etc/profile.d/* (various scripts in /etc/profile.d/ ) /home/<username>/.bash_profile (this fails, I have no such file) /home/<username>/.bash_login (this fails, I have no such file) /home/<username>/.profile /home/<username>/.bashrc /home/<username>/.bash_history (history of command lines; this is not a script) /usr/share/bash-completion/bash_completion /etc/bash_completion.d/* (various scripts providing autocompletion functionality) /etc/inputrc (defines key bindings; this is not a script) Use man strace for more information. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/334382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44793/"
]
} |
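One caveat with the strace recipe above: newer C libraries tend to issue openat() rather than open(), so grepping for '^open' alone can miss files; tracing both calls is more robust (a sketch):
echo exit | strace -e trace=open,openat bash -li |& grep -e '^open(' -e '^openat('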
334,384 | When executing a "long" listing of a soft link, ls -l displays the file attributes of the soft link. When executing ls -lL (or ls -l --dereference ), the file attributes are those of the file that the link points to, but ls still prints the name of the link itself. The man page doesn't say anything about this. The info page on ls just says that " ls still prints the name of the link itself, not the name of the file that the link points to.", without an explanation as to the reason why.I suppose this is a deliberate choice, but does anyone know the rationale behind this behavior of ls -L ? | Because the filename at the other end of the link is not (or might not be) the filename in the directory that you're accessing with ls . Two problems with displaying the name of the target of the symbolic link in place of the name of the link itself: The file does not exist. If the name of target of the symbolic link is displayed in the directory listing, you may be led to believe that this is the name of the file in that directory, but it isn't. The name of the target is not the name by which you access the file in that directory; the file with that filename does not exist (in that directory), or, in the worst case, it may be a totally different file (or a directory, or whatever) when accessed by that name. Files may appear to have identical names. If the name of the target of the symbolic link is displayed, then you may find that it's exactly identical to another filename in that directory, which can not be true on a Unix system. In these cases, it leads to confusion for the users, and they would have to verify the listing by using ls without -L , which would render the -L option pretty pointless. This ( not displaying the name of the target) is also the behaviour specified by POSIX, quite explicitly: Evaluate the file information and file type for all symbolic links (whether named on the command line or encountered in a file hierarchy) to be those of the file referenced by the link, and not the link itself; however, ls shall write the name of the link itself and not the file referenced by the link . When -L is used with -l , write the contents of symbolic links in the long format (see the STDOUT section). There is no further discussion about this in the Rationale section of the POSIX ls manual . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77711/"
]
} |
334,386 | My bluetooth headset works fine. Audio sink works. Everything works. But the problem is that I need to connect it manually to the PC: click on bluetooth icon near the time on kde taskbar => known devices => Connect # bluetoothctl => connect xx:xx:xx:xx:xx:xx It used to connect automatically just a month ago with standard debian installation/updates. I don't know why it got broken. I didn't install any bluetooth-related packages nor change bluetooth-related configuration files. What I did to fix my problem: purged and reinstalled all bluetooth-related packages (see below). added a line load-module module-switch-on-connect to /etc/pulse/default.pa as described here created a file /etc/bluetooth/audio.conf with a line AutoConnect=true and restarted bluetooth service afterwards, as said here script (from askubuntu) does not work. I get this message: Browsing 00:18:09:29:XX:XX ...Sink bluez_sink.00_18_09_29_XX_XX does not exist. How to make it able to connect to the bluetooth headset automatically when it goes online? I feel like the solution is easy. Debian 8.6, kde 4.14.2. Packages used: bluedevil , bluetooth , bluez , pulseaudio-module-bluetooth . | Normally your headset should try to connect to the last device it connected to automatically (most, if not all, do that). However, this may fail if your device is not a trusted device. The first thing to check is the log files: in Ubuntu they are under /var/log/syslog ; the file may have a different name under Debian... There I saw the error: Authentication attempt without agent A quick web search returned this page and all I needed to do was to add the device to the trusted devices. Run bluetoothctl and then enter trust XX:XX:XX:XX:XX:XX . Replace the X'es with the MAC address of your device. There is an example in the link I provided as well. You may have a different problem, but check your log files at least to see if your device is trying to connect. If it is trying to connect, you can also see some messages if you run bluetoothctl and wait. I kept seeing Connected: yes , no , yes , no ... messages all the time. It was being disconnected because it was not a trusted device. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/334386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122054/"
]
} |
334,415 | I've recently examined a RHEL7.2 that hung almost completely just because it had written to a CIFS filesystem. With the default settings of dirty_ratio = 30 and cifs being cached (for both reading and writing), these dirty pages were mostly cifs ones. Under memory pressure, when the system reclaimed most of the read cache, the system stubbornly tried to flush&reclaim the dirty (write) cache. So the situation was a huge CPU iowait accompanied by an excellent local disk I/O completion time, a lot of processes in D uninterruptible wait and a totally unresponsive system. The OOM killer never engaged, because there was free memory that the system wasn't giving out. (I think there is also a bug with CIFS, that crawled the flushing to incredibly slow speeds. But never mind that here.) I was flabbergasted to find out that the kernel treated flushing pages to some slow remote CIFS box in exactly the same way as to a super-fast local SSD drive. It's just insensible to have a single dirty_ratio bag; it quickly leads to the situation where 30% of RAM contains dirty data from the slowest devices. What a waste of money. The situation is reproducible; setting dirty_ratio = 1 solves the problem completely. But why do I need to sacrifice the cache of local disks just because I use a cifs mount? Other than completely disabling caching of some devices, or setting vm.dirty_ratio to a very low value, are there any ways to "whitelist" the fast devices to have more write cache? Or to have the slow devices (or remote "devices" like //cifs/paths) use less write cache? The kernel version for RHEL 7.2 is referred to as 3.10.0-327. (It is based on 3.10.0, but includes several years' worth of backports). | dirty_ratio per device Q: Are there any ways to "whitelist" the fast devices to have more write cache? Or to have the slow devices (or remote "devices" like //cifs/paths) use less write cache? There are some settings for this, but they are not as effective as you hoped for. See the bdi ("backing device") objects in sysfs : linux-4.18/Documentation/ABI/testing/sysfs-class-bdi min_ratio (read-write) Under normal circumstances each device is given a part of the total write-back cache that relates to its current average writeout speed in relation to the other devices. The 'min_ratio' parameter allows assigning a minimum percentage of the write-back cache to a particular device. For example, this is useful for providing a minimum QoS. max_ratio (read-write) Allows limiting a particular device to use not more than the given percentage of the write-back cache. This is useful in situations where we want to avoid one device taking all or most of the write-back cache. For example in case of an NFS mount that is prone to get stuck. The catch is "this setting only takes effect after we have more than (dirty_background_ratio+dirty_ratio)/2 dirty data in total. Because that is the amount of dirty data when we start to throttle processes. So if the device you'd like to limit is the only one which is currently written to, the limiting doesn't have a big effect." Further reading: LKML post by Jan Kara (2013). The "test case", at the end of this answer. commit 5fce25a9df48 in v2.6.24. "We allow violation of bdi limits if there is a lot of room on the system. Once we hit half the total limit we start enforcing bdi limits..." This is part of the same kernel release that added the internal per-device "limits". So the "limits" have always worked like this, except for pre-releases v2.6.24-rc1 and -rc2.
For simplicity, let us ignore your 30% setting and assume the defaults: dirty_background_ratio=10 and dirty_ratio=20. In this case, processes are allowed to dirty pages without any delays, until the system as a whole reaches the 15% point. Q: The situation is reproducible; setting dirty_ratio = 1 solves the problem completely. :-/ This sounds similar to the "pernicious USB-stick stall problem", which LWN.net wrote an article about. Unfortunately this particular article is misleading . It was so confused that it fabricated a different problem from the one that was reported. One possibility is that you are reproducing a more specific defect. If you can report it to kernel developers, they might be able to analyze it and find a solution. Like the interaction with transparent hugepages was solved . You would be expected to reproduce the problem using the upstream kernel. Or talk to your paid support contact :). Otherwise, there is a patch that can be applied to expose the internal strictlimit setting. This lets you change max_ratio into a strict limit. The patch has not been applied to mainline. If enough people show a need for this, the patch might get applied, or it might encourage some work to remove the need for it. My concern is that while potentially useful, the feature might not be sufficiently useful to justify its inclusion. So we'll end up addressing these issues by other means, then we're left maintaining this obsolete legacy feature. I'm thinking that unless someone can show that this is good and complete and sufficient for a "large enough" set of issues, I'll take a pass on the patch[1]. What do people think? [1] Actually, I'll stick it in -mm and maintain it, so next time someone reports an issue I can say "hey, try this". -- Andrew Morton, 2013 mm-add-strictlimit-knob-v2.patch is still sitting in -mm. A couple of times, people mentioned ideas about better auto-tuning the dirty cache. I haven't found a lot of work on it though. An appealing suggestion is to keep 5 seconds worth of write-back cache per device. However the speed of a device can change suddenly, e.g. depending whether the IO pattern is random or sequential. Analysis (but no conclusion) Q: I was flabbergasted to find out that kernel treated flushing pages to some slow remote CIFS box in exactly the same way as to super-fast local SSD drive. These are not treated exactly the same. See the quote from the BDI doc above. "Each device is given a part of the total write-back cache that relates to its current average writeout speed." However, this still makes it possible for the slow device to fill up the overall write-back cache, to somewhere between the 15-20% marks, if the slow device is the only one being written to. If you start writing to a device which has less than its allowed share of the maximum writeback cache, the "dirty throttling" code should make some allowances. This would let you use some of the remaining margin, and avoid having to wait for the slow device to make room for you. The doc suggests min_ratio and max_ratio settings were added in case your device speeds vary unpredictably, including stalling while an NFS server is unavailable. The problem is if the dirty throttling fails to control the slow device, and it manages to fill up to (or near) the 20% hard limit. The dirty throttling code that we're interested in was reshaped in v3.2. For an introduction, see the LWN.net article " IO-less dirty throttling ". Also, following the release, Fengguang Wu presented at LinuxCon Japan. 
His presentation slides are very detailed and informative. The goal was to delegate all writeback for a BDI to a dedicated thread, to allow a much better pattern of IO. But they also had to change to a less direct throttling system. At best, this makes the code harder to reason about. It has been well-tested, but I'm not sure that it covers every possible operating regime. In fact looking at v4.18, there is explicit fallback code for a more extreme version of your problem: when one BDI is completely non-responsive. It tries to make sure other BDI's can still make forward progress, but... they would be much more limited in how much writeback cache they can use. Performance would likely be degraded, even if there is only one writer. Q: Under memory pressure, when system reclaimed most of the read cache, system stubbornly tried to flush&reclaim the dirty (write) cache. So the situation was a huge CPU iowait accompanied with an excellent local disk I/O completion time, a lot of processes in D uninterruptible wait and a totally unresponsive system. OOM killer never engaged, because there was free memory that system wasn't giving out. (I think there is also a bug with CIFS, that crawled the flushing to incredibly slow speeds. But nevermind that here.) You mention your system was under memory pressure. This is one example of a case which could be very challenging. When "available" memory goes down, it can put pressure on the size of the write-back cache. "dirty_ratio" is actually a percentage of "available" memory, which means free memory + page cache . This case was noticed during the original work. There is an attempt to mitigate it. It says that "the new dirty limits are not going to avoid throttling the light dirtiers, but could limit their sleep time to 200ms." Test case for "max_ratio" Set up a VM / laptop / whatever, which does not have an expensively large amount of RAM. Run dd if=/dev/zero bs=1M of=~/test , and watch the write cache with grep -E '^(Dirty:|Writeback:)' /proc/meminfo . You should see dirty+writeback settle around a "set point". The set point is 17.5%, half-way between 15% and 20%. My results on Linux v4.18 are here . If you want to see an exact percentage, be aware that the ratios are not a percentage of total RAM; I suggest you use the tracepoint in dirty_balance_pages(). I ran this test with different values of max_ratio in the filesystem's BDI. As expected, it was not possible to limit the write-back cache below the 15% point. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26411/"
]
} |
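For reference, a sketch of setting the per-device knobs mentioned in the answer from a shell. Block devices appear under /sys/class/bdi/ named by their major:minor numbers; a network filesystem such as CIFS gets its own entry with a filesystem-specific name, so list the directory first (paths and values below are examples):
# all backing devices currently known to the kernel
ls /sys/class/bdi/
# major:minor of the block device backing a filesystem, e.g. sdb
cat /sys/class/block/sdb/dev        # prints something like 8:16
# cap that device at 1% of the write-back cache
echo 1 | sudo tee /sys/class/bdi/8:16/max_ratio
# reserve at least 10% for a fast local device, e.g. 8:0
echo 10 | sudo tee /sys/class/bdi/8:0/min_ratio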
334,425 | I create a script, paste data into it, save it, execute it, and delete it:
vi ~/ms.sh && chmod +x ~/ms.sh && nohup ~/ms.sh && rm ~/ms.sh

#!/bin/bash
commands...

function myFunc {
commands...
}

myFunc ()
How could I properly run only myFunc , in the background, or alternatively, in another process? If it's even possible? | You can use a shell function pretty much anywhere you can use a program. Just remember that shell functions don't exist outside the scope in which they were created.
#!/bin/bash
#
f() {
    sleep 1
    echo "f: Hello from f() with args($*)" >&2
    sleep 1
    echo "f: Goodbye from f()" >&2
}

echo "Running f() in the foreground" >&2
f one

echo "Running f() in the background" >&2
f two &

echo "Just waiting" >&2
wait

echo "All done"
exit 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
334,437 | When my connection has been idle for some time, the remote host closes the connection. I try to type a new command, it hangs, then closes the connection. How can I keep the ssh connection active for longer? $ ssh [email protected] login: Tue Jan 3 03:09:39 2017 from c-99-99-99-99.hsd1.xx.comcast.net[root@ip-172-99-99-99 ~]# groupsroot bin daemon sys adm disk wheel[root@ip-172-99-99-99 ~]# usersroot root[root@ip-172-99-99-99 ~]# less /etc/passwd[root@ip-172-99-99-99 ~]# Connection reset by 52.99.99.99 port 22 The remote is CentOS on Amazon (AWS) and local is Cygwin. | Because your last login was from comcast.net I am guessing that you are connecting from a home system where you are behind a router which is doing NAT. A problem that a system doing NAT has is knowing when the connection is no longer wanted. If the connection is taken down cleanly then there is no problem but if the two machines just reboot then there is no signal to the NAT box that this has happened. Therefore your typical NAT box has timers which say "if there is no traffic for an hour then the two endpoints clearly are not going to send anything more" and so recycles the resources that the NAT was taking. So the thing to do is send some traffic every once in a while. The simplest way assuming you are using the openssh implementation on your cygwin machine is to enable TCPKeepAlive. replace ssh [email protected] with ssh -o TCPKeepAlive=true [email protected] For long term use you are better setting up a ~/.ssh/config file. host fred hostname host.com user root TCPKeepAlive=true and then you will be able to say just ssh fred to connect to the machine with the needed option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/334437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45701/"
]
} |
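If TCP keepalives alone do not keep the NAT entry alive (the kernel typically sends the first one only after two hours of idleness), the application-level keepalive options serve the same purpose and generate traffic much more often; a sketch of the equivalent ~/.ssh/config entry, keeping the host alias from the answer above:
host fred
    hostname host.com
    user root
    TCPKeepAlive yes
    ServerAliveInterval 60
    ServerAliveCountMax 3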
334,448 | I’m taking two input files, one with certain ID numbers, and another with a large list of ID numbers and additional columns. The latter file contains multiple lines for each ID number and I need to extract all lines that match an ID from the first file. Those lines then must be printed in a new file. Edit 1: Replaced sample files with excerpts from actual Edit 2: Removed extra spaces that were in excerpt, but not actual file. Files likely need to be sanitized in some way, but how is unclear. file1: AT1G56430AT3G55190AT3G22880 file2: AT1G01010|GO:0043090|RCAAT1G56430|GO:0010233|IGI AT1G56430|GO:0009555|IGI AT1G56430|GO:0030418|IGI expected output AT1G56430|GO:0010233|IGI AT1G56430|GO:0009555|IGI AT1G56430|GO:0030418|IGI [ [ I have tried: awk -F'|' 'NR==FNR{c[$1$2]++;next};c[$1$2] > 0' file1 file2 > output.txt and: grep -Ff file2 file1 > output.txt I’m aware that there are many somewhat similar questions posted in these forums and others. However, these don’t mention how the output is handled… nor do they mention duplicates. I’ve tried solutions from 4 of them, have been messing with this for many hours and keep getting the same problem: a blank output file. I’m new to awk and I greatly appreciate the help. Sorry if this is a simple problem with syntax etc; please let me know. Thanks for the help. | Your AWK script is nearly there: awk -F'|' 'NR==FNR{c[$1]++;next};c[$1] > 0' file1 file2 > output.txt works, after changing the line-endings from Mac to Unix: tr '\r' '\n' < file1 > file1.newmv file1.new file1tr '\r' '\n' < file2 > file2.newmv file2.new file2 $1 is the first field in AWK. Instead of c[$1] > 0 , you can write c[$1] . The > 0 isn't needed: any non-zero value works, so we might as well use the contents of c directly: awk -F'|' 'NR==FNR{c[$1]++;next};c[$1]' file1 file2 > output.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208324/"
]
} |
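A side note on the grep attempt in the question: the arguments were swapped. With -f the file holding the patterns (the ID list) follows the option, and the file to search comes last, so once the line endings have been fixed as above this also works (a sketch):
grep -F -f file1 file2 > output.txt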
334,460 | I'm running Leafnode as an easy-to-use NNTP server and am considering switching from pan as news reader to trn4. To configure trn4 I set it to, effectively, localhost . Specifically, the FQDN which leafnode uses, which is also the hostname for the system. Presumably trn4 doesn't use DNS to try and resolve the FQDN but realizes that this is localhost in this case. That's working fine. How can I add additional news servers? Specifically, news.gmane.org so that I have the most recent postings, which Leafnode might not have. Is the trn4 news host configuration file even /etc/news/server ? The man page was too long. | Your AWK script is nearly there: awk -F'|' 'NR==FNR{c[$1]++;next};c[$1] > 0' file1 file2 > output.txt works, after changing the line-endings from Mac to Unix: tr '\r' '\n' < file1 > file1.newmv file1.new file1tr '\r' '\n' < file2 > file2.newmv file2.new file2 $1 is the first field in AWK. Instead of c[$1] > 0 , you can write c[$1] . The > 0 isn't needed: any non-zero value works, so we might as well use the contents of c directly: awk -F'|' 'NR==FNR{c[$1]++;next};c[$1]' file1 file2 > output.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17056/"
]
} |
334,477 | I have a strange scenario over here: If I run nmcli dev wifi list it shows me a list of all networks, which is fine. As soon as I add the device (wlan0 in my case) to the /etc/network/interfaces file and reboot, it shows no networks. So before the reboot /etc/network/interfaces contains:
#iface wlan0 inet manual
#    wpa-driver wext
#    wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
# wpa_supplicant.conf contains no networks at the moment
source-directory /etc/network/interfaces.d
# this directory is empty, so currently it is a redundant statement
I remove the first three # , reboot the device and nmcli shows no networks. How do I address this issue? I need the wpa_supplicant.conf empty because it will be filled by a script. Said script displays a list of networks (via nmcli ) and generates a wpa_supplicant.conf (via wpa_passphrase ). I'm aware there is a similar question over there , but the only answer, to start the wpa_supplicant.service, won't fix my issue, as the service is already running (according to # systemctl status wpa_supplicant.service ). Restarting it does not change anything either. | This is normal. NetworkManager doesn't manage devices listed in /etc/network/interfaces by default. You can change this in /etc/NetworkManager/NetworkManager.conf with the key managed=true in the [ifupdown] section. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196567/"
]
} |
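Putting the answer together, a sketch of the edited section and of applying it (the restart command assumes a systemd-based Debian install):
# /etc/NetworkManager/NetworkManager.conf
[ifupdown]
managed=true
# then restart the service so the change is picked up
sudo systemctl restart NetworkManager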
334,492 | i created a simple dynamic module(.ko).How can a user application can access driver from the kernel space.How to get the major and minor number of dynamically loaded module. | This is normal. The NetworkManager don't manages devices in /etc/network/interfaces by default. You can change it in /etc/NetworkManager/NetworkManager.conf key [ifupdown]managed=true | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206526/"
]
} |
334,513 | I have text that goes like this: I am happy. I am here. How are you, Meg? I want this to be: I am happy.I am here.How are you, Meg? For full stops I tried tr -s '. ' '\n' <file.txt >out.txt But it's not working. | As far as I know tr only works with single characters and ". " is a string not a character, so it is possible to do what you want by using sed or awk , for example: sed -e "s/\. /\n/g" file.txt > out.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
334,543 | I have rbenv (ruby version manager) installed on my machine and it works like this:
$ rbenv local
2.3.1
Writing to stdout the local version of my ruby. I want to rescue this version and declare it in a variable to reuse on another occasion.
$ declare -r RUBY_DEFINED_VERSION=$(rbenv local)
$ echo Using ruby version $RUBY_DEFINED_VERSION
Using ruby version 2.3.1
It works! But I don't want to use a subshell to do the work (using $() or `` ). I want to use the same shell and I don't want to create a tmp file to do the work. Is there a way to do this? Note: declare -r is not mandatory, it can be a simple var=FOOBAR . | There is a hack, but I think it only makes sense if you need it in a loop. You can open a cat coproc like this: coproc CAT { cat; } This will start a cat command in the background, and set two environment variables: CAT_PID and CAT . The CAT variable is an array with the STDOUT and STDIN (in this order) file descriptors (pipes) used by cat . So, you can execute anything writing the output to &${CAT[1]} , which represents the STDIN , and use the builtin command read to set your variable reading from ${CAT[0]} , which is the STDOUT of cat. Here is a sample:
coproc CAT { cat; }
echo 123 >&${CAT[1]}
read myvar <&${CAT[0]}
To test:
echo $myvar
123
Don't forget to stop the cat after using it. You can do it by killing the process. kill $CAT_PID This makes a great difference in performance tuning. Update: bash implements strings as null-delimited. So when dealing with binary data, read is really tricky. You can read with LC_ALL=C read -r -n1 -d $'\0' one byte at a time; then the nulls will be empty strings in the ${REPLY} variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208394/"
]
} |
334,578 | One of my coworkers has provided me with a Bash syntax that I am unfamiliar with. My Google foo has failed me on figuring out what it does and why/when I should use it. The command that he sent me was of this form: someVariable=something command Initially, I thought that this was equivalent to the following: someVariable=something ; command Or someVariable=something command But this doesn't appear to the be case. Examples: [Jan-03 11:26][~]$ # Look at the environment variable BAZ. It is currently empty[Jan-03 11:26][~]$ echo $BAZ[Jan-03 11:27][~]$ # Try running a command of the same format [Jan-03 11:27][~]$ BAZ=jake echo $BAZ[Jan-03 11:27][~]$ [Jan-03 11:27][~]$ # Now, echo BAZ again. It is still empty: [Jan-03 11:27][~]$ echo $BAZ[Jan-03 11:27][~]$ [Jan-03 11:28][~]$ [Jan-03 11:28][~]$ # If we add a semi-colon to the command, we get dramatically different results:[Jan-03 11:28][~]$ BAZ=jake ; echo $BAZjake[Jan-03 11:28][~]$[Jan-03 11:28][~]$ # And we can see that the variable is actually set:[Jan-03 11:29][~]$ echo $BAZjake[Jan-03 11:29][~]$ What does this syntax do? What happens to the variable that has been set? Why does this work? | This is equivalent to: ( export someVariable=something; command ) This makes someVariable an environment variable, with the assigned value, but only for the command being run. Here are the relevant parts of the bash manual: Simple Commands A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator. The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command. (...) Simple Command Expansion If no command name results [from command expansion], the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment . Note: bear in mind that this is not specific to bash , but specified by POSIX . Edit - Summarized discussion from comments in the answer The reason BAZ=JAKE echo $BAZ , doesn't print JAKE is because variable substitution is done before anything else. If you by-pass variable substitution, this behaves as expected: $ echo_baz() { echo "[$BAZ]"; }$ BAZ=Jake echo_baz[Jake]$ echo_baz[] | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/334578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2650/"
]
} |
334,597 | I'm trying to upgrade my ssh server from 2048-bit RSA keys to larger keys, as recommendations are to phase out 2048-bit keys soon. I generated a new key, then added it to the sshd config, like this:
HostKey /etc/ssh/ssh_host_rsa_key (old 2k-bit key first)
HostKey /etc/ssh/ssh_host_rsa4096_key (new larger key 2nd )
After restarting sshd and ssh'ing to the host, I don't get the identification changed warning; however, the new key also isn't cached in ~/.ssh/known_hosts . If I put the lines in the opposite order, I get the identification changed warning. Similarly, when I add an ed25519 key, no matter what order I put it in, the client doesn't add the new key to the known hosts file. This seems to make SSH host key rollover impossible—difficult to believe that's really the case, though, considering security routinely requires upgrading keys. I know you can just swap the key, then every client needs to run ssh-keygen -R to remove the old key, then manually verify and accept the new key—but that's a real pain, especially if you have a lot of clients connecting or don't administer all the clients. Not to mention, if you don't administer the clients, there is a very good chance they won't actually check the host key and instead just hit Y—so the attempt to improve security will likely actually open you to man-in-the-middle attacks instead. Is there some way to make SSH host key upgrades work? That is, clients should learn the new more secure key (and also hopefully un-learn the obsolete key). And without giving the host key changed man-in-the-middle warning. | Host key rotation has been supported since OpenSSH 6.8 (both the client and the server add support in this version). So the process should work like this:
Generate and add new keys with the option HostKey newkey (after the existing ones) to /etc/ssh/sshd_config
Restart sshd
The clients have to set UpdateHostKeys yes in their configuration (either globally, or per-host)
The connecting clients will pick up all the new keys
After some time (months?) you can remove the old keys from the sshd_config and restart sshd
The clients (that connected during the transition period) will already have the new keys (the old ones will not be removed, which is the only problem here) and they will not show the MitM attack warning. Clients that are new enough will be able to pick up the new keys. This feature is not enabled by default, probably because it is quite new and soon showed some security considerations. But these days, it should be fine to use it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/334597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/977/"
]
} |
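A sketch of the configuration pieces for the rotation procedure above (the paths are the usual OpenSSH defaults; adjust as needed):
# /etc/ssh/sshd_config on the server -- old key first, new keys after it
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_rsa4096_key
HostKey /etc/ssh/ssh_host_ed25519_key
# ~/.ssh/config on each client (OpenSSH 6.8 or newer)
Host *
    UpdateHostKeys yes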
334,666 | debian8 is a standard user on my OS (Debian8+LXDE). I follow below steps to change the font on LXterminal. Launch LXterminal after logging in as debian8 . Click Edit > Preferences. The default font size is 10 px, which is small. I change the font size to 14 px. Click OK to apply the changes. Unfortunately, when the computer is restarted, the font size for user debian8 reverts to 10px regardless of the font size chosen before restart. Why is that so? Is there a script which can be saved on /home/debian8/.bashrc to set font size for user debian8 ? | check ~/.config/lxterminal/lxterminal.conf You can save the right config and overwrite at every boot. But it's better to find out what is overwriting constantly your config files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/334666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
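For reference, the font ends up stored in that file roughly like this (a sketch; the exact key name may differ between LXTerminal versions, so check the file after changing the setting once through the menu):
# ~/.config/lxterminal/lxterminal.conf
[general]
fontname=Monospace 14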