424,492
I am defining a shell script which a user should source rather than execute. Is there a conventional or intelligent way to hint to the user that this is the case, for instance via a file extension? Is there shell code I can write in the file itself, which will cause it to echo a message and quit if it is executed instead of sourced, so that I can help the user avoid this obvious mistake?
Assuming that you are running bash, put the following code near the start of the script that you want to be sourced but not executed:

if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
    echo "Hey, you should source this script, not execute it!"
    exit 1
fi

Under bash, ${BASH_SOURCE[0]} will contain the name of the current file that the shell is reading, regardless of whether it is being sourced or executed. By contrast, $0 is the name of the current file being executed. -ef tests if these two files are the same file. If they are, we alert the user and exit. Neither -ef nor BASH_SOURCE are POSIX. While -ef is supported by ksh, yash, zsh and dash, BASH_SOURCE requires bash. In zsh, however, ${BASH_SOURCE[0]} could be replaced by ${(%):-%N}.
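A minimal sketch of a whole guarded script, assuming bash; the file name and the trailing line are illustrative:

#!/usr/bin/env bash
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
    echo "Hey, you should source this script, not execute it!" >&2
    exit 1
fi
echo "running sourced..."   # reached only when the file is sourced

Executing it (./script.sh) prints the warning and exits with status 1, while sourcing it (. ./script.sh) runs past the check, because $0 is then the invoking shell rather than the script file.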
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/424492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20716/" ] }
424,583
I have this string

extension_dir => /some/path/php/extensions/no-debug-non-zts-20160303 => /some/path/php/extensions/no-debug-non-zts-20160303
sqlite3.extension_dir => no value => no value

which is the output of php -i | grep extension_dir. How can I parse it in bash to get the first /some/path/php/extensions/no-debug-non-zts-20160303? So far I have tried:

echo $(php -i | grep extension_dir | sed 's/extension_dir => //g' | sed 's/=> .*//g')

but that gives me /some/path/php/extensions/no-debug-non-zts-20160303 sqlite3.no value. I have no idea why it doesn't replace all matches of => .* My basic idea is to get rid of the first extension_dir => and then get rid of everything after the first =>, including the => itself. Probably sed matches things differently than regex.
php -i | sed -n '/extension_dir/{s/^[^/]*//;s/ *=>.*$//;p;}'

or, as suggested in comments below,

php -i | sed '/extension_dir/!d;s/[^/]*//;s/ *=>.*//'

The sed above replaces your grep and will, for every line that matches extension_dir, first remove everything up to the first / and then everything from the first => onwards in the modified string. Any spaces before the => are also removed. Lines not matching extension_dir are ignored. This will return the wanted path for the first line of input, and an empty line for the second. To disregard the second line of input, use /^extension_dir/ instead of /extension_dir/ in the sed above. This will discard the second line, since it does not start with that string. It's the combination of your two sed scripts that produces the surprising result for the second line of input. The line is

sqlite3.extension_dir => no value => no value

and the first sed will modify this to

sqlite3.no value => no value

The second sed will then remove the => no value bit at the end. Note that echo $( ... ) is a bit useless as it gobbles up the newlines in the output of the command inside $( ... ). Instead, test with just the command, without the echo or the $( ... ) command substitution. It is possibly this use of echo that has had you confused about the nature of the output from php -i and grep (two matching lines rather than one).
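A quick way to verify the pipeline against the sample input from the question, without involving php; a sketch using printf to feed the two lines:

printf '%s\n' \
  'extension_dir => /some/path/php/extensions/no-debug-non-zts-20160303 => /some/path/php/extensions/no-debug-non-zts-20160303' \
  'sqlite3.extension_dir => no value => no value' |
  sed '/^extension_dir/!d;s/[^/]*//;s/ *=>.*//'
# prints: /some/path/php/extensions/no-debug-non-zts-20160303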
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424583", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263020/" ] }
424,602
The new pstate Intel driver pisses me off for good, because they have removed the good old powersave governor that allowed me to set the lowest available CPU frequency and run my numerical simulations for hours or days in a nicely silent, cold laptop. To make things worse, what they now call powersave is essentially the old governor ondemand, i.e. a mode in which the CPU frequency ramps up with the load, and with it the damn fan noise:

    Since these policies are implemented in the driver, they are not same as the cpufreq scaling governors implementation, even if they have the same name in the cpufreq sysfs (scaling_governors). For example the "performance" policy is similar to cpufreq(TM) "performance" governor, but "powersave" is completely different than the cpufreq "powersave" governor. The strategy here is similar to cpufreq "ondemand", where the requested P-State is related to the system load.

(Extracted from https://www.kernel.org/doc/Documentation/cpu-freq/intel-pstate.txt)

Now, please, is there any other way to keep my CPU frequency at a minimum? It's really important for me. I would rather dump the laptop through the window if I eventually cannot set a constant, lowest CPU frequency. That is how I use my laptop and that is what I want a laptop for, and I have been trying to achieve this for several days already! I'm trying this, and it doesn't work:

echo 42 | sudo dd of=/sys/devices/system/cpu/intel_pstate/max_pref_pct

to set the maximum speed at 42%, and it doesn't have any effect; the CPU keeps on going to 100% whenever I do something. What am I doing wrong? (Should I restart some service or something?) Is there any way to get this? Also, will a non-Intel CPU allow me to do that? I don't mind buying another laptop if that is going to solve the problem.
Well, well, turns out that the new pstate Intel driver is awesome, but first one needs to practice a bit of the ancient, lost art of reading the documentation. I'll leave my question as it is because, judging by all the grief and frustration I see scattered all around the internet, I am not the first to have these issues. The new CPU driver has lots of options, but I will restrict my explanation to something simple and, for me, more than enough and satisfactory. First of all:

sudo apt-get install linux-cpupower

(or the equivalent in your non-Debian-based distros). There are now 2 behaviors (governors) with the names powersave and performance, but that is quite an unfortunate naming scheme, because these governors have nothing to do with those that bear the same names in the old driver:

powersave now means variable frequency that depends on the load, i.e. this is essentially the old ondemand governor. You get to set the minimum and maximum frequency, and if that maximum frequency is set to the maximum frequency your CPU is capable of (which I believe is the default) then you don't save a crap. You may even crank up the minimum frequency to the second highest value, and the result will be the CPU nearly at full throttle 24/7, and the governor will still be named powersave. They should have named this governor VARIABLE or something similar, avoiding a lot of confusion among users and sparing the developers a lot of false reports of kernel bugs.

performance means here constant frequency, no matter the load, and this depends on what the user has set as maximum. When this governor is set, the minimum frequency is ignored and the CPU runs at the frequency you set as maximum. If you have set a very low frequency as that maximum, then you will not see any special performance or anything; you will just get a slowed-down CPU at constant frequency. So they'd better have named this governor CONSTANT or something similar, sparing frustration to many people like me, used to the old scheme.

So, here are some examples that work like a charm, at least with kernel 4.14. I will use as minimum and maximum frequency the values for my CPU: 0.4 and 3.1 GHz. See yours with cpupower frequency-info.

CONSTANT, LOWEST FREQUENCY IN ALL CORES

This IS what I wanted! We get that by setting the constant frequency governor and setting the lowest available frequency as the maximum:

sudo cpupower frequency-set -g performance
sudo cpupower frequency-set -u 400MHz

CONSTANT, HIGHEST FREQUENCY IN ALL CORES

(You see this asked very often for desktop computers, where it makes sense, although there are also people out there willing to destroy their laptop fan.)

sudo cpupower frequency-set -g performance
sudo cpupower frequency-set -u 3100MHz

VARIABLE FREQUENCY BETWEEN THE MINIMUM AND MAXIMUM POSSIBLE FREQUENCIES IN ALL CORES

(This was called ondemand with the old acpi-cpufreq driver.)

sudo cpupower frequency-set -g powersave
sudo cpupower frequency-set -d 400MHz
sudo cpupower frequency-set -u 3100MHz

VARIABLE FREQUENCY BETWEEN THE MINIMUM AND A MODERATE FREQUENCY IN ALL CORES

(Maybe because you want to get some more speed when required, but you don't want to reach the maximum and hear the fan blowing like mad.)

sudo cpupower frequency-set -g powersave
sudo cpupower frequency-set -d 400MHz
sudo cpupower frequency-set -u 1200MHz

And so on. It is very easy and works really well.
You can also set a constant low frequency on one core, where the heavy numerical stuff is running, while you leave a variable frequency on another core, where you launch the more usual stuff (email, web browsing...). See taskset for more info.
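A hedged sketch of that setup; the program name, PID and core number are illustrative:

taskset -c 0 ./heavy_simulation &   # start the numerical job pinned to core 0
taskset -pc 0 12345                 # or pin an already-running PID (12345) to core 0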
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104233/" ] }
424,628
Why is md5sum prepending "\" in front of the checksum when computing the checksum of a file with "\" in its name?

$ md5sum /tmp/test\\test
\d41d8cd98f00b204e9800998ecf8427e  /tmp/test\\test

The same is noted for every other utility.
This is documented, for Coreutils' md5sum:

    If file contains a backslash or newline, the line is started with a backslash, and each problematic character in the file name is escaped with a backslash, making the output unambiguous even in the presence of arbitrary file names.

(file is the filename, not the file's contents). b2sum, sha1sum, and the various SHA-2 tools behave in the same way as md5sum. sum and cksum don't; sum is only provided for backwards-compatibility (and its ancestors don't produce quoted output), and cksum is specified by POSIX and doesn't allow this type of output. This behaviour was introduced in November 2015 and released in version 8.25 (January 2016), with the following NEWS entry:

    md5sum now ensures a single line per file for status on standard output, by using a '\' at the start of the line, and replacing any newlines with '\n'. This also affects sha1sum, sha224sum, sha256sum, sha384sum and sha512sum.

The backslash at the start of the line serves as a flag: escapes in filenames are only processed if the line starts with a backslash. (Unescaping can't be the default behaviour: it would break sums generated with older versions of Coreutils containing \\ or \n in the stored filenames.)
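A quick reproduction sketch, assuming Coreutils 8.25 or later; the hash shown is the well-known MD5 of an empty file:

touch 'test\test'
md5sum 'test\test'
# \d41d8cd98f00b204e9800998ecf8427e  test\\test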
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/424628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104666/" ] }
424,662
This is beyond my current skills, it seems, as I've been trying for a while and not making much headway. I've been asked to get a list of hosts and IPs for security to run a scan against those servers. There is a host list named hosts.linux on the server with all the hostnames, just no IPs. I'm trying to come up with a script that will take those names from that file and then run a command such as the host command to get the IP. This command works, for instance:

host csx-svc-spls-06 | awk '{ print $3 }'

and it returns just the IP of that server. Is it possible to read from the file, have it run the command, and export the name of the server and then the IP address on one line to a new file?
I'm not sure of the implications of using nslookup over dig, but I think this might work:

while IFS= read -r i; do
    nslookup "$i" | grep '^Name' -A1 | awk '{print $2}'
    echo
done < linux.hosts > outputfile
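A hedged alternative sketch using getent, which queries the system resolver and makes it easy to print name and IP on one line (file name as given in the question):

while IFS= read -r host; do
    ip=$(getent hosts "$host" | awk '{print $1; exit}')   # first address only
    printf '%s %s\n' "$host" "$ip"
done < hosts.linux > outputfile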
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424662", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/174013/" ] }
424,715
Is it possible to generate real random numbers with a specific precision and in a specific range using the integer random generator $RANDOM? For example, how can we generate real numbers with 4-digit precision between 0 and 1?

0.1234
0.0309
0.9001
0.0000
1.0000

A simple workaround:

printf "%d04.%d04\n" $RANDOM $RANDOM
awk -v n=10 -v seed="$RANDOM" 'BEGIN { srand(seed); for (i=0; i<n; ++i) printf("%.4f\n", rand()) }'

This will output n random numbers (ten in the example) in the range [0,1) with four decimal digits. It uses the rand() function in awk (not in standard awk but implemented by most common awk implementations), which returns a random value in that range. The random number generator is seeded by the shell's $RANDOM variable. When an awk program only has BEGIN blocks (and no other code blocks), awk will not try to read input from its standard input stream. On any OpenBSD system (or system that has the same jot utility, originally in 4.2BSD), the following will generate 10 random numbers as specified:

jot -p 4 -r 10 0 1
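For example, generating three such numbers; actual values will differ on every run, since the seed comes from $RANDOM:

awk -v n=3 -v seed="$RANDOM" 'BEGIN { srand(seed); for (i=0; i<n; ++i) printf("%.4f\n", rand()) }'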
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64110/" ] }
424,719
I started a VPN connection with sudo openvpn --daemon --config connection.ovpn . Is there a way to terminate this connection without using ps to search for the process and then kill it myself?
Since OpenVPN does not seem to offer any function of its own for this, you are probably looking for pkill <process-name>, which will search for all processes matching the given name, and kill them. If you have multiple instances running, but would like to kill only a specific one, the -f option allows you to match against the full process call including parameters, e.g. pkill -f "openvpn --config connection.ovpn". See the output from ps x or pgrep -lf <process-name> (same as pkill, but doesn't kill them, so essentially similar to ps | grep <name>) to find out with which parameters the daemon was started.
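A hedged sketch matching the exact invocation from the question; since the daemon was started with sudo, killing it will likely need sudo too:

sudo pkill -f 'openvpn --daemon --config connection.ovpn'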
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269/" ] }
424,755
As part of my effort to reduce noise in logs and slightly reduce discoverability (on top of fail2ban, allowing only public key authentication, etc.), I routinely change the sshd port on servers I set up to a different port, let's say 5492. Currently I either append -p 5492 to my ssh command, or add the port for each specific server into my ssh_config. Is there a way to configure ssh to try connecting to both port 22 and port 5492 if port 22 doesn't work?
You could wrap a shell script around ssh, but ssh itself will not do it. One way, using a bash function, is this (put it into ~/.bashrc):

function ssh() { command ssh -p 22 "$@" || command ssh -p 5492 "$@"; }

By the way, it is recommended to use root-reserved ports for system services like ssh, in order to prevent unprivileged users from running a process that listens on, say, port 5492. They could otherwise play man in the middle and possibly capture login data. So, use a port < 1024.
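Usage is then transparent; a sketch with an illustrative host name:

source ~/.bashrc            # pick up the new function in the current shell
ssh user@example.org        # tries port 22 first, falls back to 5492 if that fails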
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205664/" ] }
424,758
How do I resize a btrfs filesystem to the minimum possible size in a single step? I only want to dd the smallest amount of data possible, and want to use dd since btrfs send is progressively slower the more snapshots there are. btrfs filesystem resize has a max argument, but no min argument. If I try to resize more than the free space available, I get a message like:

ERROR: unable to resize '/media/backup-alt': No space left on device

I've been progressively resizing downwards in steps of decreasing size (e.g. passing arguments -128G, -64G, -32G, ...) but this is a time-consuming convergence on a solution. Is there a way to shrink to the minimum in a single step?
The best solution I've come across so far is to get the minimum free space (using -b for bytes):

sudo btrfs filesystem usage -b /mountpoint
...
Free (estimated): 71890542592 (min: 71890542592)

And then resize by the negative of the min amount:

sudo btrfs filesystem resize -71890542592 /mountpoint

Alternatively, if there is a big difference between the min free and the unallocated, you may choose to use (unallocated * 0.9), since resizing by the exact unallocated bytes seems to fail. You can then repeatedly shrink by small amounts until the resize fails:

while sudo btrfs filesystem resize -200M /mountpoint; do true; done

This is not exactly a single step, but at least mostly automated. The loop by itself could be a single step, but it will probably take longer doing small incremental resizes rather than initially shrinking by a large chunk.
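The two steps can be combined into one hedged sketch; the sed pattern assumes the "(min: ...)" output format shown above, which may vary between btrfs-progs versions:

min_free=$(sudo btrfs filesystem usage -b /mountpoint | sed -n 's/.*(min: \([0-9]*\)).*/\1/p')
sudo btrfs filesystem resize "-$min_free" /mountpoint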
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
424,799
I'm using a local BIND9 server to host some local dns records. When trying to dig for a local domain name I can't find it if I don't explicitly tell dig to use my local BIND9 server.

user@heimdal:~$ dig +short heimdal.lan.se
user@heimdal:~$ dig +short @192.168.1.7 heimdal.lan.se
192.168.1.2

Ubuntu 17.04 and systemd-resolved are used. This is the content of my /etc/resolv.conf:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver 127.0.0.53

And the output from systemd-resolve --status:

Global
        DNS Servers: 192.168.1.7
                     192.168.1.1
         DNSSEC NTA: 10.in-addr.arpa
                     16.172.in-addr.arpa
                     168.192.in-addr.arpa
                     17.172.in-addr.arpa
                     18.172.in-addr.arpa
                     19.172.in-addr.arpa
                     20.172.in-addr.arpa
                     21.172.in-addr.arpa
                     22.172.in-addr.arpa
                     23.172.in-addr.arpa
                     24.172.in-addr.arpa
                     25.172.in-addr.arpa
                     26.172.in-addr.arpa
                     27.172.in-addr.arpa
                     28.172.in-addr.arpa
                     29.172.in-addr.arpa
                     30.172.in-addr.arpa
                     31.172.in-addr.arpa
                     corp
                     d.f.ip6.arpa
                     home
                     internal
                     intranet
                     lan
                     local
                     private
                     test

The DNS Servers section does seem to have rightfully configured 192.168.1.7 as the main DNS server (my local BIND9 instance). I can't understand why it's not used...?
So, changing my wired eth0 interface to be managed solved this issue for me. Change ifupdown to managed=true in /etc/NetworkManager/NetworkManager.conf:

[ifupdown]
managed=true

Then restart NetworkManager:

sudo systemctl restart NetworkManager

After this it works flawlessly. This was not 100%, though. I also applied these changes to try and kill the resolver:

sudo service resolvconf disable-updates
sudo update-rc.d resolvconf disable
sudo service resolvconf stop

Big thanks to this blog post regarding the subject: https://ohthehugemanatee.org/blog/2018/01/25/my-war-on-systemd-resolved/ Let's pray this works. This whole systemd-resolved business is just so ugly.
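A quick verification sketch after the restart, using the host name from the question:

systemd-resolve --status | head   # confirm which DNS servers are active
dig +short heimdal.lan.se         # should now resolve without @192.168.1.7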
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/253998/" ] }
424,871
I have submitted 800 jobs on Slurm. I want to cancel those jobs that have a job ID/number bigger than a certain number (since there is a mistake in them). I don't want to cancel all my jobs, because some are running and some that are in the queue are correct.
It is not strictly answering how to cancel jobs greater than a given number, but it would work for the problem @mona-jalilvand was trying to solve: cancel jobs in a range, as described here:

scancel {1000..1050}

Much simpler than getting into bash scripting... worked well for me.
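For the question as literally asked - cancelling every one of your jobs with an ID above some threshold - a hedged sketch (the threshold 1000 is illustrative, and job-array IDs may need extra handling):

squeue -h -u "$USER" -o '%i' | awk -v min=1000 '$1 > min' | xargs -r scancel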
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276553/" ] }
424,958
After the latest updates (I assume to gnome-shell), my touchpad right-click switched from "pressing bottom right zone" to "two-finger tap anywhere". I find this confusing and I want to restore my right-clicks to pressing the bottom-right part of the touchpad. What I tried so far:

- Settings -> Mouse & touchpad, but no luck
- Middle click with two-finger tap on touchpad, but I didn't have synclient installed. After installing, I get: Couldn't find synaptics properties. No synaptics driver loaded?

Any ideas?
I came across https://askubuntu.com/questions/999631/ubuntu-17-10-disable-touchpad-bottom-right-corner-right-click That pointed me in the right direction. Running

gsettings list-recursively org.gnome.desktop.peripherals.touchpad

lists all touchpad settings. I'm only interested in

gsettings get org.gnome.desktop.peripherals.touchpad click-method

which at the moment returns fingers. Now I just needed to know what the valid options are that I can set this key to. After some digging, I found the range subcommand:

gsettings range org.gnome.desktop.peripherals.touchpad click-method

which returns

enum
'default'
'none'
'areas'
'fingers'

Finally, running this solved it:

gsettings set org.gnome.desktop.peripherals.touchpad click-method areas
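Should the old behaviour be preferable after all, the key can presumably be restored to its default with gsettings' reset subcommand:

gsettings reset org.gnome.desktop.peripherals.touchpad click-method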
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114416/" ] }
424,967
My /etc/group has grown by adding new users as well as installing programs that have added their own user and/or group. The same is true for /etc/passwd. Editing has now become a little cumbersome due to the lack of structure. May I sort these files (e.g. numerically by id or alphabetically by name) without negative effects on the system and/or package managers? I would guess that it does not matter, but just to be sure I would like to get a 2nd opinion. Maybe root needs to be the 1st line or within the first 1k lines or something? The same goes for /etc/*shadow.
You should be OK doing this: in fact, according to the documentation, you can sort /etc/passwd and /etc/group by UID/GID with pwck -s and grpck -s, respectively.
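A hedged sketch of the whole operation; both tools from shadow-utils rewrite the files in place, so a backup first is prudent:

cp -a /etc/passwd /etc/passwd.bak && cp -a /etc/group /etc/group.bak
pwck -s    # sorts /etc/passwd (and /etc/shadow) by UID
grpck -s   # sorts /etc/group (and /etc/gshadow) by GID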
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/424967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115138/" ] }
424,969
I'm using Debian in a chroot environment on my Android device. As I don't use a GUI at all, I think it's better to uninstall GUI-related packages to free up space.

- What can I do to remove all GUI packages?
- How can I reinstall all those removed packages, if anything breaks after the package removal? (optional)

As a response to @Arpit Agarwal's comment, here's a link to the Debian installation procedure on Android: debian on termux

Output of apt purge libx11-6 libwayland-client0:

Reading package lists...
Building dependency tree...
Reading state information...
Package 'libwayland-client0' is not installed, so not removed
The following packages were automatically installed and are no longer required:
  aglfn fontconfig fontconfig-config fonts-dejavu-core fonts-droid-fallback fonts-liberation fonts-noto-mono ghostscript gnuplot-data gsfonts hicolor-icon-theme imagemagick-6-common info java-common krb5-locales libaec0 libamd2 libarpack2 libasound2 libasound2-data libauthen-sasl-perl libavahi-client3 libavahi-common-data libavahi-common3 libblas-common libblas3 libcamd2 libccolamd2 libcholmod3 libcolamd2 libcups2 libcupsfilters1 libcupsimage2 libcurl3-gnutls libcxsparse3 libdatrie1 libdjvulibre-text libdjvulibre21 libdrm-amdgpu1 libdrm-freedreno1 libdrm-nouveau2 libdrm-radeon1 libdrm2 libedit2 libencode-locale-perl libfftw3-double3 libfftw3-single3 libfile-listing-perl libflac8 libfont-afm-perl libfontconfig1 libfreetype6 libgdk-pixbuf2.0-common libgfortran3 libgl1-mesa-dri libglapi-mesa libglib2.0-0 libglib2.0-data libglpk40 libgraphite2-3 libgs9 libgs9-common libgssapi-krb5-2 libharfbuzz0b libhdf5-100 libhtml-form-perl libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libice6 libijs-0.35 libilmbase12 libio-html-perl libio-socket-ssl-perl libjack-jackd2-0 libjbig0 libjbig2dec0 libjpeg62-turbo libjxr-tools libjxr0 libk5crypto3 libkeyutils1 libkrb5-3 libkrb5support0 liblapack3 liblcms2-2 libldap-2.4-2 libldap-common libllvm3.9 liblqr-1-0 libltdl7 liblua5.1-0 liblwp-mediatypes-perl liblwp-protocol-https-perl libmailtools-perl libmetis5 libmng1 libnet-http-perl libnet-smtp-ssl-perl libnet-ssleay-perl libnetpbm10 libnghttp2-14 libnspr4 libnss3 libogg0 libopenblas-base libopenexr22 libopenjp2-7 libopus0 libosmesa6 libpango-1.0-0 libpangoft2-1.0-0 libpaper-utils libpaper1 libpcsclite1 libpixman-1-0 libpng16-16 libportaudio2 libqhull7 libqrupdate1 libqscintilla2-l10n libqt4-dbus libqt4-network libqt4-xml libqtcore4 libqtdbus4 librtmp1 libsamplerate0 libsasl2-2 libsasl2-modules libsasl2-modules-db libsensors4 libsm6 libsndfile1 libssh2-1 libsuitesparseconfig4 libsz2 libtext-unidecode-perl libthai-data libthai0 libtiff5 libtimedate-perl libtxc-dxtn-s2tc libumfpack5 liburi-perl libvorbis0a libvorbisenc2 libwebp6 libwww-perl libwww-robotrules-perl libx11-data libx11-xcb1 libxau6 libxcb-dri2-0 libxcb-dri3-0 libxcb-glx0 libxcb-present0 libxcb-render0 libxcb-shm0 libxcb-sync1 libxcb1 libxdmcp6 libxml-libxml-perl libxml-namespacesupport-perl libxml-parser-perl libxml-sax-base-perl libxml-sax-expat-perl libxml-sax-perl libxshmfence1 libzip4 netpbm octave-common octave-info perl-openssl-defaults poppler-data psutils qdbus qtchooser qtcore4-l10n shared-mime-info tex-common texinfo ucf x11-common xdg-user-dirs
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
  ca-certificates-java* default-jre-headless* gnuplot-nox* groff* imagemagick* imagemagick-6.q16* libaudio2* libcairo2* libfltk-gl1.3* libfltk1.3* libgd3* libgdk-pixbuf2.0-0* libgl1-mesa-glx* libgl2ps1* libglu1-mesa* libgraphicsmagick++-q16-12* libgraphicsmagick-q16-3* libmagick++-6.q16-7* libmagickcore-6.q16-3* libmagickcore-6.q16-3-extra* libmagickwand-6.q16-3* liboctave3v5* libpangocairo-1.0-0* libplot2c2* libpstoedit0c2a* libqscintilla2-12v5* libqt4-opengl* libqtgui4* libwmf0.2-7* libx11-6* libxaw7* libxcursor1* libxdamage1* libxext6* libxfixes3* libxft2* libxi6* libxinerama1* libxmu6* libxpm4* libxrender1* libxt6* libxtst6* libxxf86vm1* octave* openjdk-8-jre-headless* pstoedit* qt-at-spi*

Need some suggestions regarding which packages can be removed safely without affecting Octave. Otherwise this question can be closed, if no specific answer can be given.
On Debian, to remove all GUI packages, you can remove the two libraries used to connect to display servers:

apt purge libx11-6 libwayland-client0

This will remove all packages depending on these libraries. The removals will be logged in the history logs in /var/log/apt, so you can look there if you need to revert a removal. This might catch some packages which contain both CLI and GUI tools, although in most, if not all, cases those are packaged separately (so that it is possible to have a functional text-only system).
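To preview the damage before committing, a hedged sketch using apt-get's simulation mode; simulated removals are printed as "Purg" lines:

apt-get -s purge libx11-6 libwayland-client0 | grep '^Purg'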
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247138/" ] }
424,998
I was reading Ritchie and Thompson's paper about the Unix file system. They write, 'It is worth noting that the system is totally self-supporting'. Were the systems before Unix not self-supporting? In what ways?
The question in your title is addressed immediately after your quote in the paper:

    All Unix software is maintained on the system; likewise, this paper and all other documents in this issue were generated and formatted by the Unix editor and text formatting programs.

So "self-supporting" means that once a Unix system is set up, it is self-sufficient, and its users can use it to make changes to the system itself. "This issue" in the quote above refers to Bell System Technical Journal, Volume 57, Number 6, Part 2, July-August 1978 (also available on the Internet Archive), which was all about the Unix system (and makes fascinating reading for anyone interested in Unix and its history). The fact that Unix is self-supporting doesn't mean all other systems before it weren't; but some operating systems did require the use of other systems to build them (this became more common later, in fact, with the advent of micro-computers, whose systems were often developed on minis). Unix was novel in that it also included typesetting tools, which meant that it could not only build itself, but also produce its documentation, both online and in print (I imagine Unix might not be the first such system, but this would have been at least unusual).
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/424998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276636/" ] }
425,029
I currently do not understand what is going on when I try to find the contents of or information for a symbolic link to a directory when using the ls command. I understand there is an option -H for the ls command that will follow symbolic links listed on the command line. What the manual page for ls does not state is that this is only necessary when using the -l option. If I simply do something like ls symLinkToDir it will list the linked directory's contents with no additional options. But if I do ls -l symLinkToDir it will only display information about the link UNLESS I include the -H option as well. This example is what I am talking about:

brian@LinuxBox:~$ ls playground/linkedDir
file4  file5
brian@LinuxBox:~$ ls -l playground/linkedDir
lrwxrwxrwx 1 brian brian 4 Feb 18 16:42 playground/linkedDir -> dir2
brian@LinuxBox:~$ ls -lH playground/linkedDir
total 0
-rw-rw-r-- 1 brian brian 0 Feb 18 16:41 file4
-rw-rw-r-- 1 brian brian 0 Feb 18 16:41 file5

Am I not understanding something here? Is this just a weird way of how it works? If this is indeed how it works, shouldn't the manual page say the symbolic link will be followed under certain conditions without the need for the -H option? Thanks in advance for your input.
The behavior of ls on symbolic links to directories depends on many options, not just -l and -H. In the absence of symlink behavior options (-L, -H), ls symlinkToDir displays the contents of the directory, but ls -l symlinkToDir, ls -d symlinkToDir and ls -F symlinkToDir all display information about the symbolic link. If you're reading the man page of the GNU implementation of ls, it doesn't give the full story. GNU man pages are just summaries. The full documentation is in the Info manual (info ls), usually available in HTML these days. I can't find the default behavior on symlinks to directories in the Info manual either, though; this may be a bug in the documentation. The FreeBSD man page, for example, is more precise, but you have to read the description of the -H option to find the default behavior:

    -H  Symbolic links on the command line are followed. This option is assumed if none of the -F, -d, or -l options are specified.

If you want a more formal description (but less easy to read), read the POSIX specification. This won't have the extensions of your implementation.

    If one or more of the -d, -F, or -l options are specified, and neither the -H nor the -L option is specified, for each operand that names a file of type symbolic link to a directory, ls shall write the name of the file as well as any requested, associated information. If none of the -d, -F, or -l options are specified, or the -H or -L options are specified, for each operand that names a file of type symbolic link to a directory, ls shall write the names of files contained within the directory as well as any requested, associated information.
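A small demonstration of those defaults, assuming GNU ls and a link set up as in the question:

ln -s dir2 linkedDir
ls linkedDir        # follows the link: lists dir2's contents
ls -d linkedDir     # shows the link itself
ls -l linkedDir     # long info about the link, not the target
ls -lH linkedDir    # follows the link: long listing of dir2's contents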
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276055/" ] }
425,044
I'm transforming some data with awk (or gawk) and want to delete one of the input fields before printing the output again. What I want to achieve is this:

~ $ echo 'field1,field2,field3' | awk -F, '{transform($1); delete($2); print $0;}'
new_field1,field3

I can't just assign an empty string to $2, because that leads to new_field1,,field3 (note the two commas). I could explicitly print only the fields that I want, but that's not very elegant because I've got far more fields than 3, and also there are optional fields at the end (not shown here). That's why I prefer print $0. Just need to get rid of some fields first. Any idea?
Deleting fields in awk is notoriously difficult. It seems to be such a simple (and often required) operation, but it's harder than it should be. See Is there a way to completely delete fields in awk, so that extra delimiters do not print? from Stack Overflow for a good way to do this. I've copied the rmcol() function in @ghoti's answer, so that we have a copy here on U&L:

function rmcol(col, i) {
    for (i=col; i<NF; i++) {
        $i = $(i+1)
    }
    NF--
}

It deletes the specified column from the current input line and decrements the field counter (NF) to match. I have no idea what your transform() function does, so I won't even attempt to duplicate that - but here's an example of using rmcol() in an awk one-liner:

$ echo 'field1,field2,field3' | awk -F, -v OFS=, '
    function rmcol(col, i) {
        for (i=col; i<NF; i++) { $i=$(i+1) }
        NF--
    }
    { rmcol(2); print; }
'
field1,field3

BTW, if you need to delete multiple fields from an input line, it is best/easiest to delete them in reverse order. That is, delete the highest-numbered fields first. Why? Because the higher-numbered fields will be renumbered every time you delete a lower-numbered field, making it very difficult to keep track of which field number belongs to which field. BTW, delete() in awk is for deleting elements of an array - not for deleting fields from an input line. You could split() each input line (on FS) into an array and delete the 2nd array element, but then you'd have to write a join() function to print the array with a comma (or OFS) separating each field. Even doing that would be more complicated than one would expect, because all arrays in awk are associative arrays (i.e. they're not numerically indexed) - so delete(array[2]) won't automatically shift array elements 3+ into elements 2+. You'd have to write your own wrapper function around delete() to do pretty much the same thing for arrays that rmcol() does for input fields.
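Building on that advice, a sketch deleting fields 4 and 2 from a five-field line, highest-numbered first:

echo 'a,b,c,d,e' | awk -F, -v OFS=, '
    function rmcol(col, i) { for (i=col; i<NF; i++) $i=$(i+1); NF-- }
    { rmcol(4); rmcol(2); print }'
# prints: a,c,e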
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41979/" ] }
425,065
Is there a way to know which cores currently have a process pinned to them? Even processes run by other users should be listed in the output. Or, is it possible to try pinning a process to a core but fail in case the required core already has a process pinned to it? PS: processes of interest must have been pinned to the given cores, not just be currently running on the given core. PS: this is not a duplicate; the other question is about how to ensure exclusive use of one CPU by one process. Here we are asking how to detect that a process was pinned to a given core (i.e. that cpuset was used, not how to use it).
Under normal circumstances Linux processes are not explicitly pinned to a given core; there's typically no reason to do that, but it is possible. You can manage process affinity using taskset, or view which process runs on which CPU in the present instant using ps with the field 'psr'. Check the current CPU of process 27395:

$ ps -o psr 27395
PSR
  6

Check the affinity list of process 27395:

$ taskset -pc 27395
pid 27395's current affinity list: 0-7

Set the affinity of process 27395 to CPU 3:

$ taskset -pc 3 27395
pid 27395's current affinity list: 0-7
pid 27395's new affinity list: 3

Check the current CPU of process 27395 again:

$ ps -o psr 27395
PSR
  3

To check if any process is pinned to any CPU, you can loop through your process identifiers and run taskset -p against them:

$ for pid in $(ps -a -o pid=); do taskset -pc $pid 2>/dev/null; done
pid 1803's current affinity list: 0-7
pid 1812's current affinity list: 0-7
pid 1986's current affinity list: 0-7
pid 2027's current affinity list: 0-7
pid 2075's current affinity list: 0-7
pid 2083's current affinity list: 0-7
pid 2122's current affinity list: 0-7
pid 2180's current affinity list: 0-7
pid 2269's current affinity list: 0-7
pid 2289's current affinity list: 0-7
pid 2291's current affinity list: 0-7
pid 2295's current affinity list: 0-7
pid 2300's current affinity list: 0-7
pid 2302's current affinity list: 0-7
pid 3872's current affinity list: 0-7
pid 4339's current affinity list: 0-7
pid 7301's current affinity list: 0-7
pid 7302's current affinity list: 0-7
pid 7309's current affinity list: 0-7
pid 13972's current affinity list: 0-7
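To surface only the pinned processes, a hedged sketch that hides the default mask; '0-7' assumes the 8-CPU machine shown above and must be adapted to your CPU count:

for pid in $(ps -a -o pid=); do
    taskset -pc "$pid" 2>/dev/null
done | grep -v ': 0-7$'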
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/97256/" ] }
425,168
I upgraded to Plasma 5.12 and the global menus disappeared; the entry in settings (Application Style ⇒ Widget Style ⇒ Fine Tuning) also disappeared. I tried deleting the plasma* files from ~/.config and also the whole ~/.kde folder, thinking it may be some misconfiguration, but none of this helped.
According to the release announcement, you need to add the corresponding Plasma widget or the menu button from the "Window Decorations" settings:

    Global Menus have returned. KDE's pioneering feature to separate the menu bar from the application window allows for a new user interface paradigm, with either a Plasma Widget showing the menu, or with the menu neatly tucked away in the window title bar. Setting it up has been greatly simplified in Plasma 5.12: as soon as you add the Global Menu widget or title bar button, the required background service gets started automatically. No need to reload the desktop or click any confirmation buttons!

Step-by-step instructions: From the Application Dashboard/Launcher/Menu, select "Settings" (in the Launcher: under "Applications") → "System Settings" → "Application Style" → "Window Decorations" → "Titlebar Buttons"; then drag and drop the "Application menu" icon to the titlebar.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/425168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214276/" ] }
425,205
How can I find all files I cannot write to? It would be good if it takes standard permissions and ACLs into account. Is there an "easy" way, or do I have to parse the permissions myself?
Try

find . ! -writable

The find command returns a list of files; -writable filters to only the ones you have write permission to, and the ! inverts the filter. You can add -type f if you want to ignore the directories and other 'special files'. Note that GNU find's -writable test is based on the access(2) system call, so it does take ACLs into account.
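A practical variant sketch that counts unwritable regular files, discarding permission errors from unreadable directories:

find . -type f ! -writable 2>/dev/null | wc -l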
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/425205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23987/" ] }
425,211
I am developing an application to help users not forget their pendrive. This app must lock shutdown while a pendrive is connected to the machine. This way, if the user wants to shut down the system while a pendrive is connected, the system shows a notification alerting them that they must disconnect the pendrive to unlock shutdown. To detect the shutdown event, I set a polkit rule that calls a script to check if any pendrive is connected to the system. If a pendrive is connected, the polkit rule calls notify-send through the script send_notify.sh, which executes this command:

notify-send "Pendrive-Reminder" "Extract Pendrive to enable shutdown" -t 5000

The polkit rule is this:

polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.consolekit.system.stop" ||
        action.id == "org.freedesktop.login1.power-off" ||
        action.id == "org.freedesktop.login1.power-off-multiple-sessions" ||
        action.id == "org.xfce.session.xfsm-shutdown-helper") {
        try {
            polkit.spawn(["/usr/bin/pendrive-reminder/check_pendrive.sh", subject.user]);
            return polkit.Result.YES;
        } catch(error) {
            polkit.spawn(["/usr/bin/pendrive-reminder/send_notify.sh", subject.user]);
            return polkit.Result.NO;
        }
    }
}

But, after putting this polkit rule in place and pressing the shutdown button, my user doesn't receive any notification. I debugged the rule and checked that the second script is executed, but notify-send doesn't show the notification to my user. How can I solve this?

UPDATE: I tried to modify the script like this:

#!/bin/bash
user=$1
XAUTHORITY="/home/$user/.Xauthority"
DISPLAY=$( who | grep -m1 $user.*\( | awk '{print $5}' | sed 's/[(|)]//g')
notify-send "Extract Pendrive to enable shutdown" -t 5000
exit 0

The user is passed as a parameter by polkit, but the problem continues.

UPDATE: I've just seen this bug, https://bugs.launchpad.net/ubuntu/+source/libnotify/+bug/160598, which doesn't allow sending notifications as root. Later I'll test a workaround changing the user.

UPDATE2: After changing the code to this, the problem continues:

#!/bin/bash
export XAUTHORITY="/home/$user/.Xauthority"
export DISPLAY=$(cat "/tmp/display.$user")
user=$1
su $user -c 'notify-send "Pendrive Reminder" "Shutdown lock enabled. Disconnect pendrive to enable shutdown" -u critical'
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/425211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255730/" ] }
425,276
I'm curious about the theory behind how heredocs can be passed as a file to a command line utility. Recently, I discovered I can pass a file as a heredoc. For example:

awk '{ split($0, arr, " "); print arr[2] }' <<EOF
foo bar baz
EOF
bar

This is advantageous for me for several reasons:

- Heredocs improve readability for multi-line inputs.
- I don't need to memorize each utility's flag for passing the file contents from the command line.
- I can use single and double quotes in the given files.
- I can control shell expansion. For example:

ruby <<EOF
puts "'hello $HOME'"
EOF
'hello /Users/mbigras'

ruby <<'EOF'
puts "'hello $HOME'"
EOF
'hello $HOME'

I'm not clear what is happening. It seems like the shell thinks the heredoc is a file with contents equal to the value of the heredoc. I've seen this technique used with cat, but I'm still not sure what was going on:

cat <<EOL
hello world
EOL
hello world

I know cat prints the contents of a file, so presumably this heredoc is a temporary file of some kind. I'm confused about what precisely is going on when I "pass a heredoc to a command line program". Here's an example using ansible-playbook. I pass the utility a playbook as a heredoc; however it fails, as shown using echo $?:

ansible-playbook -i localhost, -c local <<EOF &>/dev/null
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Print something
      debug:
        msg: hello world
EOF
echo $?
5

However, if I pass the utility the same heredoc but precede it with /dev/stdin, it succeeds:

ansible-playbook -i localhost, -c local /dev/stdin <<EOF &>/dev/null
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Print something
      debug:
        msg: hello world
EOF
echo $?
0

What precisely is going on when one "passes a heredoc as a file"? Why does the first version with ansible-playbook fail but the second version succeed? What is the significance of passing /dev/stdin before the heredoc? Why do other utilities like ruby or awk not need the /dev/stdin before the heredoc?
What precisely is going on when one "passes a heredoc as a file"? You aren't. Here-documents provide standard input, like a pipe. Your example

awk '{ ... }' <<EOF
foo bar baz
EOF

is exactly equivalent to

echo foo bar baz | awk '{ ... }'

awk, cat, and ruby all read from standard input if they aren't given a filename to read from on the command line. That is an implementation choice. Why does the first version with ansible-playbook fail but the second version succeed? ansible-playbook does not read from standard input by default, but requires a file path instead. This is a design choice. /dev/stdin is quite likely a symlink to /dev/fd/0, which is a way of talking about the current process's file descriptor #0 (standard input). That's something exposed by your kernel (or system library). The ansible-playbook command opens /dev/stdin like a regular filesystem file and ends up reading its own standard input, which would otherwise have been ignored. You likely also have /dev/stdout and /dev/stderr links to FDs 1 & 2, which you can use as well if you're telling something where to put its output. What is the significance of passing /dev/stdin before the heredoc? It is an argument to the ansible-playbook command. Why do other utilities like ruby or awk not need the /dev/stdin before the heredoc? They read from standard input by default as a design choice, because they are made to be used in pipelines. They write to standard output for the same reason.
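A hedged demonstration with another tool that wants a filename: wc can be pointed at /dev/stdin to read the heredoc (the /dev/stdin path exists on Linux and most modern unices):

wc -l /dev/stdin <<EOF
one
two
EOF
# prints: 2 /dev/stdin

Opening /dev/stdin reads the same bytes that the heredoc attached to standard input.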
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/425276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173557/" ] }
425,297
I want to use files from the World Wide Web as prerequisites in my makefiles:

local.dat: http://example.org/example.gz
	curl -s $< | gzip -d | transmogrify >$@

I only want to "transmogrify" if the remote file is newer than the local file, just like make normally operates. I do not want to keep a cached copy of example.gz - the files are large, and I don't need the raw data. Preferably I would want to avoid downloading the file at all. The goal is to process a few of these in parallel using the -j make flag. What is a clean way to solve this? I can think of a few ways to go:

- Keep an empty dummy file stashed away, updated every time the target is recreated
- Some plugin using GNU make's new plugin system (which I know nothing about)
- A make-agnostic way that mounts HTTP servers in the local filesystem

Before digging further, I would like some advice, preferably specific examples!
Try something like this in your Makefile:

.PHONY: local.dat
local.dat:
	[ -e example.gz ] || touch -d '00:00' example.gz
	curl -z example.gz -s http://example.org/example.gz -o example.gz
	[ -e $@ ] || touch -d 'yesterday 00:00' $@
	if [ "$(shell stat --printf '%Y' example.gz)" \
	     -gt "$(shell stat --printf '%Y' $@)" ] ; then \
		zcat example.gz | transmogrify >$@ ; \
	fi
	truncate -s 0 example.gz
	touch -r $@ example.gz

(Note: this is a Makefile, so the indents are tabs, not spaces, of course. It is also important that there are no spaces after the \ on the continuation lines - alternatively, get rid of the backslash-escapes and make it one long, almost-unreadable line.)

This GNU make recipe first checks that a file called example.gz exists (because we're going to be using it with -z in curl), and creates it with touch if it doesn't. The touch creates it with a timestamp of 00:00 (12am of the current day). Then it uses curl's -z (--time-cond) option to only download example.gz if it has been modified since the last time it was downloaded. -z can be given an actual date expression, or a filename. If given a filename, it will use the modification time of the file as the time condition. After that, if local.dat doesn't exist, it creates it with touch, using a timestamp guaranteed to be older than that of example.gz. This is necessary because local.dat has to exist for the next command to use stat to get its mtime timestamp. Then, if example.gz has a timestamp newer than local.dat, it pipes example.gz into transmogrify and redirects the output to local.dat. Finally, it does the bookkeeping & cleanup stuff:

- it truncates example.gz (because you only need to keep a timestamp, and not the whole file)
- it touches example.gz so that it has the same timestamp as local.dat

The .PHONY target ensures that the local.dat target is always executed, even if the file of that name already exists. Thanks to @Toby Speight for pointing out in the comments that my original version wouldn't work, and why. Alternatively, if you want to pipe the file directly into transmogrify without downloading it to the filesystem first:

.PHONY: local.dat
local.dat:
	[ -e example.gz ] || touch -d '00:00' example.gz
	[ -e $@ ] || touch -d 'yesterday 00:00' $@
	if [ "$(shell stat --printf '%Y' example.gz)" \
	     -gt "$(shell stat --printf '%Y' $@)" ] ; then \
		curl -z example.gz -s http://example.org/example.gz | transmogrify >$@ ; \
	fi
	touch -r $@ example.gz

NOTE: this is mostly untested, so it may require some minor changes to get the syntax exactly right. The important thing here is the method, not a copy-paste cargo-cult solution. I have been using variations of this method (i.e. touch-ing a timestamp file) with make for decades. It works, and usually allows me to avoid having to write my own dependency resolution code in sh (although I've had to do something similar with stat --printf %Y here). Everyone knows make is a great tool for compiling software... IMO it's also a very much under-rated tool for system admin and scripting tasks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145426/" ] }
425,360
This is a simple CLI command to remove a couple of files on a remote machine:

ssh 182.2.34.1 "rm -f /etc/yum.repos.d/repo.1 master.er top.fg REPO.l"

but only the repo.1 file was deleted. What is wrong with my syntax?
master.er, top.fg, and REPO.l are being removed from the current directory (which is probably your home directory), because rm only applies a path to the argument it is attached to. You should provide the full path for each file.
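A hedged sketch of the corrected command, assuming all four files live in /etc/yum.repos.d:

ssh 182.2.34.1 "rm -f /etc/yum.repos.d/repo.1 /etc/yum.repos.d/master.er /etc/yum.repos.d/top.fg /etc/yum.repos.d/REPO.l"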
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
425,373
How can I remove files whose names could be in lower/upper case? For example, the file name could be STOCK.Repo or Stock.REPO or stOCK.repo or stock.repo ... etc. I would run:

rm -f $file_name

The goal is to remove a file such as stock.repo that could be in lower/upper case on a remote machine.
For a Bash-specific solution:

$ shopt -s nocaseglob

and then run the rm command. Note: to unset this option, use shopt -u nocaseglob. For completeness, I would point out an alternative but less elegant solution:

$ rm [sS][tT][oO][cC][kK].[rR][eE][pP][oO]
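A hedged alternative that works without changing shell options, using find's case-insensitive name test (GNU find assumed for -delete):

find . -maxdepth 1 -type f -iname 'stock.repo' -delete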
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
425,394
I need a program to compile Python source code; as I found out, first I need to make a binary file from my Python script. I've already checked a lot of links, but still I haven't found something for Linux. I found py2bin for OS X, but there are no versions for Linux.
In my opinion your problem in Google stems from calling a compiler capable of producing binaries from Python a "disassembler". I have not found a true compiler; however, I have found in Google a Python compiler packager, which packs all the necessary files in a directory, obfuscating them, with an executable frontend: pyinstaller, at http://www.pyinstaller.org/. It appears to be actively supported, as the last version, 3.4, was released on 2018-09-09, contrary to py2bin, which seems to be not actively maintained. Features:

- Packaging of Python programs into standard executables, that work on computers without Python installed.
- Multi-platform, works under: Windows (32-bit and 64-bit), Linux (32-bit and 64-bit), Mac OS X (32-bit and 64-bit), contributed support for FreeBSD, Solaris, HPUX, and AIX.
- Multi-version: supports Python 2.7 and Python 3.3—3.6.

To install:

pip install pyinstaller

Then, go to your program's directory and run:

pyinstaller yourprogram.py

This will generate the bundle in a subdirectory called dist.
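For a single self-contained binary instead of a bundle directory, pyinstaller also documents a --onefile flag; a hedged sketch:

pyinstaller --onefile yourprogram.py
./dist/yourprogram     # the resulting standalone executable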
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425394", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263851/" ] }
425,418
I'm using Linux Mint as my day-to-day OS, but I'm having a hard time with RAM usage, as 4 of 16 GB are used just when idling... Why is this happening? Is it something I forgot to configure? What can I do to lower RAM usage? I only have Skype, Spotify and Discord open.

Resources page: (screenshot)
Processes page: (screenshot)
CPU Usage: (screenshot)

Uptime stats:

dragos@madscientistlab ~ $ uptime
 16:40:10 up 3 days, 3:53, 1 user, load average: 1,95, 1,42, 1,13

free -g command:

dragos@madscientistlab ~ $ free -g
              total        used        free      shared  buff/cache   available
Mem:             15           4           6           0           5          10
Swap:            15           0          15
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266984/" ] }
425,442
I am using the Python code below to reset the environment variable http_proxy in Linux CentOS 6, but it is not unsetting the variable for the rest of the Python script. Code:

import os
print "Unsetting http..."
os.system("unset http_proxy")
os.system("echo $http_proxy")
print "http is reset"

Output:

Unsetting http...
http://web-proxy.xxxx.xxxxxxx.net:8080
http is reset

Process finished with exit code 0
Each invocation of os.system() runs in its own subshell, with its own fresh environment (each echo $$ below prints a different shell PID, followed by os.system()'s return value):

>>> import os
>>> os.system("echo $$")
97678
0
>>> os.system("echo $$")
97679
0

You are unsetting the http_proxy variable, but then your subshell has completed executing the command (to wit: unset), and terminates. You then start a new subshell with a new environment in which to run echo. I believe what you are trying to do is del os.environ['http_proxy'], or os.environ.pop('http_proxy', None) if you want to ensure there is no http_proxy environment variable whether or not one previously existed:

$ export foo=bar
$ python2
Python 2.7.10 (default, Jul 15 2017, 17:16:57)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.environ['foo']
'bar'
>>> del os.environ['foo']
>>> os.system('echo $foo')

0
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425442", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276977/" ] }
425,446
I created a symlink to a directory but it seems that new output files from executing some analyses are also stored in the original directory - is this to be expected? From my understanding, the new output files should not be generated in the original directory. Would appreciate any insight - I'd technically not want the original directory to be modified/added to.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425446", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276979/" ] }
425,481
The echo command doesn't want to interpret backslash escapes (with the -e option attached). For example, I want it to ring a bell with:

echo -e \a

Nothing happens, except it prints:

a

or

\a

How do I turn on interpreting, or how do I fix it?
In

echo -e \a

the \ in front of the a will be stripped off from the argument to echo by the shell before echo is called. It is exactly equivalent to

echo -e 'a'

For echo to receive \a as backslash-followed-by-a, the \ has to be passed as-is to echo. This is done either through

echo -e '\a'

or

echo -e \\a

Whether this actually produces an audible or visible bell may depend on other settings.
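A few variants to try, assuming bash; printf interprets the escape itself, so it needs no option:

echo -e '\a'   # quotes keep the backslash for echo
echo -e \\a    # same: the shell passes \a to echo
printf '\a'    # no -e needed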
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277002/" ] }
425,487
I've read that you need to put the swap partition on HDD rather than SSD. My questions are the following: When and how is the "checking" done by the distribution (or something else) to find its Swap partition? Does it happen during boot? It just checks all available disks and searches for a partition with 'swap' flag? What happens if there are several partitions like that? Also, how many swap partitions do I need to have if I run, for example, two different distributions on the same disk, let's say Fedora and Ubuntu?
Statically configured swap space (the type that pretty much every distribution uses) is configured in /etc/fstab just like filesystems are. A typical entry looks something like: UUID=21618415-7989-46aa-8e49-881efa488132 none swap sw 0 0 You may also see either discard or nofail specified in the flags field (the fourth field). Every such line corresponds to one swap area (it doesn't have to be a partition, you can have swap files, or even entire swap disks). In some really specific cases you might instead have dynamically configured swap space, although this is rather rare because it can cause problematic behavior relating to memory management. In this case, the configuration is handled entirely by a userspace component that creates and enables swap files as needed at run time. As far as how many you need, that's a complicated question to answer, but the number of different Linux distributions you plan to run has zero impact on this unless you want to be able to run one distribution while you have another in hibernation (and you probably don't want to do this, as it's a really easy way to screw up your system). When you go to run the installer for almost any major distribution (including Fedora, OpenSUSE, Linux Mint, Debian, and Ubuntu), it will detect any existing swap partitions on the system, and add those to the configuration for the distribution you're installing (except possibly if you select manual partitioning), and in most cases this will result in the system being configured in a sensible manner. Even aside from that, I would personally suggest avoiding having multiple swap partitions unless you're talking about a server system with lots of disks, and even then you really need to know what you're doing to get set up so that it performs well.
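To see which swap areas the running system has actually picked up, regardless of what /etc/fstab declares, a couple of read-only checks (the --show option assumes a reasonably recent util-linux): swapon --show     # active swap areas, with size, usage and priority
cat /proc/swaps   # the kernel's own table of the same information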
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/425487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186014/" ] }
425,532
For a single .mp3 I can convert it to WAV using sox ./input/filename.mp3 ./output/filename.wav I tried:
#!/bin/bash
for i in $(ls *mp3)
do
 sox -t wav $i waves/$(basename $i)
done
But it throws the following error: sox FAIL formats: can't open input file `filename.mp3': WAVE: RIFF header not found How would I run this sox conversion over all mp3 files in the input folder and save the generated WAVs to the output folder? PS: I don't know why it shows the file enclosed between a backquote ( ` ) and an apostrophe ( ' ): `filename.mp3' I played all the mp3s and they work perfectly fine.
It sounds like you're running into a problem with spaces in the filenames. If you have a file named "My Greatest Hits.mp3", your command will try to convert three different files named "My", "Greatest", and "Hits.mp3". Instead of using the $() syntax, just use *.mp3 in the for line, and make sure to quote the file names in the sox command. (The -t wav in front of the input file is also what produced the "RIFF header not found" error: it tells sox to treat the MP3 as a WAV file. Drop it, and sox will detect the type from the extension.) In addition, the basename command doesn't remove the file extension, just any folder names. So this command will create a bunch of WAV files with a ".mp3" extension. Adding "-s .mp3" to the command tells basename to strip the extension, and then putting ".wav" on the end adds the correct one. Put it all together, and you have this:
for i in *.mp3
do
 sox "$i" "waves/$(basename -s .mp3 "$i").wav"
done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/425532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/257063/" ] }
425,538
I can't understand how this sed script works: echo -e "Line #1\n\n\n\nLine #2\n\n\nLine #3" | sed '1s/^$//p;/./,/^$/!d' It suppresses repeated empty lines like cat -s But I have a couple of questions: For what 1s/^$//p ? As I understand it do nothing with the first line even if it empty Is this /./,/^$/ matches only before first ^$ like Line #1\n\n and not matches Line #1\n\n\n ? Are ranges not greedy by default in sed? To clarify question 3 I tried next tests: echo -e "Line #1\n\n\n\nLine #2\n\n\nLine #3" | sed -n '/#/,/#/p' And result was: Line #1Line #2Line #3 (so, it is greedy) But when I tried: echo -e "Line #1\n\n\n\nLine #2\n\n\nLine #3" | sed -n '/#1/,/#/p' result was: Line #1Line #2 (now it seems to be not greedy)
1s/^$//p prints the first line, if it's empty. /./,/^$/ matches lines from the first non-empty line, to the first empty line encountered. It's not greedy in the sense that a regex qualifier is: sed can't look ahead to the file or backtrack, so it has to stop the first time the ending pattern matches. After the ending line, the search for the beginning line starts again, so the next non-empty line again starts the range. In effect, the range matches contiguous nonempty lines, plus the first following empty one. Since the range is used as /./,/^$/!d , all lines not matching it are deleted. This includes the very first line if it's empty, which is why it's explicitly printed by the first rule. Without the 1s/^$//p rule, the first line would be removed if empty, even though it's not really "repeating". $ echo $'\nfoo' | sed '1s/^$//p;/./,/^$/!d'foo$ echo $'\nfoo' | sed '/./,/^$/!d'foo$ In your test, the range /#/,/#/ is a bit different since it starts and ends with the same pattern. Line #1 matches the beginning pattern, (so the intervening empty lines are printed) Line #2 matches the ending one, (the following empty lines aren't) and on Line #3 , the range begins again. In the other one, the starting pattern is /#1/ , but that's only found once in the input.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425538", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276103/" ] }
425,539
I have a webserver with cPanel, and I tried to install mod-pagespeed. but when use the commands in every tutorial, I get permissions denied and when I use sudo I get bash: sudo commando not found. Somebody know what's happening? I connect with my server by ssh.
bash: sudo: command not found means exactly what it says: the sudo program is not installed on that server (or at least is not in your PATH). It is unrelated to cPanel itself. Many cPanel servers expect you to log in as root directly, in which case you don't need sudo at all; just run the commands from the tutorials without the sudo prefix. If you are logged in as an unprivileged user, the "permission denied" errors are expected. Become root with su - (you need the root password) and, if you want sudo available, install it; on the CentOS/RHEL family that cPanel runs on, that is yum install sudo. If you have no root access at all, you will need to ask the server administrator to install mod_pagespeed for you.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277041/" ] }
425,615
I'm facing a disk-full issue on Linux. When I checked with the df command I found that '/' is at 100%. To check which folders consume the most space I ran cd / followed by du -sh, but the command takes forever to finish. Ultimately I want to find out which immediate subfolders of '/' are consuming the most disk space. Can anyone tell me the command for that?
This command will list the 15 largest in order: du -xhS | sort -h | tail -n15 We use the -x flag to skip directories on separate file systems. The -h on the du gives the output in human readable format, sort -h can then arrange this in order. The -S on the du command means the size of subdirectories is excluded. You can change the number of the tail to see less or more. Super handy command.
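If you specifically want the totals for the immediate subdirectories of /, a variant along these lines may map more directly onto the question (GNU du assumed for --max-depth; errors from unreadable directories are discarded): du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15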
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/425615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270747/" ] }
425,828
I have a list.txt file containing a list of log files. For example
server_log1
server_log2
........
server_log50
I have another shell script used to download these logs. It works like this: ./script.sh serverlog1 I want to automate this, so that each log file name in list.txt is automatically passed to the script. Is that possible? I tried
#!/bin/bash
for i in `cat /home/ec2-user/list.txt` ; do
sh ./workaround.sh $i
done
But it didn't work
The easiest method for reading arguments can be described as follows; Each argument is referenced and parsed by the $IFS or currently defined internal file separator . The default character is a space. For example, take the following; # ./script.sh arg1 arg2 The argument list in that example is arg1 = $1 and arg2 = $2 which can be rewritten as arg1 arg2 = $@ . Another note is the use of a list of logs, how often does that change? My assumption is daily. Why not use the directory output as the array of your iterative loop? For example; for i in $(ls /path/to/logs); do ./workaround.sh $i;done Or better yet, move on to use of functions in bash to eliminate clutter. function process_file(){ # transfer file code/command}function iterate_dir(){ local -a dir=($(ls $1)) for file in ${dir[@]}; do process_file $file done}iterate_dir /path/to/log/for While these are merely suggestions to improve your shell scripting knowledge I must know if there is an error you are getting and would also need to know the details of each scripts code and or functionality. Making the use of the -x argument helps debug scripting as well. If you are simply transferring logs you may wish to do away with the scripts all together and make use of rsync , rsyslog or syslog as they all are much more suited for the task in question.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138782/" ] }
425,834
So I am trying to start a service on systemd enabled system. Name of service is ossec-hids-authd which is the authentication engine(agents) in ossec(Intrusion Detection Software). When I go and start the init script then systemctl times out and on getting the status I am seeing this error. /etc/init.d/ossec-hids-authd status● ossec-hids-authd.service - LSB: Authentication Daemon for OSSEC-HIDS. Loaded: loaded (/etc/rc.d/init.d/ossec-hids-authd; bad; vendor preset: disabled) Active: failed (Result: timeout) since Thu 2018-02-22 07:34:28 UTC; 11min ago Docs: man:systemd-sysv-generator(8)Feb 22 07:24:11 ip-10-0-197-117.ec2.internal systemd[1]: Starting LSB: Authentication Daemon for OSSEC-HIDS....Feb 22 07:24:11 ip-10-0-197-117.ec2.internal ossec-hids-authd[21142]: [39B blob data]Feb 22 07:24:11 ip-10-0-197-117.ec2.internal systemd[1]: PID file /var/run/ossec-authd.pid not readable (yet?) after start.Feb 22 07:24:11 ip-10-0-197-117.ec2.internal ossec-hids-authd[21142]: 2018/02/22 07:24:11 ossec-authd: INFO: Started (pid: 21148).Feb 22 07:34:28 ip-10-0-197-117.ec2.internal systemd[1]: ossec-hids-authd.service start operation timed out. Terminating.Feb 22 07:34:28 ip-10-0-197-117.ec2.internal systemd[1]: Failed to start LSB: Authentication Daemon for OSSEC-HIDS..Feb 22 07:34:28 ip-10-0-197-117.ec2.internal systemd[1]: Unit ossec-hids-authd.service entered failed state.Feb 22 07:34:28 ip-10-0-197-117.ec2.internal systemd[1]: ossec-hids-authd.service failed.Feb 22 07:40:20 ip-10-0-197-117.ec2.internal ossec-hids-authd[21142]: 2018/02/22 07:40:20 ossec-authd(1225): INFO: SIGNAL [(15)-(Terminated)] Received. Exit Cleaning... Now in the init script this process is actually making pid file in /var/ossec/var/run instead of /var/run and I checked pid file is actually created there. But somehow systemctl is failing to recognize it. Is it possible that systemd does not recognize pid files created outside of /var/run and if such is the case how to do that? Below is the init script #!/bin/sh## ossec-authd Start the OSSEC-HIDS Authentication Daemon## chkconfig: 2345 99 01# description: Provides key signing for OSSEC Clients# processname: ossec-authd# config: /var/ossec/etc/ossec.conf# pidfile: /var/run/ossec-authd.pid### BEGIN INIT INFO# Provides: ossec-authd# Required-Start: $network $local_fs $remote_fs# Required-Stop: $network $local_fs $remote_fs# Default-Start: 2 3 4 5# Default-Stop: 0 1 6# Short-Description: Authentication Daemon for OSSEC-HIDS.# Description: Provides key signing for OSSEC Clients### END INIT INFO# Author: Brad Lhotsky <[email protected]>NAME=ossec-authdDAEMON=/var/ossec/bin/ossec-authdDAEMON_ARGS="-p 1515 2>&1 >> /var/ossec/logs/ossec-authd.log &"PIDDIR=/var/ossec/var/runSCRIPTNAME=/etc/init.d/ossec-authd. /etc/rc.d/init.d/functionsgetpid() { for filename in $PIDDIR/${NAME}*.pid; do pidfile=$(basename $filename) pid=$(echo $pidfile |cut -d\- -f 3 |cut -d\. -f 1) kill -0 $pid &> /dev/null RETVAL=$? if [ $RETVAL -eq 0 ]; then PIDFILE=$filename PID=$pid else rm -f $filename fi; done;}start() { echo -n $"Starting $NAME: " daemon $DAEMON $DAEMON_ARGS retval=$? if [ $retval -eq 0 ]; then echo_success echo else echo_failure echo fi return $retval}stop() { echo -n $"Stopping $NAME: " getpid killproc -p $PIDFILE $NAME retval=$? echo return $retval}restart() { stop start}case "$1" in start) start ;; stop) stop ;; status) getpid if [ -z $PIDFILE ]; then status $NAME else status -p $PIDFILE $NAME fi; ;; restart) restart ;; *) echo "Usage: $0 {start|stop|status}" exit 2 ;;esacexit $?
systemd parses an init script's comments to generate temporary .service file at boot or upon daemon-reload command. Change the line # pidfile: /var/run/ossec-authd.pid to # pidfile: /var/ossec/var/run/ossec-authd.pid and run systemctl daemon-reload UPD: now I see that pid file name is generated by authd at runtime and init script has to search for $PIDDIR/${NAME}*.pid. Systemd can not search for pidfile, but can work without it. Sou you may try to remove # pidfile: line completely, or write your own .service file
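If you do end up writing your own unit, here is a minimal sketch of what it could look like. The paths and the port are copied from the init script above, but treat this as an untested starting point rather than a verified drop-in: cat > /etc/systemd/system/ossec-authd.service <<'EOF'
[Unit]
Description=Authentication daemon for OSSEC-HIDS
After=network.target

[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-authd -p 1515
# Deliberately no PIDFile= line: with Type=forking and no PIDFile,
# systemd tries to guess the main process itself, avoiding the stale-path problem.

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start ossec-authd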
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114331/" ] }
425,885
When doing a tail -f error.log, how do I programmatically insert a line break after nothing has been appended to the file for 3 seconds? (Obviously, once one line break has been added, no other line break should be added until further line(s) of text are added to the log file.) For instance, these lines are appended to error.log:
foo
bar
boo [[wait 4 seconds]]
2far
2foo
2bar
2boo [[wait 40 seconds]]
2far
This would be the output in the console:
foo
bar
boo

2far
2foo
2bar
2boo

2far
You could always implement the tail -f (well here, unless you uncomment the seek() , more like tail -n +1 -f as we're dumping the whole file) by hand with perl for instance: perl -e ' $| = 1; # seek STDIN, 0, 2; # uncomment if you want to skip the text that is # already there. Or if using the ksh93 shell, add # a <((EOF)) after < your-file while (1) { if ($_ = <STDIN>) { print; $t = 0 } else { print "\n" if $t == 3; # and a line of "-"s after 10 seconds: print "-" x 72 . "\n" if $t == 10; sleep 1; $t++; } }' < your-file Or let tail -f do the tailing and use perl to insert the newlines if there's no input for 3 seconds: tail -f file | perl -pe 'BEGIN{$SIG{ALRM} = sub {print "\n"}} alarm 3' Those assume that the output itself is not slowed down (like when the output goes to a pipe that is not actively read).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/425885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251121/" ] }
425,907
Upon opening a PDF in evince and then making a change to that document (recompiling it in LaTeX), evince will automatically refresh to the latest version of the document. mupdf, however, does not do this: it keeps showing the version I originally opened. The latest version can be loaded with the r command, but is there a way to make mupdf behave like evince in that respect? The manual doesn't mention this.
Poke mupdf with a HUP signal after the document changes (e.g. after recompiling it, or use entr or something to note the filesystem change) pkill -HUP mupdf or with more complication one might write an open-or-signal- mupdf script .
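If entr is installed, wiring that signal to filesystem changes is a one-liner. A sketch, assuming the document is called out.pdf: ls out.pdf | entr -p pkill -HUP mupdf entr re-runs the pkill every time out.pdf changes; -p postpones the first run until the first actual change.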
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/425907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246902/" ] }
425,924
If you have (in Linux) these two routes: default via 192.168.1.1 dev enp58s0f1default via 192.168.16.1 dev wlp59s0 proto static metric 600 I would expect that the first one is used, but that's not the case: the second one is used instead. If I change that to this: default via 192.168.1.1 dev enp58s0f1 proto static metric 100 default via 192.168.16.1 dev wlp59s0 proto static metric 600 Then it works as expected. It seems that "no metric" is a worse (higher) metric than any number, instead of metric 0. What is this happening? Is it specific to Linux, or a networking standard? Thanks in advance.
Are you sure about your first observation? What does ip route show or route -n show then? Does the result change if you add proto static in first case? I have found at least two resources that explicitely says that 0 is the default value in Linux: http://0pointer.de/lennart/projects/ifmetric/ : The default metric for a route in the Linux kernel is 0, meaning the highest priority. http://www.man7.org/linux/man-pages/man8/route.8.html : If this option is not specified the metric for inet6 (IPv6) address family defaults to '1', for inet (IPv4) it defaults to '0'. (it then hints that the default may be different when using iproute2 but analysis of these sources do not show what it is) A Linux kernel hacker would surely be needed to sort that out. Also whatever default is chosen is clearly OS specific.This article ( https://support.microsoft.com/en-us/help/299540/an-explanation-of-the-automatic-metric-feature-for-ipv4-routes ) for example shows that Windows choose the default metric based on the bandwidth of the link.
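In practice, the unambiguous fix is to state the metrics explicitly when adding the routes, so you never depend on whatever default the tooling picks. A sketch with the addresses from the question: ip route add default via 192.168.1.1 dev enp58s0f1 metric 100
ip route add default via 192.168.16.1 dev wlp59s0 metric 600
ip route show default    # the route with the lower metric is preferred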
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3686/" ] }
425,945
The system I am working with uses ssh to remotely connect to a Linux machine. It then executes a single shell command and analyses the output from the shell command. If I run reboot , I get exit code -1 , since rebooting of course kills the ssh connection. Any exit code other than 0 makes the system register a failure, thus I have been trying to write a single line command that will reboot and exit the ssh session gracefully. The machines in question are very bare bones and the reboot utility does not allow any options so I can't just schedule a reboot for later. After some thought I tried running $ sleep 3 && reboot & exit Which works when I call it manually: the connection closes with error code 0 and 3 seconds later the machine reboots. Great. But the same command run through our system doesn't actually reboot. It just returns exit code 0 and the reboot never happens. Why would this be?
Use the shutdown command. shutdown --reboot +1 "System is going down for reboot in 1 minute" I suspect the reason reboot doesn't work is because it requires a tty. You could try running it with a background tty, but the shutdown command has everything you need, including cancelling -- as it says in response: Shutdown scheduled for Thu 2018-02-22 15:19:33 MST, use 'shutdown -c' to cancel.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277357/" ] }
425,948
I wanted to ssh to multiple servers remotely and check whether any processes running on those servers and wait until the process to get finished. I have written the below code but this checks only for the first ip in the file(ip.txt) since I added 'continue' statement. I need to modify this code. while read IP do ssh ubuntu@$IP "pgrep -f pattern" if [ $? -eq 0 ]; then echo "Process is running" sleep 10 continue else echo "Process is not running" fi done < ip.txt
There are two separate problems here. First, ssh reads from its standard input, so inside a while read loop it swallows the rest of ip.txt; that is why only the first IP is ever processed (the continue is not the culprit). Pass -n to ssh (or redirect its stdin from /dev/null) to prevent that. Second, continue only moves on to the next line of the file; to wait until a process finishes you need an inner loop that keeps polling the same host. Putting both fixes together:
while read -r IP; do
 while ssh -n "ubuntu@$IP" 'pgrep -f pattern' >/dev/null; do
 echo "Process is still running on $IP"
 sleep 10
 done
 echo "Process finished on $IP"
done < ip.txt
The inner while loop repeats as long as pgrep finds a match (exit status 0) and falls through to the next server once it no longer does.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/425948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252845/" ] }
426,187
I frequently (all day every day) have a minicom terminal tab open and execute commands on an embedded Linux system from my Ubuntu laptop. Sometimes I have to execute the reboot command, and sometimes, I am ashamed to admit, I accidentally execute reboot in the wrong tab and my laptop does exactly what it was designed to do without asking any questions... I am using Ubuntu 16.04 LTS and tried installing molly-guard but that has had no effect. 99 times out of 100 I don't mess up but my laptop takes a good 10 minutes to reboot and I execute reboot frequently enough for this to be an annoyance. Is there some bit of black magic, I can add to my custom terminal window setup bash script, that will make reboot map to something else (just for that bash session preferably)?
In the ~/.bashrc file on your laptop ( not on the embedded machine), add the line: reboot() { echo "Hey, don't do that!"; } If you actually wanted to run reboot on the laptop, you can get around this function by running sudo reboot or /sbin/reboot . Or, you could make it more friendly, as man0v suggested, by using: reboot () { echo 'Reboot? (y/n)' && read x && [[ "$x" == "y" ]] && /sbin/reboot; } I suggest putting such a function in ~/.bashrc because we want it available in interactive bash sessions. Alternative One may also want to consider the package molly-guard which is designed to protect machines from accidental shutdowns or reboots. It is available on debian and can be installed via: apt-get install molly-guard
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277552/" ] }
426,188
I am running here a couple of Debian 9.3 VMs with the kernel 4.9.0-5-amd64 in VMware Fusion 10.1.1 (in High Sierra). What concerns me is that the total memory in Free does not reflect correctly the memory I am setting aside for several VMs. As a test, I have set aside exactly 2GB for the VM and yet, free -m only shows 1986MB: $ free -m total used free shared buff/cache availableMem: 1986 51 1864 1 70 1825 or without the -m : $ free total used free shared buff/cache availableMem: 2033760 52264 1909584 1108 71912 1869628Swap: 999420 0 999420 Debugging the situation I also looked to /proc/meminfo and found this: $ egrep "MemTotal|DirectMap2M" /proc/meminfoMemTotal: 2033760 kBDirectMap2M: 2054144 kB So actually DirectMap2M reflects the 2GB; why has MemTotal aprox. less 20MB then? Interestingly enough, while Googling for MemTotal/DirectMap2M found this article: Memory / Ram is wrong If your operating system is showing the wrong RAM allocation via the free –m or top command please be assured that you really do have the correct amount of RAM allocated to your VPS. It is simply the reporting that is wrong, this is due to us using the latest Xen 4.x.x Versions for performance enhancements which unfortunately can cause this anomaly, more so on 32bit OS templates than others. Also please keep in mind that more modern kernels do not present some of the kernel reserved memory to the OS. So what is happening here? Is it the kernel reserving memory as they suggest? For what end? (yes, they are talking about Xen and this is about VmWare Fusion, yet it might be a clue) For complementing: vmstat output: $ vmstatprocs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 1909344 13988 57928 0 0 31 0 19 37 0 0 100 0 0 top output: $ top -b -n 1 | grep MemKiB Mem : 2033760 total, 1906228 free, 52836 used, 74696 buff/cacheKiB Swap: 999420 total, 999420 free, 0 used. 1867664 avail Mem dmidecode output (partial); Handle 0x0085, DMI type 6, 12 bytesMemory Module Information Socket Designation: RAM socket #0 Bank Connections: None Current Speed: Unknown Type: EDO DIMM Installed Size: 2048 MB (Single-bank Connection) Enabled Size: 2048 MB (Single-bank Connection) Error Status: OK Output of dmesg : $ sudo dmesg | egrep "Memory|Free|ACPI" | egrep -v "edge|wakeup|noapic|Added|Bug|IRQ|pnp|Plug"[ 0.000000] BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data[ 0.000000] BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS[ 0.000000] ACPI: Early table checksum verification disabled[ 0.000000] ACPI: RSDP 0x00000000000F6A10 000024 (v02 PTLTD )[ 0.000000] ACPI: XSDT 0x000000007FEEB683 00005C (v01 INTEL 440BX 06040000 VMW 01324272)[ 0.000000] ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)[ 0.000000] ACPI: DSDT 0x000000007FEEC923 012550 (v01 PTLTD Custom 06040000 MSFT 03000001)[ 0.000000] ACPI: FACS 0x000000007FEFFFC0 000040[ 0.000000] ACPI: FACS 0x000000007FEFFFC0 000040[ 0.000000] ACPI: BOOT 0x000000007FEEC8FB 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)[ 0.000000] ACPI: APIC 0x000000007FEEC1B9 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000)[ 0.000000] ACPI: MCFG 0x000000007FEEC17D 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)[ 0.000000] ACPI: SRAT 0x000000007FEEB77F 000880 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)[ 0.000000] ACPI: HPET 0x000000007FEEB747 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)[ 0.000000] ACPI: WAET 0x000000007FEEB71F 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)[ 0.000000] ACPI: Local APIC address 0xfee00000[ 0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff][ 0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff][ 0.000000] ACPI: PM-Timer IO Port: 0x1008[ 0.000000] ACPI: Local APIC address 0xfee00000[ 0.000000] Using ACPI for processor (LAPIC) configuration information[ 0.000000] ACPI: HPET id: 0x8086af01 base: 0xfed00000[ 0.000000] Memory: 2011544K/2096628K available (6196K kernel code, 1159K rwdata, 2848K rodata, 1408K init, 688K bss, 85084K reserved, 0K cma-reserved)[ 0.005465] ACPI: 1 ACPI AML tables successfully acquired and loaded[ 0.005475] ACPI: setting ELCR to 0200 (from 0e80)[ 0.044362] x86/mm: Memory block size: 128MB[ 0.046853] PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)[ 0.170037] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])[ 0.178855] pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI[ 0.282772] ACPI: Enabled 2 GPEs in block 00 to 0F[ 0.995687] Freeing initrd memory: 17572K[ 1.085789] Freeing unused kernel memory: 1408K[ 1.085832] Freeing unused kernel memory: 1980K[ 1.085873] Freeing unused kernel memory: 1248K Full /proc/meminfo output: $ cat /proc/meminfo MemTotal: 2033760 kBMemFree: 1906864 kBMemAvailable: 1868300 kBBuffers: 14720 kBCached: 50220 kBSwapCached: 0 kBActive: 45004 kBInactive: 28204 kBActive(anon): 8308 kBInactive(anon): 1064 kBActive(file): 36696 kBInactive(file): 27140 kBUnevictable: 0 kBMlocked: 0 kBSwapTotal: 999420 kBSwapFree: 999420 kBDirty: 0 kBWriteback: 0 kBAnonPages: 8284 kBMapped: 14860 kBShmem: 1108 kBSlab: 23996 kBSReclaimable: 9756 kBSUnreclaim: 14240 kBKernelStack: 3052 kBPageTables: 1348 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 2016300 kBCommitted_AS: 38852 kBVmallocTotal: 34359738367 kBVmallocUsed: 0 kBVmallocChunk: 0 kBHardwareCorrupted: 0 kBAnonHugePages: 0 kBShmemHugePages: 0 kBShmemPmdMapped: 0 kBHugePages_Total: 0HugePages_Free: 0HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBDirectMap4k: 42880 kBDirectMap2M: 2054144 kBDirectMap1G: 0 kB
free , /proc/meminfo etc. only show the memory actually available to user space; the kernel sets aside some memory for its own use. If you look for a Memory: line in your boot logs ( /var/log/dmesg.0 or some such, or journalctl ), you’ll see something like Memory: 32818828K/33439808K available (5612K kernel code, 1083K rwdata, 1896K rodata, 1264K init, 832K bss, 620980K reserved, 0K cma-reserved) The amount of memory available after boot will typically be slightly larger than the amount indicated here, because some of the memory used for initialisation is returned to the system, and the amount of reserved memory can change ( e.g. if it’s reserved for an integrated GPU); in my case, MemTotal shows 32062 MiB instead of the 32049 MiB given above. In your case, only 62MiB are reserved (2048 – 1986); that is quite sufficient to cover the kernel code and data, plus some reserved memory. The boot logs will also include details of the system’s memory map, which should account for most of the reserved memory (it’s reserved for the firmware, ACPI etc., even in a VM). MemTotal never corresponds to the amount of installed physical memory, or allocated memory for a VM, and that’s perfectly normal .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
426,214
For example:
$ gcc -Wall abc.c
$ ./a.out <font name="Moronicity" size=12><!-- ignore this comment --><i></i><div style="aa">hello</div></font><img src="spacer.gif"><div style="bb"><img src="spacer.gif"></div>
-bash: syntax error near unexpected token `<'
I keep getting this error.
You could use escape characters before each special character ( < , [ , > , ] ), but that'd be quite cumbersome in this case. Instead, you can simply surround the entire argument with single quotes as follows: $ ./a.out '<font name="Moronicity" size=12><!-- ignore this comment --><i></i><div style="aa">hello</div></font><img src="spacer.gif"><div style="bb"><img src="spacer.gif"></div>' Another option is to place the parameter string <font name="Moronicity" size=12><!-- ignore this comment --><i></i><div style="aa">hello</div></font><img src="spacer.gif"><div style="bb"><img src="spacer.gif"></div> into a file (for example, params ). This allows calling of your function in combination with the cat command, which outputs the contents of a file: $ ./a.out "$(cat params)" Note that the $() is used to execute the cat params command, and the double quotations are used to include the entirety of the file as the parameter to a.out . With the combination of the two, we can pass the contents of the file into the parameters of your program.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251369/" ] }
426,226
I'm new to bash and I need to do a little script to sum all file sizes, excluding subdirectories. My first idea was to keep the columns when you do ls -l . I cannot use grep, du or other advanced commands I've seen around here. $9 corresponds to the 9th column where the name is shown. $5 is the size of the file. ls -l | awk '{if(-f $9) { total +=$5 } }; END { print total }
With GNU find and awk: find . -maxdepth 1 -type f -printf "%s\n" | awk '{sum+=$1} END{print sum+0}' Output is file size in bytes. The final statement is print sum+0 rather than just print sum to handle the case where there are no files(i.e., to correctly print 0 in that case). This is an alternative to doing BEGIN {sum=0} .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277583/" ] }
426,320
I have a rsync command with following parameters: rsync -avz --{partial,stats,delete,exclude=".*"} I want to put that parameters inside a variable to reuse it after in the script. Something like this: #!/bin/bashVAR=rsync -avz --{partial,stats,delete,exclude=".*"}$VAR /dir1 /dir2 I've tried with quotes, single quotes, brackets, without any success.
Putting a complex command in a variable is a never a recommended approach. See BashFAQ/050 - I'm trying to put a command in a variable, but the complex cases always fail! Your requirement becomes really simple, if you just decide to use a function instead of a variable and pass arguments to it. Something like rsync_custom() { [ "$#" -eq 0 ] && { printf 'no arguments supplied' >&2; exit 1 ; } rsync -avz --{partial,stats,delete,exclude=".*"} "$@"} and now pass the required arguments to it as rsync_custom /dir1 /dir2 The function definition is quite simple in a way, we first check the input argument count using the variable $# which shouldn't be zero. We throw a error message saying that no arguments are supplied. If there are valid arguments, then "$@" represents the actual arguments supplied to the function. If this is a function you would be using pretty frequently i.e. in scripts/command-line also, add it to the shell startup-files, .bashrc , .bash_profile for instance. Or as noted, it may be worthwhile to expand the brace expansion to separate args for a better readability as rsync_custom() { [ "$#" -eq 0 ] && { printf 'no arguments supplied' >&2; exit 1 ; } rsync -avz --partial --stats --delete --exclude=".*" "$@"}
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274823/" ] }
426,358
How to match a leading space with sed (all of them)? I'm not talking about leading tabs, but rather only on leading spaces. From a small test I did in Nano this seems to be correct: sed "s/^ //g" Do you find something wrong with this method? Note: "All of them" means all leading spaces in the document, in case there are 2 or more, and not just one.
Remove leading spaces: sed "s/^ *//" Remove leading whitespace: sed "s/^[[:space:]]*//" Remove leading spaces and tabs: sed "s/^[ \t]*//" (works in GNU sed) or sed 's/^[[:blank:]]*//' (works with any sed ) or sed $'s/^[ \t]*//' (in ksh/Bash/etc. to give a literal tab to sed ) As said in the comments, the /g specifier does nothing, as the beginning of line appears only once in the line, and even /g does not retry the pattern more than one. You'd need to add a conditional branch explicitly to repeat the substitution: sed -e :a -e 's/^ //' -e ta ^ * matches the empty string (no spaces) too, but that doesn't matter here. If you want to match lines that have at least one space, use ^ * (double space) or ^ + in extended regex. E.g. to change all indentations to exactly two spaces, use sed -e 's/^ */ /' or sed -Ee 's/^ +/ /' ( -E is supported in e.g. GNU and FreeBSD)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
426,371
Is there a way to put a process on some sort of "black list" in Linux?
The normal way would be to change the configuration of the monitor program so that it doesn't keep doing that thing you don't want it to do. I'm going to assume you can't do that for some reason, but anything else is a workaround that won't work in all circumstances. You can't blacklist a process : a process is a runtime entity. The process doesn't exist until it's started. Once it's started, it's too late to prevent it from starting. And how would you identify the process that shouldn't have started, anyway? You can blacklist a program , or more precisely, a particular installation of a program. All programs are started from an executable file. So if you arrange for the executable file not to exist, it won't start. You could remove it, rename it, or even just make it not executable: chmod a-x /path/to/program If you don't want or can't modify the filesystem for some reason, but have root access, you could even use a security framework such as SELinux or AppArmor to forbid the monitor from executing this particular program. But that's more complicated. However, if a monitor keeps trying to respawn that program, it may or may not cope sensibly if the executable disappears. It may spam you (or some log files with error messages). Assuming that the monitor only keeps the program alive (as opposed to checking the program functionality, e.g. a monitor for a web server process might periodically try to access a web page and restart the server if it isn't responding), you could replace the program by a program that does nothing but block forever. There's no program that does this in the basic utility collection, but you can write one easily: #!/bin/shwhile sleep 999999999; do :; done Depending on why you want to block that program, you may or may not be able to achieve a similar result by suspending the process of the original program, with pkill -STOP programname or kill -STOP 1234 where 1234 is the process ID. This keeps the process around, but doing nothing until explicitly resumed (with kill -CONT ). The process won't consume any CPU time, and its memory will get swapped out when the system requires RAM for other things, but it does keep consuming resources such as open files.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277690/" ] }
426,458
I got the following error: ./assemblyDB.116.lastest.sh: line 9: ${ls $filename | sed 's/assemblyDB.//' | sed 's/.las//'}: bad substitution and this is the script: for filename in $(find . -type f -name "assemblyDB.*.las"); do echo $filename no=${ls $filename | sed 's/assemblyDB.//' | sed 's/.las//'} echo $nodone
${ ... } (curly braces) marks several sorts of parameter expansion , the simplest of which is just expanding the value of a variable. The stuff inside braces in your code isn't a valid parameter name, or any other expansion, so the shell complains. You seem to want command substitution instead, for that, the syntax is $( ... ) (regular parenthesis). Also, the ls in ls $filename | sed... seems a bit unnecessary, the variable expands to your filename, and ls just passes it through. You could just use echo "$filename" | sed ... instead. That said, you could do those modifications directly in the shell: no="${filename/assemblyDB.}" # remove first matchno="${no/.las}" or, using the standard operators: no="${filename#assemblyDB.}" # remove from start of stringno="${no%.las}" # remove from end of string If you do run sed , you may want to note that . matches any character in regular expressions, so it would be more correct to quote it with a backslash. Also you can give one sed instance both commands: sed -e 's/assemblyDB\.//' -e 's/\.las//' . And then for filename in $(find . -type f -name "assemblyDB.*.las"); do has all the issues parsing ls has, mostly the fact that whitespace and wildcards in file names will break it. In ksh/Bash/zsh, you could do that whole loop in the shell: shopt -s globstar # in Bashfor filename in **/assemblyDB.*.las; do ...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34872/" ] }
426,483
In option string when using getopts , from http://wiki.bash-hackers.org/howto/getopts_tutorial If the very first character of the option-string is a : (colon), which would normally be nonsense because there's no option letter preceding it, getopts switches to " silent error reporting mode" . In productive scripts, this is usually what you want because it allows you to handle errors yourself without being disturbed by annoying messages. I was wondering what the followings mean: "silent error reporting mode" "it allows you to handle errors yourself without being disturbed by annoying messages"? Could you maybe give some examples?
If the very first character of optstring is a colon, getopts will not produce any diagnostic messages for missing option arguments or invalid options. This could be useful if you really need to have more control over the diagnostic messages produced by your script or if you simply don't want anything to appear on the standard error stream if the user provides wonky command line options. In silent reporting mode (with the initial : ), if you want to alert the user of an invalid option, you will have to look for ? in the variable passed to getopts . Likewise, for missing option arguments, it's a : . These are the two errors usually handled by getopts itself, but to do your own error reporting to the user, you will need to catch these separately to be able to give the correct diagnostic message. In non-silent reporting mode, getopts does its own error reporting on standard error and you just have to catch a * for "any error". Compare these two examples: #!/bin/bashwhile getopts 'a:b:' opt; do case "$opt" in a) printf 'Got a: "%s"\n' "$OPTARG" ;; b) printf 'Got b: "%s"\n' "$OPTARG" ;; *) echo 'some kind of error' >&2 exit 1 esacdone The * case catches any kind of command line parsing error. $ bash script.sh -ascript.sh: option requires an argument -- asome kind of error$ bash script.sh -cscript.sh: illegal option -- csome kind of error #!/bin/bashwhile getopts ':a:b:' opt; do case "$opt" in a) printf 'Got a: "%s"\n' "$OPTARG" ;; b) printf 'Got b: "%s"\n' "$OPTARG" ;; :) echo 'missing argument!' >&2 exit 1 ;; \?) echo 'invalid option!' >&2 exit 1 esacdone The : case above catches missing argument errors, while the ? case catches invalid option errors (note that ? needs to be escaped or quoted to match a literal ? as it otherwise matches any single character). $ bash script.sh -amissing argument!$ bash script.sh -bmissing argument!$ bash script.sh -cinvalid option!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
426,505
I have the following file:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig> <dir>/usr/local/texlive/2017/texmf-dist/fonts/opentype</dir> <dir>/usr/local/texlive/2017/texmf-dist/fonts/truetype</dir> <dir>/usr/local/texlive/2017/texmf-dist/fonts/type1</dir></fontconfig>
and I have to add the following lines: <dir>/usr/local/texlive/texmf-local</dir> <dir>/usr/local/share/fonts</dir> before the closing tag </fontconfig>. I'm not sure that it's always on the 7th line, so I must look for it as a string. I have some trouble with the <, > and / characters in these strings... How can I solve this with sed? Thanks
Don't use sed , awk and alike for parsing XML/HMTL data - it'll never come to robust and scalable result. Use a proper XML/HTML processors. The right way with xmlstarlet tool: xmlstarlet ed -s '//fontconfig' -t elem -n 'dir' -v '/usr/local/texlive/texmf-local' \-s '//fontconfig' -t elem -n 'dir' -v '/usr/local/share/fonts' input.xml The output: <?xml version="1.0"?><!DOCTYPE fontconfig SYSTEM "fonts.dtd"><fontconfig> <dir>/usr/local/texlive/2017/texmf-dist/fonts/opentype</dir> <dir>/usr/local/texlive/2017/texmf-dist/fonts/truetype</dir> <dir>/usr/local/texlive/2017/texmf-dist/fonts/type1</dir> <dir>/usr/local/texlive/texmf-local</dir> <dir>/usr/local/share/fonts</dir></fontconfig> To modify/edit the file in-place - add -L option: xmlstarlet ed -L .... For more details type: xmlstarlet ed --help
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277770/" ] }
426,526
Whenever I open an image in feh, the background is set to the standard, dark gray and gray checkboard pattern like this: As you can see, it's the checkboard background. How do I permanently change this to black? I've search Google and other places, but I can't seem to find a straight answer. I'm guessing feh's config file is involved, but I can't find any examples of how to do it in the config file. I know you can do it in the command line with --bg-color black (or something) but I'd like to just have it set to black by default.
It seems that you cannot put your desired default options in a config file. If you know about $PATH you can resort to a hack. Create this script: #!/bin/shfeh --bg-color black "$@" Call it feh and place it in your $PATH before /usr/bin/ (assuming that feh itself is in /usr/bin/ ). Some distros have ~/bin/ in $PATH by default. So you would put that script into ~/bin/ (and make it executable). Otherwise just create this folder yourself and prepend it to your $PATH . Also, if you want to set multiple default options, you can group them into themes. (Theme is the feh developer's name for a named group of options.) Create ~/.config/feh/themes and add this line to that file: default --bg-color black feh -Tdefault will then start feh with your desired default options. This is handy if you want to set multiple options at once. Unfortunately there is no way to set a default theme either. So, in your case it doesn't help. But you can fallback to the same hack as above: #!/bin/shfeh -Tdefault "$@" Alternative: If you are just going to call feh manually from the commandline, you can instead set an alias in your shell. In bash you would add this line to your ~/.bashrc and restart the interpreter (e.g. re-open the terminal): alias feh="feh --bg-color black" In fish shell you would run: abbr -a feh feh --bg-color black
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/248942/" ] }
426,527
Let's look at the following example:
[user@user~]$ sudo docker run -d -e XYZ=123 ubuntu sleep 10000
2543e7235fa9
[user@user~]$ sudo docker exec 2543e7235fa9 echo test
test
[user@user~]$ sudo docker exec 2543e7235fa9 echo $XYZ
<empty row>
[user@user~]$ sudo docker exec -it 2543e7235fa9 bash
root@2543e7235fa9:/# echo $XYZ
123
Why did I get <empty row> instead of 123? And why, after exec-ing into bash, am I able to see XYZ=123?
The $XYZ in sudo docker exec 2543e7235fa9 echo $XYZ is expanded by your local shell before docker is even started. XYZ is not set on the host, so the container just runs echo with no argument, hence the empty row. The variable does exist inside the container; you simply never asked a shell inside the container to expand it. Do the expansion there instead: sudo docker exec 2543e7235fa9 sh -c 'echo $XYZ' The single quotes stop the local shell from touching $XYZ, and the container's sh expands it to 123. The interactive case works for the same reason: when you type echo $XYZ at the bash prompt inside the container, it is the container's shell doing the expansion.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266351/" ] }
426,536
I am trying to download a .txt file using terminal. Here is the link (the download starts immediately you open the link): https://es.osdn.net/projects/sfnet_kaldi/downloads/wsj0-train-spkrinfo.txt I tried to download it with the command wget "https://es.osdn.net/projects/sfnet_kaldi/downloads/wsj0-train-spkrinfo.txt" Unfortunately the only thing I got after running that command was the page source :( Could someone tell me what the correct way to download this file from Terminal is? Thanks in advance!
If you follow their HTML, they do some tricks to hide the actual source of contents. The file you want to download, can be downloaded from the source where they are getting it with the command: wget http://jaist.dl.sourceforge.net/project/kaldi/wsj0-train-spkrinfo.txt So, it is not you are lacking in Unix knowledge, at the end of the day, they are just being obtuse on purpose. You can also access all the files of this project (kaldi) in their main page at https://sourceforge.net/projects/kaldi/files/ (older version, pointed by your original link) And going there, you see there is a newer version at https://github.com/kaldi-asr/kaldi
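The same direct download works with curl, letting it follow any redirects the mirror network may issue: curl -L -O http://jaist.dl.sourceforge.net/project/kaldi/wsj0-train-spkrinfo.txt Here -L follows redirects and -O keeps the remote file name.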
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272793/" ] }
426,542
I've created the following port forwarding: ssh -vL localhost:4433:example.com:443 remote-linux-host Note: I'm using 4433 on my local instead of 443 to avoid running with sudo . however when I go to https://localhost:4433/ , after ignoring the certificate check, there is the following error (on both Chrome and Firefox): 404 - Not Found Same when using curl : $ curl -s https://localhost:4433/ | html2text<?xml version="1.0" encoding="iso-8859-1"?>****** 404 - Not Found ****** How do I make the port forwarding for HTTPS? My aim is to open https://example.com/ (which works fine over HTTPS) on my local (port 4433) via remote server.
Let's suppose that example.com is 93.184.216.34. One of the methods could be as follows. Do: ssh -L 4433:93.184.216.34:443 remote-linux-host Define in your local /etc/hosts on the machine running the browser: 127.0.0.1 example.com And then open in the browser: https://example.com:4433/ In this way, it will send the correct Host header. Beware of browser/resolver DNS caches after creating the hosts entry.
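If you'd rather not touch /etc/hosts, curl can pin the resolution for a single request, which is convenient for testing the tunnel: curl --resolve example.com:4433:127.0.0.1 https://example.com:4433/ This sends the correct Host header (and TLS SNI) for example.com while actually connecting to the local forwarded port.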
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21471/" ] }
426,568
for filename in *
do
 if [ "$filename" -ne even ] && [ "$filename" -ne odd ]
 then
 echo "$filename"
 fi
done
Above is a simple shell script that checks the files in the current directory and outputs the files whose names are not "even" or "odd". It doesn't work.
Change the if line to: if [ "$filename" != even ] && [ "$filename" != odd ] In the bash shell, the entire script, ( i.e. from for to done inclusive), can be simplified to: GLOBIGNORE=even:odd ; printf "%s\n" * Another bash method: shopt -s extglob # only needs to be done once, if not set already.printf "%s\n" !(odd|even)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/251369/" ] }
426,585
I've been using reportbug in novice mode on Debian 9 and needed to cancel the report because no editor was installed (on a Docker image). The last interaction was Submit this report on postgresql (e to edit) [y|n|a|c|E|i|l|m|p|q|d|t|s|?]? nSaving a backup of the report at /tmp/reportbug-postgresql-backup-20180226-11446-cwjfs5euBug report written as /tmp/reportbug-postgresql-20180226-11446-mrfjtcvz Now, I don't seem to find a way to open the draft again based on the output of reportbug --help ( draftpath seems to be for storage of new drafts only): Usage: reportbug [options] Options: --version show program's version number and exit -h, --help show this help message and exit -c, --no-config-files do not include conffiles in report -C CLASS, --class=CLASS specify report class for GNATS BTSes -d, --debug send report only to yourself --test operate in test mode (maintainer use only) -e EDITOR, --editor=EDITOR specify an editor for your report -f SEARCHFOR, --filename=SEARCHFOR report the bug against the package containing the specified file --from-buildd=BUILDD_FORMAT parse information from buildd format: $source_$version --path only search the path with -f -g, --gnupg, --gpg sign report with GNU Privacy Guard (GnuPG/gpg) -G, --gnus send the report using Gnus --pgp sign report with Pretty Good Privacy (PGP) -K KEYID, --keyid=KEYID key ID to use for PGP/GnuPG signatures -H HEADERS, --header=HEADERS add a custom RFC2822 header to your report -P PSEUDOS, --pseudo-header=PSEUDOS add a custom pseudo-header to your report --license show copyright and license information -m, --maintonly send the report to the maintainer only -M, --mutt send the report using mutt --mirror=MIRRORS add a BTS mirror -n, --mh, --nmh send the report using mh/nmh -N, --bugnumber specify a bug number to look for --mua=MUA send the report using the specified mail user agent --mta=MTA send the report using the specified mail transport agent --list-cc=LISTCC send a copy to the specified address -p, --print output the report to standard output only --report-quiet file report without any mail to the maintainer or tracking lists -q, --quiet reduce the verbosity of the output -s SUBJECT, --subject=SUBJECT the subject for your report -x, --no-cc do not send a copy of the report to yourself -z, --no-compress do not strip blank lines and comments from config files -o OUTFILE, --output=OUTFILE output the report to the specified file (both mail headers and body) -O, --offline disable all external queries -i INCLUDE, --include=INCLUDE include the specified file in the report -A ATTACHMENTS, --attach=ATTACHMENTS attach the specified file to the report -b, --no-query-bts do not query the BTS for reports --query-bts query the BTS for reports -T TAGS, --tag=TAGS add the specified tag to the report --http_proxy=HTTP_PROXY, --proxy=HTTP_PROXY use this proxy for HTTP accesses --email=EMAIL specify originating email address --realname=REALNAME specify real name for your report --smtphost=SMTPHOST specify SMTP server for mailing --tls use TLS to talk to SMTP servers --source, --src report the bug against the source package --smtpuser=SMTPUSER username to use for SMTP --smtppasswd=SMTPPASSWD password to use for SMTP --replyto=REPLYTO, --reply-to=REPLYTO specify Reply-To address for your report --query-source query on source packages, not binary packages --no-query-source query on binary packages only --security-team send the report only to the security team, if tag=security --no-security-team do not send the report only to the security team, if 
tag=security --debconf include debconf settings in your report --no-debconf exclude debconf settings from your report -j JUSTIFICATION, --justification=JUSTIFICATION include justification for the severity of your report -V PKGVERSION, --package-version=PKGVERSION specify the version number for the package -u INTERFACE, --interface=INTERFACE, --ui=INTERFACE choose which user interface to use -Q, --query-only only query the BTS -t TYPE, --type=TYPE choose the type of report to file -B BTS, --bts=BTS choose BTS to file the report with -S SEVERITY, --severity=SEVERITY identify the severity of the report --template output a template report only --configure reconfigure reportbug for this user --check-available check for new releases on various sites --no-check-available do not check for new releases --mode=MODE choose the operating mode for reportbug -v, --verify verify integrity of installed package using debsums --no-verify do not verify package installation -k, --kudos send appreciative email to the maintainer, rather than filing a bug report --body=BODY specify the body for the report as a string --body-file=BODYFILE, --bodyfile=BODYFILE use the specified file as the body of the report -I, --no-check-installed don't check whether the package is installed --check-installed check whether the specified package is installed when filing a report (default) --exit-prompt prompt before exiting --paranoid show contents of message before sending --no-paranoid don't show contents of message before sending (default) --no-bug-script don't execute the bug script (if present) --draftpath=DRAFTPATH Save the draft in this directory --timeout=TIMEOUT Specify the network timeout, in seconds [default: 60] --no-cc-menu don't show additional CC menu --no-tags-menu don't show tags menu --mbox-reader-cmd=MBOX_READER_CMD Specify the program to open the reports mbox. --max-attachment-size=MAX_ATTACHMENT_SIZE Specify the maximum size in byte for an attachment [default: 10485760]. --latest-first Order bugs to show the latest first --envelope-from=ENVELOPEFROM Specify the Envelope From (Return-path) address used to send the bug report Specifying the two files in /tmp as filename fails due to No packages match.No package specified or we were unable to find it in the apt cache; stopping. which might be wrong or right depending on what this unexplained argument expects as input. I'm aware that it's way more easy to create a new report. I'm asking this for reference. I'm pretty sure I reported this once, but unfortunately was too honest about the integration test coverage and documentation review of reportbug (such problems simply shouldn't happen if you want to improve a FLOSS project), so the maintainer closed all of my otherwise constructive reports. I'm sure there's a lesson to be learned from this, but I'm still not certain which one...
Unfortunately there is no way to open a draft bug report in reportbug . This has been reported several times, and one of the bug reports gives a solution (assuming your system is configured in such a way that sendmail works): edit the draft in your favourite text editor, then send it using sendmail -t < bugdraft That’s not much help on many systems nowadays... Some mail clients can import a message, that’s another possible approach.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63502/" ] }
426,604
I want to run a Fortran program on a server. I am able to log into that server using the command: ssh -X [email protected]. address I used the mkdir directoryname command to create a directory. Then I compile the Fortran source code using gfortran code.f90 -o code1 and run the resulting program with the command: ./code1 How do I know whether the process has started, is still running, or has finished? Please also tell me what the commands top , bg and kill PID mean.
If your shell prompt doesn't reappear after running ./code1 , then your program is running. When your shell prompt comes back, your program has exited. top is like the Task Manager on Windows or the Activity Monitor on macOS. It's a program that lets a user view and manipulate processes. If you want to start your program in the background so that you have access to your shell while it's executing, run the program as ./code1 & . Or, press Ctrl+z while the program is running to pause it and enter bg to resume it in the background. You can kill (terminate) a program if you know its PID. The kill command actually sends signals to programs, so you can do other things with it besides telling programs to exit. You can view a list of the processes currently running under your user account with ps -u $USER .
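Putting it all together, a minimal session sketch (the PID 12345 here is hypothetical — use the one your shell or ps reports):
$ ./code1 &          # run in the background; the shell prints the job number and PID
[1] 12345
$ ps -u $USER        # confirm code1 is listed, i.e. still running
$ kill 12345         # send SIGTERM to politely ask the process to exit
$ kill -9 12345      # last resort: SIGKILL, if it ignores SIGTERM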
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261384/" ] }
426,616
In FreeBSD, man mount_nullfs states that: The primary differences between a virtual copy of the file system and a symbolic link are that the getcwd(3) functions work correctly in the virtual copy, and that other file systems may be mounted on the virtual copy without affecting the original. A different device number for the virtual copy is returned by stat(2) , but in other respects it is indistinguishable from the original. What is the full meaning/implication of this paragraph?
The primary differences between a virtual copy of the file system and a symbolic link are that the getcwd(3) functions work correctly in the virtual copy, getcwd ’s behaviour with symlinked directories is a fairly well-known gotcha, documented in Advanced Unix Programming for example (see this SO question for a quote): chdir and getcwd aren’t symmetric when symlinks are involved. One might expect that changing directories, using chdir , to a given directory, and then retrieving the current directory, using getcwd , would return the same value; but that’s not the case when a process changes directory using a path containing a symbolic link — getcwd returns the path obtained after de-referencing all symbolic link(s). This can have unexpected consequences when changing directories to a parent directory, when the path containing symbolic link(s) and the de-referenced path have different numbers of components. and that other file systems may be mounted on the virtual copy without affecting the original. Continuing Stéphane’s example , you can mount another file system on a sub-directory of /tmp/b without affecting /some/dir , whereas mounting a file system on a sub-directory of /tmp/a will make it show up under /some/dir too. A different device number for the virtual copy is returned by stat(2) , but in other respects it is indistinguishable from the original. This means that running stat on the copy, or any file thereunder, will return a different device number compared to the original, but that’s the only difference; apart from that, stat("/tmp/b/c", &buf) and stat("/some/dir/c", &buf) would return the same information.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120614/" ] }
426,652
I have an instance of qemu running on Windows 7, running without an open terminal. Now I want to shut down the machine with the name MyMachineName or add a USB device to it. I need a scriptable solution. Libvirt is not a solution, because it has other disadvantages for my system. I'm looking for a magic line like: qemu-monitor -connect=MyMachineName command="shutdown" How can I do it?
Someone might be able to chime in with a proper command for operating on TTYs, but I'll post a solution in the meantime involving the network. There are a couple options for redirecting the QEMU monitor. One way is to have QEMU offer access to its monitor via telnet:
$ qemu-system-i386 -monitor telnet:127.0.0.1:55555,server,nowait;
Then, QEMU can be scripted by piping commands to telnet . This is fine as long as the output of commands can be discarded, since the telnet session will probably close too quickly for visual feedback:
$ echo system_powerdown |telnet 127.0.0.1 55555
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Connection closed by foreign host.
$ _ # qemu sends the guest an ACPI shutdown signal
If the output of the commands executed on the monitor needs to be collected, a TCP session can be used instead:
$ qemu-system-i386 -monitor tcp:127.0.0.1:55555,server,nowait;
Then, commands can be sent to the listening monitor via netcat or a similar utility:
$ echo info\ kvm |nc -N 127.0.0.1 55555
QEMU 2.11.0 monitor - type 'help' for more information
(qemu) info kvm
kvm support: enabled
(qemu)
$ echo system_powerdown |nc -N 127.0.0.1 55555
QEMU 2.11.0 monitor - type 'help' for more information
(qemu) system_powerdown
(qemu)    # hit return
$ _ # qemu sends the guest an ACPI shutdown signal
Here is a link to partial documentation of QEMU monitor commands: https://en.wikibooks.org/wiki/QEMU/Monitor
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/263629/" ] }
426,683
I would like to do something like this:
> grep pattern file.txt | size -h
16.4 MB
or something equivalent to:
> grep pattern file.txt > grepped.txt
> ls -h grepped.txt
16.4 MB
> rm grepped.txt
(that would be a bit inconvenient, though) Is that possible?
You can use wc for this: grep pattern file.txt | wc -c will count the number of bytes in the output. You can post-process that to convert large values to “human-readable” format . You can also use pv to get this information inside a pipe: grep pattern file.txt | pv -b > output.txt (this displays the number of bytes processed, in human-readable format).
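To combine the two in one pipe, a small sketch assuming GNU coreutils (which provides numfmt ):
grep pattern file.txt | wc -c | numfmt --to=iec
This pipes the raw byte count through numfmt , which prints a human-readable figure such as 16M instead of the plain number.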
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/426683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60673/" ] }
426,693
On Alpine Linux, I'd like to know how to extract just the IP address from a DNS / dig query. The query I'm running looks like this:
lab-1:/var/# dig +answer smtp.mydomain.net +short
smtp.ggs.mydomain.net
10.11.11.11
I'd like to be able to get just the IP address returned. I'm currently playing around with the bash pipe and the awk command. But so far, nothing I've tried is working. Thanks.
I believe dig +short outputs two lines for you because the domainyou query, smtp.mydomain.net is a CNAME for smtp.ggs.mydomain.net ,and dig prints the intermediate resolution step. You can probably rely on the last line from dig's output being the IPyou want, though, and therefore the following should do: dig +short smtp.mydomain.net | tail -n1
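If you want to be defensive about it (for example, if the CNAME chain ever grows), you could instead keep only the lines that look like an IPv4 address — a sketch:
dig +short smtp.mydomain.net | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
This filters the dig output down to dotted-quad lines regardless of how many intermediate names are printed.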
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/426693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52684/" ] }
426,748
I have about 15,000 files that are named file_1.pdb , file_2.pdb , etc. I can cat about a few thousand of these in order by doing: cat file_{1..2000}.pdb >> file_all.pdb However, if I do this for 15,000 files, I get the error -bash: /bin/cat: Argument list too long I have seen this problem being solved by doing find . -name xx -exec xx but this wouldn't preserve the order with which the files are joined. How can I achieve this?
Using find , sort and xargs : find . -maxdepth 1 -type f -name 'file_*.pdb' -print0 |sort -zV |xargs -0 cat >all.pdb The find command finds all relevant files, then prints their pathnames out to sort that does a "version sort" to get them in the right order (if the numbers in the filenames had been zero-filled to a fixed width we would not have needed -V ). xargs takes this list of sorted pathnames and runs cat on these in as large batches as possible. This should work even if the filenames contains strange characters such as newlines and spaces. We use -print0 with find to give sort nul-terminated names to sort, and sort handles these using -z . xargs too reads nul-terminated names with its -0 flag. Note that I'm writing the result to a file whose name does not match the pattern file_*.pdb . The above solution uses some non-standard flags for some utilities. These are supported by the GNU implementation of these utilities and at least by the OpenBSD and the macOS implementation. The non-standard flags used are -maxdepth 1 , to make find only enter the top-most directory but no subdirectories. POSIXly, use find . ! -name . -prune ... -print0 , to make find output nul-terminated pathnames (this was considered by POSIX but rejected). One could use -exec printf '%s\0' {} + instead. -z , to make sort take nul-terminated records. There is no POSIX equivalence. -V , to make sort sort e.g. 200 after 3 . There is no POSIX equivalence, but could be replaced by a numeric sort on specific parts of the filename if the filenames have a fixed prefix. -0 , to make xargs read nul-terminated records. There is no POSIX equivalence. POSIXly, one would need to quote the file names in a format recognised by xargs . If the pathnames are well behaved, and if the directory structure is flat (no subdirectories), then one could make do without these flags, except for -V with sort .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/426748", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90334/" ] }
426,831
% x=abracadabra
% echo ${x//a/o}
obrocodobro
Hmph... Is there a way to replace the last occurrence of a pattern using shell substitutions (IOW, without rolling out sed , awk , perl , etc.)?
Note: this replaces a trailing occurrence - not quite the same as "the last occurrence" From the Bash reference manual Section 3.5.3 Shell Parameter Expansion : ${parameter/pattern/string} The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. If pattern begins with ‘/’, all matches of pattern are replaced with string. Normally only the first match is replaced. If pattern begins with ‘#’, it must match at the beginning of the expanded value of parameter. If pattern begins with ‘%’, it must match at the end of the expanded value of parameter. If string is null, matches of pattern are deleted and the / following pattern may be omitted. If the nocasematch shell option (see the description of shopt in The Shopt Builtin) is enabled, the match is performed without regard to the case of alphabetic characters. If parameter is ‘@’ or ‘*’, the substitution operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with ‘@’ or ‘*’, the substitution operation is applied to each member of the array in turn, and the expansion is the resultant list. So
$ x=abracadabra
$ echo "${x/%a/o}"
abracadabro
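If you really need the last occurrence anywhere in the string (not just a trailing match), you can combine greedy prefix/suffix stripping. A sketch for a single-character pattern a replaced by o :
$ x=abracad
$ echo "${x%a*}o${x##*a}"
abracod
${x%a*} keeps everything before the last a , and ${x##*a} keeps everything after it.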
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
426,837
I tried to use sha256sum in High Sierra; I attempted to install it with MacPorts , as: sudo port install sha256sum It did not work. What to do?
After investigating a little, I found a ticket in an unrelated software in GitHub sha256sum command is missing in MacOSX , with several solutions: installing coreutils sudo port install coreutils It installs sha256sum at /opt/local/libexec/gnubin/sha256sum As another possible solution, using openssl : function sha256sum() { openssl sha256 "$@" | awk '{print $2}'; } As yet another one, using the shasum command native to MacOS: function sha256sum() { shasum -a 256 "$@" ; } && export -f sha256sum
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/426837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
426,849
I need to copy huge files in my Linux machine. Example: cp source.txt target.txt I want to create bar progress that will show that copy still in progress on each copy file Examples" cp file file1 copy file > file1 ......... cp moon mars copy moon > mars .......
In short, you won't find cp native functionality for progress bar output. Why? Many reasons . However, you have some options: Use a different tool. rsync , as mentioned by @user1404316, has --progress :
rsync -P largeFile copyLocation
If you don't need the extra semantics that cp and rsync take care of, create a new file with pv ("Pipe Viewer") by redirecting stdout :
pv < largeFile > copyLocation
If you do need the extra semantics, you can use progress , though it doesn't give the bar specifically. It attaches to already running processes, so you would invoke it like:
# In one shell
$ cp largeFile copyLocation
# In another shell
$ progress -m
[ 4714] cp /home/hunteke/largeFile 1.1% (114 MiB / 10.2 GiB)
# -m tells progress to continually update
Another option is gcp , which does exactly what you've requested with a progress bar:
gcp largeFile copyLocation
Another option abuses curl 's ability to handle file:// urls:
curl -o copyLocation file:///path/to/largeFile
Finally, you can write a shell script yourself — a rough sketch follows below.
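Here is one such sketch — a copy loop that prints a percentage as it goes. It assumes a non-empty source file, GNU stat (for -c %s ) and a POSIX dd ; it is meant as an illustration, not a polished tool:
#!/bin/sh
# usage: ./pcp source dest  -- copy while printing percent done
src=$1 dst=$2
total=$(stat -c %s "$src")        # size of the source in bytes
bs=$((4 * 1024 * 1024))           # copy in 4 MiB chunks
blocks=$(( (total + bs - 1) / bs ))
i=0
: > "$dst"                        # start from an empty destination
while [ "$i" -lt "$blocks" ]; do
    # copy chunk $i, keeping the already-written data intact
    dd if="$src" of="$dst" bs="$bs" skip="$i" seek="$i" count=1 conv=notrunc 2>/dev/null
    i=$((i + 1))
    done_bytes=$((i * bs)); [ "$done_bytes" -gt "$total" ] && done_bytes=$total
    printf '\r%3d%%' $((done_bytes * 100 / total))
done
printf '\n'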
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
426,862
I am writing a shell script that I would like to run as a daemon on startup without using external tools like daemontools or daemonize . Linux Daemon Writing HOWTO According to the Linux Daemon Writing HOWTO , a proper daemon has the following characteristics: forks from the parent process closes all file descriptors (i.e., stdin , stdout , stderr ) opens logs for writing (if configured) changes the working directory to one that is persistent (usually / ) resets the file mode mask (umask) creates an unique Session ID (SID) daemonize Introduction The daemonize Introduction goes further, stating that a typical daemon also: disassociates from its control terminal (if there is one) and ignores all terminal signals disassociates from its process group handles SIGCLD How would I do all this in a sh , dash , or bash script with common Linux tools only? The script should be able to run on as many distros as possible without additional software, although Debian is our primary focus. NOTE: I know there are plenty of answers on the StackExchange network recommending the use of nohup or setsid , but neither of these methods tackles all of the requirements above. EDIT: The daemon(7) manpage also gives some pointers, although there seem to be some differences between older-style SysV daemons and newer systemd ones. Since compatibility with a variety of distros is important, please ensure the answer makes clear any differences.
Using systemd you should be able to run a script as a daemon by creating a simple unit. There are a lot of different options you can add but this is about as simple as you can get. Say you have a script /usr/bin/mydaemon .
#!/bin/sh
while true; do
    date
    sleep 60
done
Don't forget to sudo chmod +x /usr/bin/mydaemon . You create a unit /etc/systemd/system/mydaemon.service .
[Unit]
Description=My daemon
[Service]
ExecStart=/usr/bin/mydaemon
Restart=on-failure
[Install]
WantedBy=multi-user.target
To start the daemon you run systemctl start mydaemon.service To start at boot you enable it systemctl enable mydaemon.service If on a systemd based system, which a majority of Linux distributions are today, this isn't really an external tool. The negative would be that it won't work everywhere though.
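One nice side effect of this approach: anything the script writes to stdout/stderr lands in the journal, so you can follow it live with
journalctl -u mydaemon.service -f
and check its state at any time with systemctl status mydaemon.service .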
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87443/" ] }
426,883
How to capture the first IP address that comes from ifconfig command? ifconfig -aenw178032: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 100.14.22.12 netmask 255.255.0.0 broadcast 100.14.255.255 inet6 fe80::250:56ff:fe9c:158a prefixlen 64 scopeid 0x20<link> ether 00:10:56:9c:65:8a txqueuelen 1000 (Ethernet) RX packets 26846250 bytes 12068811576 (11.2 GiB) RX errors 0 dropped 58671 overruns 0 frame 0 TX packets 3368855 bytes 1139160934 (1.0 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 Expected result: IP=100.14.22.12
It is better to avoid using ifconfig for getting an IP address in a script, as it is deprecated in some distributions (e.g. CentOS and others do not install it by default anymore). In other systems, the output of ifconfig varies according to the release of the distribution (e.g. the output/spacing/fields of ifconfig differ from Debian 8 to Debian 9, for instance). For getting the IP address with ip , in a similar way to what you are asking:
ip addr | awk ' !/127.0.0.1/ && /inet/ { gsub(/\/.*/, "", $2); print "IP="$2 } '
Or better yet:
$ ip -o -4 address show | awk ' NR==2 { gsub(/\/.*/, "", $4); print $4 } '
192.168.1.249
Or, as you ask, with "IP=":
#!/bin/bash
echo -n "IP="
ip -o -4 address show | awk ' NR==2 { gsub(/\/.*/, "", $4); print $4 } '
Adapting shamelessly the idea from @Roman:
$ ip -o -4 address show | awk ' NR==2 { gsub(/\/.*/, "", $4); print "IP="$4 } '
IP=192.168.1.249
Normal output:
$ ip -o -4 address show
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 192.168.1.249/24 brd 192.168.1.255 scope global eth0\       valid_lft forever preferred_lft forever
From man ip : -o, -oneline output each record on a single line, replacing line feeds with the '\' character. This is convenient when you want to count records with wc(1) or to grep(1) the output. See one example of why ifconfig is not advised: BBB: `bbb-conf --check` showing IP addresses as `inet` - ifconfig woes For understanding why ifconfig is on the way out, see Difference between 'ifconfig' and 'ip' commands ifconfig is from net-tools, which hasn't been able to fully keep up with the Linux network stack for a long time. It also still uses ioctl for network configuration, which is an ugly and less powerful way of interacting with the kernel. Around 2005 a new mechanism for controlling the network stack was introduced - netlink sockets. To configure the network interface iproute2 makes use of that full-duplex netlink socket mechanism, while ifconfig relies on an ioctl system call.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
426,928
As is, the code below is invalid, because the brackets can not be used like that. If we remove them, it runs fine, and outputs:
true
true
code:
#!/usr/bin/fish
if ( false ; and true ) ; or true
    echo "true"
else
    echo "false"
end
if false ; and ( true ; or true )
    echo "true"
else
    echo "false"
end
How to get the functionality indicated by the brackets? desired output:
true
false
You can use begin and end for conditionals as well: From fish tutorial : For even more complex conditions, use begin and end to group parts of them. For a simpler example, you can take a look at this answer from stackoverflow. For your code, you just have to replace the ( with begin ; and the ) with ; end .
#!/usr/bin/fish
if begin ; false ; and true ; end ; or true
    echo "true"
else
    echo "false"
end
if false; and begin ; true ; or true ; end
    echo "true"
else
    echo "false"
end
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189656/" ] }
426,944
I am running a userspace driver for a /dev/uinput device in a Wayland desktop session. The instructions suggest running xinput list to confirm that the device is detected. Of course, xinput is an X.org application. What is the equivalent command for Wayland? (A GNOME GUI equivalent is acceptable.)
On Debian the command is: $ sudo libinput list-devices# requires the libinput-tools package On arch-linux : # libinput list-devices To list just the device names, no details, use grep: $ sudo libinput list-devices | grep Device
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269/" ] }
426,950
Someone ask me in other site about this question, i.e. a file named "abc.dat" has 0 file size but 8 blocks, and this is the output I ask him to give me (Some text has been translated from Chinese to English): $ cp abc.dat abc2.dat; ls -ls abc2.dat #try to copy, it still 8 blocks but 0 byte8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Feb 27 19:39 abc2.dat 8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Sep 18 19:11 abc.dat #sorry, this may be the extra wrong output he added $ stat abc.dat File: 'abc.dat' Size: 0 Blocks: 16 IO Block: 4096 regular empty fileDevice: 32h/50d Inode: 3715853 Links: 1Access: (0664/-rw-rw-r--) Uid:( 1000/rokeabbey) Gid:( 1000/rokeabbey)Access: 2018-02-26 21:13:57.640639992 +0800Modify: 2017-09-18 19:11:42.221533011 +0800Change: 2017-09-18 19:11:42.221533011 +0800 Birth: -$ touch abc3.dat ; ls -sl | grep abc #try to create new empty file, it still 8 blocks by default8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Feb 27 19:39 abc2.dat8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Feb 27 19:40 abc3.dat8 -rw-rw-r-- 1 rokeabbey rokeabbey 0 Sep 18 19:11 abc.dat I've learned a bit about sparse file, file metadata, symlink cases, but none of that cases will causes 0 byte file size with 8 blocks. Is there any filesystems setup such as minimum block size for ANY file ? He told me that his systems is Ubuntu 16.04 and ext4. [UPDATE] $ df -Th /home/rokeabbey/home/rokeabbey/.Private ecryptfs 138G 39G 92G 30% /home/rokeabbey [UPDATE] I can reproduced with ecryptfs xb@dnxb:/tmp/test$ sudo mkdir /opt/dataxb@dnxb:/tmp/test$ sudo apt-get install ecryptfs-utils...xb@dnxb:/tmp/test$ sudo mount -t ecryptfs /opt/data /opt/dataPassphrase: ...Selection [aes]: 1...Selection [16]: 1Enable plaintext passthrough (y/n) [n]: yEnable filename encryption (y/n) [n]: y...Would you like to proceed with the mount (yes/no)? : yes...in order to avoid this warning in the future (yes/no)? : no Not adding sig to user sig cache file; continuing with mount.Mounted eCryptfsxb@dnxb:/tmp/test$ l /opt/datatotal 8.0K52953089 drwxr-xr-x 9 root root ? 4.0K Feb 27 23:16 ../56369402 drwxr-xr-x 2 root root ? 4.0K Feb 27 23:16 ./xb@dnxb:/tmp/test$ sudo touch /opt/data/testingxb@dnxb:/tmp/test$ less /opt/data/testing xb@dnxb:/tmp/test$ sudo umount /opt/dataxb@dnxb:/tmp/test$ ls -ls /opt/datatotal 88 -rw-r--r-- 1 root root 8192 Feb 27 23:42 ECRYPTFS_FNEK_ENCRYPTED.FWbECDhE0C37e-Skw2B2pnQpP9gB.b3yDfkVU5wk7WhvMreg8yVnuEaMME--xb@dnxb:/tmp/test$ less /opt/data/ECRYPTFS_FNEK_ENCRYPTED.FWbECDhE0C37e-Skw2B2pnQpP9gB.b3yDfkVU5wk7WhvMreg8yVnuEaMME-- "/opt/data/ECRYPTFS_FNEK_ENCRYPTED.FWbECDhE0C37e-Skw2B2pnQpP9gB.b3yDfkVU5wk7WhvMreg8yVnuEaMME--" may be a binary file. See it anyway? xb@dnxb:/tmp/test$ sudo mount -t ecryptfs /opt/data /opt/dataPassphrase: Select cipher: ...Selection [aes]: 1 ...Selection [16]: 1Enable plaintext passthrough (y/n) [n]: yEnable filename encryption (y/n) [n]: y...Would you like to proceed with the mount (yes/no)? : yes...in order to avoid this warning in the future (yes/no)? : no Not adding sig to user sig cache file; continuing with mount.Mounted eCryptfsxb@dnxb:/tmp/test$ ls -ls /opt/datatotal 88 -rw-r--r-- 1 root root 0 Feb 27 23:42 testingxb@dnxb:/tmp/test$
This happens if the file system is encrypted; the FS needs to store extra metadata for the file, even if it is empty. As I happen to have a machine handy with a vanilla ecryptfs mount (Ubuntu 12.04-LTS), I can confirm that an empty file will get 8 blocks:
$ touch test
$ ls -ls test
8 -rw-rw-r-- 1 admin admin 0 feb 27 16:45 test
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64403/" ] }
426,963
I want to create a local mail account suitable for apt-listchanges . In other words, local services will send mail to local@localhost (?) and I should be able to check that mailbox using a regular mail client (Thunderbird, Geany...) This would preferably be a "system" account rather than a "user" account, but if userland apps can't access that, a "user" account will do.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/426963", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269/" ] }
426,976
I am working with a csv file which contains data in the following structure: "12345","BLAH","DEDA","0.000","1.111","2.22222","3.3333333,"15/12/2017 4:26:00 PM" I want to convert the 12 hour time into 24 hour time.The following shows what I am trying to achieve in the end: "12345","BLAH","DEDA","0.000","1.111","2.22222","3.3333333,"15/12/2017 16:26:00" I found the following answer to a question which seems to solve the conversion of the time segment of my problem. https://stackoverflow.com/questions/8083973/bash-and-awk-converting-a-field-from-12-hour-to-24-hour-clock-time#8084087 So with the above, I believe I must do the following process (There is probably a more efficient method): Temporarily separate the date and time into there own records "12345","BLAH","DEDA","0.000","1.111","2.22222","3.3333333,"15/12/2017","4:26:00 PM" Target the time record and convert it into my desired 24 hour format Concatenate the date and time records back into a single record I am trying to achieve this using awk and am stuck on the first section! Is awk to right tool for this job, or would you recommend a different tool? I'm starting with step 1. I can't even successfully target the date! awk 'BEGIN {FS=","} { gsub(/[0-9]\{2\}\/[0-9]\{2\}\/[0-9]\{4\}/, "TESTING"); print }' myfile.csv
I'd use perl here:
perl -pe 's{\b(\d{1,2})(:\d\d:\d\d) ([AP])M\b}{ $1 + 12 * (($3 eq "P") - ($1 == 12)) . $2}ge'
That is, add 12 to the hour part if PM (except for 12PM) and change 12AM to 0. With awk , not doing the word-boundary part (so it could give false positives on 123:21:99 AMERICA for instance) and assuming there's only one occurrence per line:
awk '
match($0, /[0-9]{1,2}:[0-9]{2}:[0-9]{2} [AP]M/) {
    split(substr($0, RSTART, RLENGTH), parts, /[: ]/)
    if (parts[4] == "PM" && parts[1] != 12) parts[1] += 12
    if (parts[4] == "AM" && parts[1] == 12) parts[1] = 0
    $0 = substr($0, 1, RSTART - 1) \
         parts[1] ":" parts[2] ":" parts[3] \
         substr($0, RSTART + RLENGTH)
}
{print}'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/426976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231209/" ] }
427,115
Say I have a pid in hand, mypid=$$ is there some bash/system command I can use to listen for the exit of that process with the given pid? If no process with mypid exists, I guess the command should simply fail.
I got what I needed from this answer: https://stackoverflow.com/a/41613532/1223975 It turns out using wait <pid> will only work if that pid is a child process of the current process . However the following will work for any process. To wait for any process to finish — Linux:
tail --pid=$pid -f /dev/null
Darwin (requires that $pid has open files):
lsof -p $pid +r 1 &>/dev/null
With a timeout (seconds) — Linux:
timeout $timeout tail --pid=$pid -f /dev/null
Darwin (requires that $pid has open files):
lsof -p $pid +r 1m%s -t | grep -qm1 $(date -v+${timeout}S +%s 2>/dev/null || echo INF)
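A portable (if busier) alternative is to poll with kill -0 , which sends no signal but reports whether the PID still exists — a sketch:
while kill -0 "$pid" 2>/dev/null; do
    sleep 1
done
Note the usual caveat with any PID-based approach: the PID can be recycled by an unrelated process after the original one exits.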
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/427115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
427,234
I am missing my bash aliases in fish, and don't want to manually convert all of them to fish functions.How to get access to them all from within fish? Bonus points if: the solution supports an iterative process, as in: i can easily change the aliases in bash, and re-convert/re-import them into fish the solution also imports bash functions
I stumbled upon this post; the first script is quite nice, but I really like using one file for all my aliases. Another simpler and neater way to do it would be: move all your aliases to ~/.bash_aliases , then just add this line in ~/.config/fish/config.fish :
source ~/.bash_aliases
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189656/" ] }
427,291
I want to find the right hand side of an expression in a file and replace its value with something else using sed . With grep , we see
$ grep power TheFile
power = 1
Also with cut , I can access the value
$ grep power TheFile | cut -d = -f 2
 1
However, I don't know how to pipe that with the sed command. Any idea to accomplish that?
How about: sed '/^power /s/=.*$/= your-replacement/' TheFile /^power / is an address specifier, in this case looking for a line that matches a regex ^power . s is a replacement command, matching a regex being everything after the = . Note that there is no need to pipe the contents of the file; you can just specify it (or a list of files) as a command argument. If you want / need to pipe something into sed , that's easy - just | sed .. . If the whitespace immediately after the initial word (eg. power ) might be a tab, use [[:blank:]] instead. Some versions of sed allow you to use \t for a definite tab character.
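If the replacement value lives in a shell variable, the same idea works with double quotes so the shell can expand it — a sketch, assuming $newvalue contains no characters special to sed (such as / or & ):
newvalue=42
sed "/^power /s/=.*$/= $newvalue/" TheFile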
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/427291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118395/" ] }
427,345
Working on a project that uses a little keyboard and an E-ink display which will run on a raspberry pi Zero. I have tried a couple of keyboard packages for python (pynput, pyxhook) and have written/tested simple keystroke loggers that work fine on my desktop (ubuntu) However I try to run them on the pi with no monitor both libraries die DisplayConnectionError: Can't connect to display ":0": [Errno 111] Connection refused I know at least pyxhook has the ability to capture mouse movement so that makes sense why it would "Need" access to the monitor. But all I want is a way to capture the keyboard input in a process running in the background but with no monitor attached The libraries can also return the current window that has focus as part of the key event, and that may be the other reason the monitor is tied in so deep.I tried $export DISPLAY=":0" did not help. here is simple code for pynput, works with monitor but not when running headless(running it from SSH) #!/usr/bin/env pythonfrom pynput import keyboarddef on_press(key): print('Key {} pressed.'.format(key)) if str(key) == 'Key.esc': print('Exiting...') return Falsewith keyboard.Listener(on_press = on_press) as listener: listener.join() Is there any way to get these to work, or possibly a different way of a approaching this. full stack trace of above program failing Traceback (most recent call last): File "./keylog.py", line 3, in <module> from pynput import keyboard File "/usr/local/lib/python2.7/dist-packages/pynput/__init__.py", line 23, in <module> from . import keyboard File "/usr/local/lib/python2.7/dist-packages/pynput/keyboard/__init__.py", line 49, in <module> from ._xorg import KeyCode, Key, Controller, Listener File "/usr/local/lib/python2.7/dist-packages/pynput/keyboard/_xorg.py", line 38, in <module> from pynput._util.xorg import ( File "/usr/local/lib/python2.7/dist-packages/pynput/_util/xorg.py", line 38, in <module> _check() File "/usr/local/lib/python2.7/dist-packages/pynput/_util/xorg.py", line 36, in _check display = Xlib.display.Display() File "/usr/local/lib/python2.7/dist-packages/Xlib/display.py", line 89, in __init__ self.display = _BaseDisplay(display) File "/usr/local/lib/python2.7/dist-packages/Xlib/display.py", line 71, in __init__ protocol_display.Display.__init__(self, *args, **keys) File "/usr/local/lib/python2.7/dist-packages/Xlib/protocol/display.py", line 90, in __init__ self.socket = connect.get_socket(name, protocol, host, displayno) File "/usr/local/lib/python2.7/dist-packages/Xlib/support/connect.py", line 87, in get_socket return mod.get_socket(dname, protocol, host, dno) File "/usr/local/lib/python2.7/dist-packages/Xlib/support/unix_connect.py", line 113, in get_socket raise error.DisplayConnectionError(dname, str(val))Xlib.error.DisplayConnectionError: Can't connect to display ":0": [Errno 111] Connection refused
OK, I figured it out, so I figured I would post the answer: Python's keyboard module, docs and source here . As stated in the "Known Limitations" section (even though I don't think this is a limitation!): "To avoid depending on X, the Linux parts reads raw device files (/dev/input/input*) but this requires root." So this does bring up security issues obviously, since the program now needs root privileges, but for my case this is not an issue.
pip install keyboard
simple program:
import keyboard
import time

def key_press(key):
    print(key.name)

keyboard.on_press(key_press)

while True:
    time.sleep(1)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273974/" ] }
427,348
I would like to print the contents of a web request, similar as "cat" command does for local files. I tried lynx but it does not simply prints to unix shell.
For simply dumping the body of an HTTP response to standard output, curl does for URLs what cat does for local files:
curl https://example.com/page.html
or, with wget (quiet mode, output to stdout):
wget -qO- https://example.com/page.html
Both print the raw response body to the terminal, so you can pipe it on to grep , less , etc. (The example.com URL above is just a placeholder.) If you want a text-mode rendering of the HTML rather than the raw source, lynx can do that too, with lynx -dump URL .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278329/" ] }
427,360
I know how to send input to a program like this: echo toto | ./my_prog And with process substitution + redirection: r < <(echo toto) But how do I do this if I want to provide a second or a third input? For example, I have a program that asks for my username first and after that asks for other information, like a phone number, in separate prompts.
Use { and } to collect the output of multiple programs. For instance, { echo one; echo two; } |program . Leave a space after { and before } and ensure there is a semicolon after the last command within the braces.
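A compact alternative is printf , which here prints one line per argument — handy when each prompt expects its own line (the values below are placeholders):
printf '%s\n' myuser 555-0100 | ./my_prog
Or a here-document, which keeps the answers readable in a script:
./my_prog <<EOF
myuser
555-0100
EOF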
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/427360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139943/" ] }
427,372
I'm using Mac, and I'm trying to time the command execution. If I do time echo it doesn't have any output But If I do time ls it does give me the output of time function Any idea why that happens? Update: turns out it's cuz I'm using zsh, with oh-my-zsh installed. It works well in bash, but no output in zsh. Any idea why?
In zsh, the time keyword has no effect on builtins (or other similar shell-internal constructs). From this mailing list post : Additional note: The time builtin applied to any construct that is executed in the current shell, is silently ignored. So although it's syntactically OK to put an opening curly or a repeat-loop or the like immediately after the time keyword, you'll get no timing statistics. You have to use parens instead, to force a subshell, which is then timed.
$ time echo
$ time (echo)
( echo; )  0.00s user 0.00s system 51% cpu 0.001 total
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427372", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23210/" ] }
427,421
I am manipulating data from a text file with the following data structure: "1111","2222","3333","4444","5555","6666","7777","2017/12/15 16:26:00" I am trying to change the '/' in the date to '-'.Here is my awk command: awk -F "," '{gsub("/", "-", $8); print}' my-input.txt It successfully changes the /, but has the unintended consequence of replacing the ',' commas with a ' ' space character: "1111" "2222" "3333" "4444" "5555" "6666" "7777" "2017-12-15 16:26:00" Does anyone know why this is happening?
As pointed out by taliezin and pfnuesel, when defining the input field separator as a ',' it is necessary to also define the output field separator as a ',' to keep it. If the output field separator is omitted and a modification to an existing field has been done, awk will use the default value, in this case a ' ' [space] character. The below is the corrected awk command:
awk -F "," -v OFS="," '{gsub("/", "-", $8); print}' my-input.txt
Which outputs the intended result, which maintains the ',':
"1111","2222","3333","4444","5555","6666","7777","2017-12-15 16:26:00"
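A quick way to see the effect in isolation (a toy example, not your data): assigning to any field makes awk rebuild the record, and at that point it joins the fields with OFS:
$ echo 'a,b,c' | awk -F, '{ $2 = "X"; print }'
a X c
$ echo 'a,b,c' | awk -F, -v OFS=, '{ $2 = "X"; print }'
a,X,c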
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/427421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231209/" ] }
427,464
I have a csv containing the following data structure: 1111,2222,3333,4444,5555,6666,7777,2017-1-5 1:07:09,2017-1-5 1:11:531111,2222,3333,4444,5555,6666,7777,2017-11-25 19:57:17,2017-11-25 19:58:54 I want to display the dates month and day as always being 2 digits long. I also want the times Hour field to always be 2 digits. Essentially adding leading zeros if the month/day/hour fields are only a single digit as in the example line above. Using awk, how would I go about achieving the following result: 1111,2222,3333,4444,5555,6666,7777,2017-01-05 01:07:09,2017-01-05 01:11:531111,2222,3333,4444,5555,6666,7777,2017-11-25 19:57:17,2017-11-25 19:58:54
A great tool for text processing is awk . The following example is using plain standard awk on FreeBSD 11.1. @RomanPerekhrest has an elegant solution in another answer if you prefer GNU awk. Your input is comma-separated. Because of this we invoke awk with the -F, parameter. We can then print out columns using the print statement. $1 is the first column. $2 is the second column.
$ awk -F, '{ print $8 }' inputfile.csv
2017-1-5 1:07:09
2017-11-25 19:57:17
This gives us the 8th column for each row. This is then the date field you want to manipulate. Rather than setting the delimiter using the command-line parameter we can do it as part of the script. FS for the input delimiter and OFS for the output delimiter.
$ awk 'BEGIN { FS = "," } ; { print $8 }' inputfile.csv
2017-1-5 1:07:09
2017-11-25 19:57:17
When working with dates I often prefer to use the date util to make sure I handle them correctly. And I do not need to worry if I am using regular or GNU awk. Furthermore I get a big fat failure if the date does not parse correctly. The interesting parameters are:
-j    Specify we do not want to set the date at all
-f    The format string we use for input
+     The format string we use for output
So if we run this for one date:
$ date -j -f "%Y-%m-%d %H:%M:%S" +"%Y-%m-%d %H:%M:%S" "2017-1-5 1:07:09"
2017-01-05 01:07:09
We can then combine this with awk. Notice how the quotes are escaped . This is probably the biggest stumbling block for a beginner.
$ awk -F, '{ system("date -j -f \"%Y-%m-%d %H:%M:%S\" +\"%Y-%m-%d %H:%M:%S\" \""$8"\"")}' inputfile.csv
2017-01-05 01:07:09
2017-11-25 19:57:17
The system call seems correct - but unfortunately it only allows us to capture the return code and it prints directly to the output. To avoid this we use the cmd | getline pattern. The following simple example will read the current date into mydate:
$ awk 'BEGIN { cmd = "date"; cmd | getline mydate; close(cmd); print mydate }'
Thu Mar  1 16:26:15 CET 2018
We use the BEGIN keyword as we have no input to this simple example. So let us expand this:
awk 'BEGIN { FS=","; OFS=FS };
{
    cmd = "date -j -f \"%Y-%m-%d %H:%M:%S\" +\"%Y-%m-%d %H:%M:%S\" \""$8"\"";
    cmd | getline firstdate;
    close(cmd);
    cmd = "date -j -f \"%Y-%m-%d %H:%M:%S\" +\"%Y-%m-%d %H:%M:%S\" \""$9"\"";
    cmd | getline seconddate;
    close(cmd);
    print $1,$2,$3,$4,$5,$6,$7,firstdate,seconddate
}' inputfile.csv
And we can collapse it to a one-liner:
awk 'BEGIN {FS=",";OFS=FS};{cmd="date -j -f \"%Y-%m-%d %H:%M:%S\" +\"%Y-%m-%d %H:%M:%S\" \""$8"\"";cmd | getline firstdate;close(cmd);cmd="date -j -f \"%Y-%m-%d %H:%M:%S\" +\"%Y-%m-%d %H:%M:%S\" \""$9"\"";cmd | getline seconddate;close(cmd);print $1,$2,$3,$4,$5,$6,$7,firstdate,seconddate}' inputfile.csv
Which gives me the output:
1111,2222,3333,4444,5555,6666,7777,2017-01-05 01:07:09,2017-01-05 01:11:53
1111,2222,3333,4444,5555,6666,7777,2017-11-25 19:57:17,2017-11-25 19:58:54
Addendum As the purpose here is to learn good habits, I had better update this answer. It is a bad habit to repeat code. When you start doing that you should split things into a function. As you will notice, the code below immediately becomes more readable.
awk '
function convertdate(the_date) {
    cmd = "date -j -f \"%Y-%m-%d %H:%M:%S\" +\"%Y-%m-%d %H:%M:%S\" \""the_date"\"";
    cmd | getline formatted_date;
    close(cmd);
    return formatted_date
}
BEGIN { FS=","; OFS=FS };
{ print $1,$2,$3,$4,$5,$6,$7,convertdate($8),convertdate($9) }' inputfile.csv
Make a habit of this and you will notice how much easier it will become to introduce error handling later on.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231209/" ] }
427,482
I have a feed in hdfs. I have to find the rows which have the 3rd column as not null. The feed is separated by the delimiter | SQL Equivalent select * from feed_table where column_3 is not null; Input:
1|abc|123
2|def|
3|ff|124
4|gh|
Output: Here the 3rd column is not null.
1|abc|123
3|ff|124
You can use awk for this task. Set the delimiter in awk to | and then check if the 3rd column is not an empty string.
$ cat /tmp/foo
1|abc|123
2|def|
3|ff|124
4|gh|
$ awk -F'|' '$3 != ""' /tmp/foo
1|abc|123
3|ff|124
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278419/" ] }
427,488
Search engine turns up loads of reports of this error message when installing guest additions to an application. I read about 50 of them, and tried some, but none of them made any difference. Here's what I have: I had a virtual machine created with Slackware current (about a year ago. kernel was 4.3.90) The screen didn't maximize, so I realized I had to install Guest Additions to get that working. Mounted the Guest additions and tried to execute /run/media/.../VBoxLinuxAdditions.run . It started executing fine but stopped at the error Failed to set up vboxadd . It points to a log file, which points to another log file - neither with any useful information. I have the compilers installed, kernel headers installed, dkms compiled (and installed, though I later read that VBox 5.* doesn't need that anymore?). I can't seem to tease more information (logs etc) from the procedure. Virtualbox is 5.1.22, kernel 4.3.90, slackware is Slackware-current (about a year ago). gcc is 4.7.1. Any suggestions/ideas/debugging I could try?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427488", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168761/" ] }
427,517
It's the first time that I install Linux (Debian testing) on a computer with UEFI and not BIOS. I've installed Windows first and then Debian (like I always did), but system keeps booting straight into Windows 10 no matter what. I've tried many solutions: disabled secure-boot, tried multiple bios settings (CSM support enabled-disabled, UEFI boot only, UEFI and Legacy, etc), disabled Windows fastboot, tried installing rEFInd, tried with bcdedit from Windows shell, tried completely reinstalling the system.. The only way to boot into GRUB (which is installed and perfectly working) is to use a rEFInd USB. In this way I managed to add GRUB to EFI (which was missing) with efibootmgr EFI/debian/grubx64.efi command, but it's still not working. My computer is a Thinkpad T470.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278452/" ] }
427,531
I have the following data structure in my test file: "111","222","AAABBB","333","444","555" I want to transform the third field so there is a '-' after the 3rd [A-Z] like so: "111","222","AAA-BBB","333","444","555" Is using the split() function the best tool for this job?Here is what I've attempted: awk 'BEGIN{OFS=FS=","} {split($3, a, "[A-Z]{3}", seps); print seps[1]"/"seps[2]};' test The above command does what I want, but how can I print the whole row including my updated $3 field?Result: AAA-BBB
Short awk solution: awk 'BEGIN{ OFS=FS="," }{ sub(/[A-Z]{3}/, "&-", $3) }1' file [A-Z]{3} - regex pattern to match 3 uppercase letters & - stands for the precise substring that was matched by the regexp pattern The output: "111","222","AAA-BBB","333","444","555"
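To see it in action on a single record (a toy invocation of the same command):
$ echo '"111","222","AAABBB","333"' | awk 'BEGIN{ OFS=FS="," }{ sub(/[A-Z]{3}/, "&-", $3) }1'
"111","222","AAA-BBB","333"
One caveat worth knowing: the {3} interval notation requires a POSIX-compliant awk; with some older gawk releases you would need --re-interval for it to be honored.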
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231209/" ] }
427,596
Always wondered this, but never fully investigated - is there any way to get named parameters in bash? For example, I have this: function ql_maybe_fail { if [[ "$1" == "true" ]]; then echo "quicklock: exiting with 1 since fail flag was set for your 'ql_release_lock' command. " exit 1; fi} is it somehow possible to convert it to something like this: function ql_maybe_fail (isFail) { if [[ "$isFail" == "true" ]]; then echo "quicklock: exiting with 1 since fail flag was set for your 'ql_release_lock' command. " exit 1; fi}
Functions in Bash currently do not support user-named arguments.
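The usual idiom is to give the positionals names yourself at the top of the function with local — a sketch of the function from the question rewritten that way:
function ql_maybe_fail {
    local is_fail="$1"    # emulate a named parameter
    if [[ "$is_fail" == "true" ]]; then
        echo "quicklock: exiting with 1 since fail flag was set for your 'ql_release_lock' command. "
        exit 1
    fi
}
This buys you readable names inside the body, though callers still pass arguments by position.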
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
427,611
I'd like to make my IDE window partially transparent. I achieved this in Unity using compiz as described in the accepted answer to: How to make a window transparent in gnome . However I don't believe compiz will work for this with gnome unless I'm mistaken. There WAS a gnome extension for this but it has been abandoned and the github repo is gone. Anyone know of a way to achieve this? I'm on ubuntu 17.10
There's another extension called Glassy Gnome that works with newer versions of gnome-shell . For more details consult the included README .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278457/" ] }
427,705
Here is output from nproc vs nproc --all and other command found on internet. I still cannot understand why. It is a QEMU\KVM VM with CentOS 6.5 running undre other CentOS 6.5. Below are outputs from some other commands: [root@h1-nms ~]# nproc1[root@h1-nms ~]# nproc --all3[root@h1-nms ~]# lscpuArchitecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 3On-line CPU(s) list: 0-2Thread(s) per core: 1Core(s) per socket: 1Socket(s): 3NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 13Stepping: 3CPU MHz: 2194.710BogoMIPS: 4389.42Hypervisor vendor: KVMVirtualization type: fullL1d cache: 32KL1i cache: 32KL2 cache: 4096KNUMA node0 CPU(s): 0-2[root@h1-nms ~]# getconf _NPROCESSORS_ONLN3[root@h1-nms ~]# cat /proc/$$/limitsLimit Soft Limit Hard Limit UnitsMax cpu time unlimited unlimited secondsMax file size unlimited unlimited bytesMax data size unlimited unlimited bytesMax stack size 10485760 unlimited bytesMax core file size unlimited unlimited bytesMax resident set unlimited unlimited bytesMax processes 32000 32000 processesMax open files 64000 64000 filesMax locked memory 65536000 65536000 bytesMax address space unlimited unlimited bytesMax file locks unlimited unlimited locksMax pending signals 191509 191509 signalsMax msgqueue size 819200 819200 bytesMax nice priority 0 0Max realtime priority 0 0Max realtime timeout unlimited unlimited us[root@h1-nms ~]# grep "" /sys/devices/system/cpu/cpu*/online/sys/devices/system/cpu/cpu1/online:1/sys/devices/system/cpu/cpu2/online:1[root@h1-nms ~]# uname -aLinux h1-nms 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux[root@h1-nms ~]# cat /etc/*-releaseCentOS release 6.5 (Final)CentOS release 6.5 (Final)CentOS release 6.5 (Final)[root@h1-nms ~]#
As indicated in Kusalananda ’s answer , nproc distinguishes between the number of CPUs available to the current process, and the overall number of CPUs. On Linux systems, the CPUs available to the current process, when OpenMP isn’t involved, is determined by the process’s affinity mask. To see that, run taskset : taskset -p $$ or schedtool : schedtool $$ ( taskset is part of the util-linux package, and should be installed by default; schedtool is its own package, and might need to be installed if you want to use it.) In your case this should show that your shell is limited to a single processor, which is why nproc outputs 1 .
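If you want to widen the mask, taskset can also set it — for instance, to let the current shell (and thus nproc ) use all three CPUs:
taskset -pc 0-2 $$
nproc    # should now report 3
(Whether this sticks depends on how the original restriction was imposed, e.g. by cgroups or the VM configuration.)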
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52093/" ] }
427,711
Consider I've a large file containing information like (part of cat file ):
33.829037 -0.113737 1.157153
33.830036 -0.113620 1.157157
33.831036 -0.113495 1.157169
33.832035 -0.113365 1.157191
33.833035 -0.113242 1.157228
33.834034 -0.113157 1.157273
33.835033 -0.113071 1.157300
The first column contains float numbers in ascending order, and suppose I want to remove all the lines after 33.832035, so that the output should be:
33.829037 -0.113737 1.157153
33.830036 -0.113620 1.157157
33.831036 -0.113495 1.157169
33.832035 -0.113365 1.157191
How do I do that with sed or an appropriate text-processing tool? I've tried Deleting all lines after first occurrence of a string in a line but haven't succeeded in implementing it in my case.
With sed you can print up to the matching line and then quit, which also stops it from reading the rest of a large file:
sed '/^33\.832035 /q' file
The q command makes sed exit right after printing the line that matched, so everything after it is dropped. Since your first column is in ascending order, awk works too — exit as soon as the first column exceeds the cutoff:
awk '$1 > 33.832035 { exit } { print }' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
427,768
I have just installed the 'vagrant-vmware-fusion' 5.0.4 official plug-in from HashiCorp, in Vagrant. I am using Vagrant with VMWare Fusion 10 running in High Sierra. However, when doing the vagrant up for a VM, I am having an error. As advised in the error message, I have done a reboot, however the error is still happening. What to do? $ vagrant up --provider vmware_fusionBringing machine 'default' up with 'vmware_fusion' provider...==> default: Box 'hashicorp/precise64' could not be found. Attempting to find and install... default: Box Provider: vmware_desktop, vmware_fusion, vmware_workstation default: Box Version: >= 0==> default: Loading metadata for box 'hashicorp/precise64' default: URL: https://vagrantcloud.com/hashicorp/precise64==> default: Adding box 'hashicorp/precise64' (v1.1.0) for provider: vmware_fusion default: Downloading: https://vagrantcloud.com/hashicorp/boxes/precise64/versions/1.1.0/providers/vmware_fusion.box==> default: Successfully added box 'hashicorp/precise64' (v1.1.0) for 'vmware_fusion'!==> default: Cloning VMware VM: 'hashicorp/precise64'. This can take some time...==> default: Checking if box 'hashicorp/precise64' is up to date...==> default: Verifying vmnet devices are healthy...The VMware "vmnet" devices are failing to start. The most commonreason for this is collisions with existing network services. Forexample, if a hostonly network space collides with another hostonlynetwork (such as with VirtualBox), it will fail to start. Likewise,if forwarded ports collide with other listening ports, it willfail to start.Vagrant does its best to fix these issues, but in some cases itcannot determine the root cause of these failures.Please verify you have no other colliding network services running.As a last resort, restarting your computer often fixes this issue.
After rebooting a couple of times, I decided to try my luck launching "VMware Fusion" before invoking the vagrant up command. It indeed works; vagrant up does not start VMWare fusion, and so, you have to be running it for vagrant up to be able to deploy a VM. Ultimately, the error message could be more elucidative.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
427,781
I have a list of files (see below) I need to merge into separate files ( merge1 , merge2 ). merge1 would contain file1a_1 and file1a_2 ; merge2 would contain file2a_1 , file2a_2 . I tried find . -name "file*_*" -exec cat {} \; >> mergefile* i.e. Files: file1a_1X1a_2file1a_3file1b_1file1b_2file1b_3 The shell script puts all the files into merge file and doesn't separate them out individually. Any assistance would be appreciated
You don't need find for this; shell globbing already expands each pattern in sorted order, so plain cat per group does the job:
cat file1a_* > merge1
cat file2a_* > merge2
With more groups, a loop keeps it short:
for p in file1a file2a; do
    cat "${p}"_* > "merge_${p}"
done
The reason your find ... -exec cat {} \; >> mergefile* attempt put everything into one file is that the redirection is applied once, to the whole find command, so every cat appends to the same target instead of one merge file per group.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278677/" ] }
427,885
How can I check if a file is an archive and then extract it with 7z ? I understand that I could check it by file command but it won't work in scripts because of its output. I can't predict what type of archive it could be. I just want to do something like: Can I extract it by 7z? If yes, extract, if not, go further by bash sript.
filename=/tmp/foo.gz
if 7z t "$filename"; then
    7z e "$filename"
else
    echo "$filename is not an archive."
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/427885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278763/" ] }
427,940
AFAIC both sed and awk are general purpose text processing utilities with which a user can get quite similar results, in a slightly different syntax: with both, a user could add, replace/translate and delete content in a file. What is the main difference between these two general purpose text processing utilities and the tr text processing utility? I assume that tr 's functionality is included in both sed and awk , so it is just narrowed to the specific context of translating one set of characters into another, but I'm not sure I'm accurate here.
Yes, tr is a "simple" tool compared to awk and sed, and both awk and sed can easily mimic most of its basic behaviour, but neither sed nor awk has "tr built in" in the sense that there is some single thing in them that exactly does all the things that tr does. tr works on characters, not strings, and converts characters from one set to characters in another (as in tr 'A-Z' 'a-z' to lowercase input). Its name is short for "translate" or "transliterate". It does have some nice features, like being able to squeeze runs of the same repeated character into a single occurrence, which may be a bit fiddly to implement with a single sed expression. For example, tr -s '\n' will squeeze all consecutive newlines in the input into single newlines.

To characterize the three tools crudely:

tr works on characters (changes or deletes them).
sed works on lines (modifies words or other parts of lines, or inserts or deletes lines).
awk works on records with fields (by default whitespace-separated fields on a line, but this may be changed by setting FS and RS).
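To make the mimicry concrete, here is the classic lowercasing task in all three tools; note that the sed version relies on the GNU \L extension, so it is not portable:

tr 'A-Z' 'a-z'               < input
awk '{ print tolower($0) }'  < input
sed 's/.*/\L&/'              < input    # GNU sed only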
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/427940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
428,175
In Ubuntu 16.04 I have a Bash file containing a few different functions for automating various common tasks on my system. I have sourced that file in .bashrc so I can comfortably call each function from anywhere in the terminal in time of need; hence we can say that "the functions themselves are sourced". Sometimes I need to use one of these sourced functions from inside a script, and I need to prime this with:

export -f myFunc_0 myFunc_1 myFunc_2 ...

Otherwise, I won't be able to use these functions. How could I apply that priming to all functions in the file, without naming specific functions?
If you use set -a either in your .bashrc or within the function file itself, it will mark all functions (and variables) to be exported. From the bash manual, section 4.3.1 "The Set Builtin":

-a
Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands.

This may cause some undesirable results if you are setting variables that you don't want exported, but you could add something like this to your .bashrc:

set -a
source ~/my_funcs
set +a
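If you'd rather not toggle set -a at all, another sketch is to export every function that is currently defined, by parsing the names out of declare -F, which prints one "declare -f name" line per function (this assumes none of your function names contain whitespace):

export -f $(declare -F | awk '{ print $3 }')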
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/428175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
428,217
I am trying to create a Raspberry Pi spy-cam bug. I am trying to make it so the new file created by the various processes is named with

NOW=`date '+%F_%H:%M:%S'`;

which works fine, but it requires an echo to update the time. $NOW is also in the /home/pi/.bashrc file; same issue, it does not update without . ~/.bashrc. I found the following on this forum, and it works:

#! /bin/bash
NOW=`date '+%F_%H:%M:%S'`;
filename="/home/pi/gets/$NOW.jpg"
raspistill -n -v -t 500 -o $NOW.jpg;
echo $filename;

I don't get how it works, because NOW is set before the -o output of raspistill and is in quotes. Thank you all in advance!
When you do

NOW=`date '+%F_%H:%M:%S'`

or, using more modern syntax,

NOW=$( date '+%F_%H:%M:%S' )

the variable NOW will be set to the output of the date command at the time when that line is executed. If you do this in ~/.bashrc, then $NOW will be a timestamp that tells you when you started the current interactive shell session. You could also set the variable's value with

printf -v NOW '%(%F_%H:%M:%S)T' -1

if you're using bash release 4.2 or later. This prints the timestamp directly into the variable without calling date.

In the script that you are showing, the variable NOW is being set when the script is run (this is what you want). When the assignment

filename="/home/pi/gets/$NOW.jpg"

is carried out, the shell will expand the variable in the string. It does this even though it is in double quotes. Single quotes stop the shell from expanding embedded variables (this is not what you want in this case). Note that you don't seem to actually use the filename variable in the call to raspistill though, so I'm not certain why you set its value, unless you just want it output by echo at the end.

In the rest of the code, you should double quote the $NOW variable expansion (and $filename). If you don't, and later change how you define NOW so that it includes spaces or wildcards (filename globbing patterns), the commands that use $NOW may fail to parse their command line properly. Compare, e.g.,

string="hello * there"
printf 'the string is "%s"\n' $string

with

string="hello * there"
printf 'the string is "%s"\n' "$string"

Related things:

About backticks in command substitutions: Have backticks (i.e. `cmd`) in *sh shells been deprecated?
About quoting variable expansions: Security implications of forgetting to quote a variable in bash/POSIX shells and Why does my shell script choke on whitespace or other special characters?
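Putting those pieces together, a cleaned-up sketch of the script that actually uses filename (the /home/pi/gets path and the raspistill flags are taken from the question):

#!/bin/bash
NOW=$(date '+%F_%H:%M:%S')
filename="/home/pi/gets/$NOW.jpg"
raspistill -n -v -t 500 -o "$filename"
echo "$filename"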
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/428217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232342/" ] }
428,233
Is there an existing tool which can be used to download big files over a bad connection? I have to regularly download a relatively small file: 300 MB, but the slow (80-120 KBytes/sec) TCP connection randomly breaks after 10-120 seconds. (It's a big company's network. We contacted their admins (working from India) multiple times, but they can't or don't want to do anything.) The problem might be with their reverse proxies / load balancers.

Up until now I used a modified version of pcurl: https://github.com/brunoborges/pcurl

I changed this line:

curl -s --range ${START_SEG}-${END_SEG} -o ${FILENAME}.part${i} ${URL} &

to this:

curl -s --retry 9999 --retry-delay 3 --speed-limit 2048 --speed-time 10 \
    --retry-max-time 0 -C - --range ${START_SEG}-${END_SEG} -o ${FILENAME}.part${i} ${URL} &

I had to add --speed-limit 2048 --speed-time 10 because the connection mostly just hangs for minutes when it fails.

But recently even this script can't complete. One problem is that it seems to ignore the -C - part, so it doesn't "continue" the segment after a retry. It seems to truncate the related temp file and start from the beginning after each failure. (I think the --range and the -C options cannot be used together.)

The other problem is that this script downloads all segments at the same time. It cannot have 300 segments, of which only 10 are being downloaded at a time.

I was thinking of writing a download tool in C# for this specific purpose, but if there's an existing tool, or if the curl command could work properly with different parameters, then I could save some time.

UPDATE 1: Additional info: The parallel download functionality should not be removed, because they have a bandwidth limit (80-120 KBytes/sec, mostly 80) per connection, so 10 connections can give a tenfold speedup. I have to finish the file download in 1 hour, because the file is generated hourly.
lftp ( Wikipedia ) is good for that. It supports a number of protocols, can download files using several concurrent parallel connections (useful where there's a lot of packet loss not caused by congestion), and can automatically resume downloads. It's also scriptable.

Here including the fine-tuning you came up with (credits to you):

lftp -c 'set net:idle 10
         set net:max-retries 0
         set net:reconnect-interval-base 3
         set net:reconnect-interval-max 3
         pget -n 10 -c "https://host/file.tar.gz"'
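Since pget -c resumes partially transferred segments, the whole invocation can also be wrapped in a retry loop for the rare case where lftp itself gives up; a sketch, reusing the same settings:

until lftp -c 'set net:idle 10
               set net:max-retries 0
               set net:reconnect-interval-base 3
               set net:reconnect-interval-max 3
               pget -n 10 -c "https://host/file.tar.gz"'; do
    sleep 3    # brief pause before retrying the whole transfer
done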
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/428233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195913/" ] }
428,243
I often find myself searching for a file in the current folder and subfolders based on part of its name. It seems to me that in this case find + grep requires less typing than find alone. E.g.

find . | grep user       # 19 chars typed

has to be written with only find as:

find . -path "user*"     # 21 chars typed

It feels kind of silly to type more characters when using a single command that was meant to find files than when using two of them in combination. Is there any way of making the use of find alone more efficient in terms of characters typed?
Yes,

ff () {
    find . -path "*$1*"
}

This function is invoked as ff user and will return all pathnames (of files, directories etc.) in or beneath the current directory that contain the given string. The function definition should go into your ~/.bashrc file (or the corresponding shell initialization file that is used by your shell) and will be usable in the next interactive shell that you start.

The following variation only considers the filename portion of the pathname:

ff () {
    find . -name "*$1*"
}

If you also want to restrict the results to only regular files, then add -type f to the find invocation:

ff () {
    find . -type f -name "*$1*"
}

Note that your command find . -path "user*" will never output anything. This is because every considered pathname will start with . . And finally a word of caution: I'm assuming this will be used interactively and that you will use your eyes to look at the result. If you're planning to use it in a script or for doing any looping over filenames returned by the function, please see " Why is looping over find's output bad practice? ".
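If case-insensitive matching is wanted for interactive searches, find also has -iname and -ipath variants (supported by both GNU and BSD find); a sketch:

ffi () {
    find . -iname "*$1*"
}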
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279011/" ] }
428,263
I'm using a Keycloak server. When I run this command:

standalone.sh

it launches the server, and I'm not able to stop it until I press Ctrl-C. I thought about running something like this:

standalone.sh && jboss-cli.sh -c --commands=shutdown

or

standalone.sh ; jboss-cli.sh -c --commands=shutdown

Based on this question, What are the shell's control and redirection operators?, I found that ; will run one command after another has finished, irrespective of the outcome of the first, and that && is used to build AND lists: it allows you to run one command only if another exited successfully. But in my case the first task does not exit and is still executing. Is there any solution to run another task which will stop the first?
Both ; and && only run the second command after the first one has exited, and standalone.sh does not exit until the server stops, so neither operator will ever reach jboss-cli.sh. The usual way around this is to start the server as a background job with the & control operator: the shell returns immediately, the server keeps running, and you can issue the shutdown command whenever you actually want to stop it, from the same shell or from another terminal.
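A minimal sketch, reusing the shutdown invocation from the question (double-check the exact jboss-cli flags against your version):

standalone.sh > /tmp/keycloak.log 2>&1 &    # background the server, keep its output
# ... later, when the server should stop:
jboss-cli.sh -c --commands=shutdown

If the server should also survive the terminal being closed, start it with nohup standalone.sh & instead.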
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/428263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279024/" ] }