Dataset columns: source_id (int64, 1 to 74.7M) · question (string, lengths 0 to 40.2k) · response (string, lengths 0 to 111k) · metadata (dict)
421,867
I'm using ASUS-Chromebook-Flip-C302CA / Google Chrome OS - Version 65.0.3325.35 (Official Build) dev (64-bit) and I'm trying to follow Visual Studio Code for Chromebooks and Raspberry Pi, yet failing to execute the last step with the following error:

chronos@localhost ~ $ . <( wget -O - https://code.headmelted.com/installers/chromebook.sh )
bash: wget: command not found
chronos@localhost ~ $ wget :
chronos@localhost ~ $ whereis wget
wget:
chronos@localhost ~ $ which wget
which: no wget in (/usr/local/bin:/usr/bin:/bin:/opt/bin)
chronos@localhost ~ $ find / -name wget >/dev/null 2>&1
chronos@localhost ~ $
An alternative is curl ("curl - transfer a URL"): . <( curl --silent https://code.headmelted.com/installers/chromebook.sh )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1187/" ] }
421,891
Say I have archive.zip that contains a directory named myDir (it's actually the only directory inside it) and that this directory contains a file named x . unzip archive.zip -d ~/ brings: ~/myDir/x mv ~/myDir ~/myRenamedDir , brings: ~/myRenamedDir/x Is there a way to rename my myDir with the extraction, so the outcome would be ~/myRenamedDir/x , directly, without needing mv ?
Consider using unzip's -j option:

-j   junk paths. The archive's directory structure is not recreated; all files are deposited in the extraction directory (by default, the current one).

Source: unzip(1)

unzip -d ~/myRenamedDir/ -j <FILE>
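If the archive has nested directories whose structure you want to keep, -j (which flattens all paths) may not be what you want. A hedged alternative sketch, assuming bsdtar (libarchive) is installed: its -s option can rewrite the top-level directory name during extraction.

# assumption: bsdtar is available; -s rewrites path names on extraction
bsdtar -xf archive.zip -s '|^myDir|myRenamedDir|' -C ~/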
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
421,898
I know that some shells accept this kind of test:

t() { [[ $var == *$'\n'* ]] && res=yes || res=no
  printf '%s ' "$res"; }
var='ab
cd'
t
var='abcd'
t
echo

on execution:

$ bash ./script
yes no

What is the POSIX (dash) working equivalent? Is the following a reliable way to test?

nl='
'
t() {
  case "$var" in
    *$nl* ) res=yes ;;
    * ) res=no ;;
  esac
  printf '%s ' "$res"
}
var='ab
cd'
t
var='abcd'
t
echo
You can put a hard newline in a variable, and pattern match with case .

$ cat nl.sh
#!/bin/sh
nl='
'
case "$1" in
  *$nl*) echo has newline ;;
  *) echo no newline ;;
esac
$ dash nl.sh $'foo\nbar'
has newline
$ dash nl.sh $'foobar'
no newline

The alternative way to make the newline is something like this:

nl=$(printf "\nx"); nl=${nl%x}

The obvious command substitution doesn't work because trailing newlines are removed by the substitution.
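Tying that back to the question's t() function, a minimal portable sketch (not part of the original answer) would be:

# build a variable holding a single newline without relying on $'\n'
nl=$(printf '\nx'); nl=${nl%x}
t() {
  case $var in
    *"$nl"*) res=yes ;;
    *)       res=no ;;
  esac
  printf '%s ' "$res"
}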
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
421,918
I prefer the breeze-dark KDE theme. Unfortunately, some gnome apps (such as Firefox) are problematic. In particular, in text input fields in Firefox, one ends up with white text on a white background or dark text on a dark background. I have:

Settings -> Colors -> Options -> Apply colors to non-Qt applications: enabled
Settings -> Application Style -> Gnome Application Style (GTK): GTK3 Theme: Adwaita-dark, Icon theme: Adwaita

I am not aware of any Adwaita-dark icon theme after extensive searching. To enable breeze-dark for KDE I have:

Settings -> Application Style -> Widget Style: Breeze
Settings -> Workspace Theme -> Desktop Theme: Breeze Dark

It should not be necessary, but I have also installed https://addons.mozilla.org/en-Us/firefox/addon/breeze-dark/ .

cat ~/.config/gtk-3.0/settings.ini
[Settings]
gtk-application-prefer-dark-theme=true
gtk-button-images=1
gtk-cursor-theme-name=ComixCursors-Opaque-Orange
gtk-fallback-icon-theme=Numix-Circle
gtk-font-name=Liberation Sans Regular 11
gtk-icon-theme-name=Adwaita
gtk-menu-images=1
gtk-primary-button-warps-slider=1
gtk-theme-name=Adwaita-dark
gtk-toolbar-style=GTK_TOOLBAR_ICONS

After all of this, Firefox text input fields still have either white text on a white background or dark text on a dark background, making them impossible to read. (A temporary workaround is to highlight the text in a field so I can see what was entered, but that is very clumsy.) The following question claims this closely related issue was a bug that was fixed: KDE - Problem with dark themes However, I am running Arch Linux with the latest KDE Plasma5 and what appears to be the same issue still exists. However, I notice it mainly in text input fields, not necessarily drop down combo boxes.
Tested in Plasma 5.12 (Kubuntu 18.04) and 5.14 (18.10 upgraded to backports). I can select "Breeze Dark" for the GTK themes. I have breeze-gtk-theme installed. Also, look for other dark themes under "Settings -> Application Style -> Gnome Application Style (GTK)" - "Get new themes". I have seen the Firefox problem when in the past I used dark Kvantum themes, but only in Firefox. Not happening now though.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
421,919
I want to let my Linux device synchronize its local time with the local time of another Linux server without any other references. I don't care about the actual time. I want my fake time. Therefore I don't need any other ntp servers for reference. For example I just set the local time of my Linux server 1.2.3.4 to 2033-02-23 15:23:10. I want to let my Linux device synchronize to this time. The config file /etc/ntp.conf on the Linux server is:

restrict 127.0.0.1
restrict ::1
server 127.127.1.1
fudge 127.127.1.1 stratum 8
disable monitor
logfile /var/log/ntp/ntp.log
pidfile /var/run/ntpd.pid
driftfile /var/lib/ntp/drift
leapfile /etc/ntp.leapseconds

On the Linux client, I type ntpdate 1.2.3.4 , and then it gives me the error:

5 Feb 08:26:39 ntpdate[31059]: no server suitable for synchronization found

Why is that? I have tested the -d parameter, i.e. ./ntpdate -d 1.2.3.4 , and it says:

5 Feb 08:40:54 ntpdate[22958]: ntpdate [email protected] Mon Feb 5 10:02:23 UTC 2018 (1)
Looking for host 1.2.3.4 and service ntp
host found : 1.2.3.4
transmit(1.2.3.4)
receive(1.2.3.4)
transmit(1.2.3.4)
receive(1.2.3.4)
transmit(1.2.3.4)
receive(1.2.3.4)
transmit(1.2.3.4)
receive(1.2.3.4)
1.2.3.4: Server dropped: strata too high
server 1.2.3.4, port 123
stratum 16, precision -23, leap 11, trust 000
refid [1.2.3.4], delay 0.02573, dispersion 0.00000
transmitted 4, in filter 4
reference time: 00000000.00000000 Thu, Feb 7 2036 14:28:16.000
originate timestamp: de22269c.ab72b039 Mon, Feb 5 2018 8:41:00.669
transmit timestamp: de22269c.ab0e1fba Mon, Feb 5 2018 8:41:00.668
filter delay: 0.02579 0.02574 0.02573 0.02574 0.00000 0.00000 0.00000 0.00000
filter offset: 0.001443 0.001416 0.001417 0.001418 0.000000 0.000000 0.000000 0.000000
delay 0.02573, dispersion 0.00000
offset 0.001417
5 Feb 08:41:00 ntpdate[22958]: no server suitable for synchronization found

What is wrong with it? How can I solve this problem? P.S. version information:

server:
[xxxx@xxxx:~]$ /usr/sbin/ntpd --version
ntpd [email protected] Mon Feb 5 10:02:23 UTC 2018 (1)

client:
[xxxx@xxxx:~]$ ./ntpdate -v
5 Feb 08:36:40 ntpdate[15840]: ntpdate [email protected] Mon Feb 5 10:02:23 UTC 2018 (1)
5 Feb 08:36:40 ntpdate[15840]: no servers can be used, exiting
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60999/" ] }
421,922
I have this read operation:

read -p "Please enter your name:" username

How could I verify the user's name, in one line? If it's not possible in a sane way in one line, maybe a Bash function put inside a variable is a decent solution? Name is just an example; it could be a password or any other common form value. Verifying here means: requesting the user to insert the name twice and ensuring the two values are the same.
That the user typed (or, possibly, copied and pasted...) the same thing twice is usually done with two read calls, two variables, and a comparison.

read -p "Please enter foo" bar1
read -p "Please enter foo again" bar2
if [ "$bar1" != "$bar2" ]; then
  echo >&2 "foos did not match"
  exit 1
fi

This could instead be done with a while loop and condition variable that repeats the prompts-and-checks until a match is made, or possibly abstracted into a function call if there are going to be a lot of prompts for input.
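A rough sketch of that loop variant (prompt and variable names are only illustrative, not from the answer above):

while true; do
  read -p "Please enter foo: " bar1
  read -p "Please enter foo again: " bar2
  [ "$bar1" = "$bar2" ] && break
  echo >&2 "foos did not match, try again"
done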
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
421,934
I would like to move a directory from one location to another but when I do I can see that the timestamp gets changed. Is there any way to retain the timestamp as original? Have looked at the man page of mv but couldn't find any existing options.
Use cp as follows; mv doesn't have an option for this.

cp -r -p /path/to/sourceDirectory /path/to/destination/

From man cp:

-p     same as --preserve=mode,ownership,timestamps
--preserve[=ATTR_LIST]
       preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all

Then, after the copy is done, delete the sourceDirectory .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274240/" ] }
421,969
I need to download a SSL cert in PEM format from a HTTPS website, https://api.paczkomaty.pl . So I am using OpenSSL to do that:

openssl s_client -connect api.paczkomaty.pl:443 > myfile
openssl x509 -in myfile -text

Here's the result:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0d:5a:87:30:7e:43:96:05:5e:20:f3:2f:14:a4:d9:47
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = GeoTrust Inc., CN = RapidSSL SHA256 CA
        Validity
            Not Before: Mar 11 00:00:00 2017 GMT
            Not After : Apr 10 23:59:59 2018 GMT
        Subject: CN = *.grupainteger.pl
(...)

However, when I visit the website via a browser (Chrome or Firefox) and inspect its certificate, it shows me a different one; its Serial Number is different, and its validity is from 15/1/2018 to 1/9/2018. Why is OpenSSL fetching a different certificate?
Why is OpenSSL fetching a different certificate? s_client by default does not send SNI (Server Name Indication) data but a browser does. The server may choose to respond with a different certificate based on the contents of that SNI - or if no SNI is present then it will serve a default certificate. Try adding -servername api.paczkomaty.pl to your s_client command line
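Put together, the full fetch could look something like this (a sketch based on the suggestion above, not a verbatim command from the answer):

openssl s_client -connect api.paczkomaty.pl:443 -servername api.paczkomaty.pl < /dev/null \
  | openssl x509 -outform PEM > api.paczkomaty.pl.pem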
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
421,977
My OS: debian9. The filesystem on my disk:

$ sudo blkid | awk '{print $1 ,$3}'
/dev/sda2: TYPE="ext4"
/dev/sda1: TYPE="vfat"
/dev/sda3: TYPE="ext4"
/dev/sda4: TYPE="ext4"
/dev/sda5: TYPE="swap"

Now to chattr +i for my /etc/resolv.conf:

sudo chattr +i /etc/resolv.conf
chattr: Operation not supported while reading flags on /etc/resolv.conf
ls -al /etc/resolv.conf
lrwxrwxrwx 1 root root 31 Jan 8 15:08 /etc/resolv.conf -> /etc/resolvconf/run/resolv.conf
sudo mount -o remount,acl /
sudo chattr +i /etc/resolvconf/run/resolv.conf
chattr: Inappropriate ioctl for device while reading flags on /etc/resolvconf/run/resolv.conf

How do I set chattr +i for my /etc/resolv.conf? /dev/sda1 is empty for windows. My debian is installed on /dev/sda2.

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 1948840 0 1948840 0% /dev
tmpfs 392020 5848 386172 2% /run
/dev/sda2 95596964 49052804 41644988 55% /

acl is installed.

$ dpkg -l acl
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============-=================================
ii acl 2.2.52-3+b1 amd64 Access control list utilities

No output info from these findmnt commands:

sudo findmnt -fn / | grep -E "acl|user_xattr"
sudo findmnt -fn / | grep vfat
sudo findmnt -fn $(dirname $(realpath /etc/resolv.conf)) | grep tmpfs
Your /etc/resolv.conf is probably a symlink. See this explanation for further information. You could try:

chattr +i "$(realpath /etc/resolv.conf)"

Does the root mountpoint support Access Control Lists (acl) or Extended Attributes? Check it via:

findmnt -fn / | grep -E "acl|user_xattr" || echo "acl or user_xattr mount option not set for mountpoint /"

Is your root partition of the type 'VFAT'? I believe 'VFAT' does not support ACLs. Check it via:

findmnt -fn / | grep vfat

Or maybe your symlink target directory is a tmpfs? ACLs are lost on tmpfs. Test it:

findmnt -fn $(dirname $(realpath /etc/resolv.conf)) | grep tmpfs && echo $(dirname $(realpath /etc/resolv.conf)) is tmpfs

cheers
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243284/" ] }
421,985
I'm getting an invalid signature error when I try to apt-get update:

Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:2 http://dl.google.com/linux/chrome/deb stable Release
Hit:4 https://download.sublimetext.com apt/dev/ InRelease
Hit:5 http://deb.i2p2.no unstable InRelease
Get:6 http://ftp.yzu.edu.tw/Linux/kali kali-rolling InRelease [30.5 kB]
Err:6 http://ftp.yzu.edu.tw/Linux/kali kali-rolling InRelease
  The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>
Reading package lists... Done
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://ftp.yzu.edu.tw/Linux/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>
W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>
W: Some index files failed to download. They have been ignored, or old ones used instead.

Why is this happening? How can I fix it?
Per https://twitter.com/kalilinux/status/959515084157538304 , your archive-keyring package is outdated. You need to do this (as root):

wget -q -O - https://archive.kali.org/archive-key.asc | apt-key add -
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/421985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274283/" ] }
422,005
I have this cat then sed operation:

cat ${location_x}/file_1 > ${location_y}/file2
sed -i "s/0/1/g" /${location_y}/file2

Can this be done in a single line? I might have missed such a way here, but that explanation seems to me to deal with the opposite of that, and I fail to unite the above cat and sed into one operation without a double ampersand ( && ) or semicolon ( ; ), and I'm getting closer to assuming it's not possible, but it's important for me to ask here because I might be wrong. Not all readers are English speakers familiar with the terms "ampersand" or "semicolon", so I elaborated on these.
Yes: sed 's/0/1/g' "$location_x/file_1" >"$location_y/file2" Your code first makes a copy of the first file and then changes the copy using inline editing with sed -i . The code above reads from the original file, does the changes, and writes the result to the new file. There is no need for cat here. If you're using GNU sed and if the $location_x could contain a leading - , you will need to make sure that the path is not interpreted as a command line flag: sed -e 's/0/1/g' -- "$location_x/file_1" >"$location_y/file2" The double dash marks the end of command line options. The alternative is to use < to redirect the contents of the file into sed . Other sed implementation (BSD sed ) stops parsing the command line for options at the first non-option argument whereas GNU sed (like some other GNU software) rearranges the command line in its parsing of it. For this particular editing operation (changing all zeros to ones), the following will also work: tr '0' '1' <"$location_x/file_1" >"$location_y/file2" Note that tr always reads from standard input, hence the < to redirect the input from the original file. I notice that on your sed command line, you try to access /${location_y}/file2 , which is different from the path on the line above it (the / at the start of the path). This may be a typo.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/422005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
422,040
On the X desktop, I occasionally used gksudo or just sudo somegui to launch GUI applications as another user, including root. I recently discovered that this is not possible on contemporary (early 2018) Wayland desktops. All applications must launch as the current desktop user, and are limited to the privileges of that user. Is this a permanent feature of Wayland (there by design), or is su-type usage an enhancement that has not yet been implemented? I'm looking for a documented statement (roadmap, design page...), not preference or opinion.
Is this a permanent feature of Wayland (there by design)

No. This has nothing to do with the wayland protocol. It is rather a question of environment setup. Wayland uses a socket, its name is stored in WAYLAND_DISPLAY . It is located in XDG_RUNTIME_DIR that is normally set up for user access only. But root can access it, too. (Some applications also regard XDG_SESSION_TYPE which can have values wayland or x11 to decide whether to use X or Wayland.) sudo deletes most environment variables including XDG_RUNTIME_DIR and WAYLAND_DISPLAY . You can run wayland applications as root with:

sudo env XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR WAYLAND_DISPLAY=$WAYLAND_DISPLAY waylandapplication

or shorter with -EH to preserve almost all environment variables (but setting HOME to /root ). This will include DISPLAY and XAUTHORITY for Xwayland access, too:

sudo -EH application

Though, if the application running as root writes anything in XDG_RUNTIME_DIR , it can cause file permission issues for user applications. However, running graphical applications as root in wayland is much less a security issue than it is in X11. To avoid using X11 accidentally, you can run without DISPLAY :

sudo -EH env DISPLAY= waylandapplication

I'm looking for a documented statement (roadmap, design page...), not preference or opinion.

The Wayland documentation mentions WAYLAND_DISPLAY , but I do not find anything about XDG_RUNTIME_DIR . Though, all wayland compositors including the reference implementation weston depend on XDG_RUNTIME_DIR . If the socket named by WAYLAND_DISPLAY were at another location, it would not be a problem to run applications from arbitrary users on the same wayland display. But XDG_RUNTIME_DIR is specified to be restricted to the logged-in user and should contain user related sockets:

$XDG_RUNTIME_DIR defines the base directory relative to which user-specific non-essential runtime files and other file objects (such as sockets, named pipes, ...) should be stored. The directory MUST be owned by the user, and he MUST be the only one having read and write access to it. Its Unix access mode MUST be 0700.

The issues with running another user or root on wayland are rather related to the XDG_RUNTIME_DIR specification than to wayland itself. If you specify a custom XDG_RUNTIME_DIR in /tmp with arbitrary access (thus breaking its specification), all users can use the wayland display. There is some hope for the future not to need XDG_RUNTIME_DIR , but it depends on the implementation. Wayland documentation, chapter 4: Beginning in Wayland 1.15, implementations can optionally support server socket endpoints located at arbitrary locations in the filesystem by setting WAYLAND_DISPLAY to the absolute path at which the server endpoint listens.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269/" ] }
422,042
I would like to know more about how DST works in Debian (and probably Debian-based distros). Imagine the following: A computer running the newest version of debian is using the timezone Europe/Zurich . Daylight savings time will start on 2018-03-25 02:00 , meaning it will change from UTC+1 to UTC+2 . What would happen on that day if the computer has no network connection (no NTP servers basically)? Would it still alter the UTC offset to +2 or is it absolutely dependent on NTP? Why?
The kernel's idea of time doesn't track daylight savings; usually we let the kernel record time in UTC. This has the advantage that it's unambiguous and simple. User-facing programs have a timezone, normally passed in the TZ environment variable, so that they have a consistent view. This is used for converting the internal representation to or from a form that users are happy with. You can see this in effect if you do

TZ=Europe/Zurich ls -logd
TZ=Australia/Perth ls -logd

The same timestamp is presented two different ways. The daylight savings adjustment is kept in the timezone database that's installed on your system. As long as that is correct, times in the summer will convert differently from UTC than do times in the winter (in regions with DST). And the timezone database knows all about the rules in the past and has a good idea about the future, except when politicians start meddling. You only need an external connection if the rules change and you have to install an updated timezone database.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150636/" ] }
422,050
Could someone perhaps give their advice on this. I am wanting to take an output file from a mysql query ran:

$code $IP
123456 192.168.26.176
10051 192.168.20.80
234567 192.168.26.178

and run it in command:

rsync -rvp *.$code.extension root@$IP:/path/of/dest

I am trying this:

while read -r line ; do echo "$SOURCE_TRANSMIT_DIR"*."$code".class.json "$DEST_HOST"@"$IP":"$DEST_TRANSMIT_DIR" ; done

Output I get is this:

/opt/file/outgoing/*.12345610051234567.class.json [email protected]:/opt/file/incoming/

Where I would like it to read like this in separate rsync commands:

rsync -rvp *.123456.extension [email protected]:/path/of/dest
rsync -rvp *.234567.extension [email protected]:/path/of/dest
rsync -rvp *.345678.extension [email protected]:/path/of/dest

Hopefully this explains better, sorry for the terrible explanation.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274321/" ] }
422,062
I have an httpd server running on port 80 and shiny-server running on port 3838. When I try curl 127.0.0.1:3838 I get the index file being served on the shiny-server. But when I try curl localhost:3838 curl times out without retrieving any content. Why? Here are the contents of my /etc/hosts file:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
<my-ipv4-address> www.<mywebsite>.com <mywebsite>.com
<my-ipv6-address> www.<mywebsite>.com <mywebsite>.com

and the results of getent ahosts localhost:

::1 STREAM localhost
::1 DGRAM
::1 RAW
127.0.0.1 STREAM
127.0.0.1 DGRAM
127.0.0.1 RAW
As you can see from getent ahosts localhost , the IPv6 entries for localhost take priority over the IPv4 entries. (See man getent and man nss if you want to know why this command helps). Curl is dual stack and can resolve both IPv6 and IPv4 addresses, so it uses the IPv6 address. But shiny server doesn't work with IPv6, so it times out, as verified when using the IPv6 address directly. OTOH, if you use 127.0.0.1 , this is an IPv4 address, so it succeeds.
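A quick way to confirm this (illustrative commands, not from the original answer) is to force curl onto one address family at a time:

curl -4 localhost:3838   # force IPv4: should return the index page
curl -6 localhost:3838   # force IPv6: should hang/time out if the server only listens on IPv4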
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89204/" ] }
422,101
Today my debian 9 based NAS started to write out this error in the startup phase; the same is reported in red by calling journalctl -xe:

ACPI Error: SMBus/IPMI/Generic write requires Buffer of length 66, found length 32 (20160831/exfield-427)
ACPI Error: Method parse/execution failed [\SB.PMIO._PMM] (Node ffff8a71878aeaf0), AE_AML_BUFFER_LIMIT (20160831/psparse-543)
ACPI Exception: AE_AML_BUFFER_LIMIT, Evaluating _PMM (20160831/power_meter-338)

I have a double raid1 ( sda/sdb and sdc/sdd ) inside this NAS; could it be that one of the disks is going to be defective? Should I be worried? What could have caused this error and how can I fix it? Could it be an error caused by the fact that I sometimes pressed the power button instead of logging in and manually running shutdown -h now? Thanks
ACPI is the subsystem that uses information from the BIOS to control hardware, mostly for power management, temperature sensing, and related issues. SMBus is a simple two-wire communications protocol, used as side channel to access temperature sensors and other hardware. So your BIOS contains sloppy ACPI data that specifies the wrong buffer size for a write action on that channel. _PMM seems to indicate that it is related to some chip that measures something power related. Which means it probably fails to initialize some chip that monitors voltage levels somewhere. Which is usually not a problem (unless you want to measure voltage levels, and shut down your computer if there's something odd, which is a feature you have to install and set up, and is usually only used on servers). You can investigate that by looking at the ACPI data, but that requires a bit of expertise. Sloppy BIOS data is nothing unusual (unfortunately), vendors suck at setting up the BIOS properly, as they only test with the pre-installed Windows drivers which may work even with faulty data.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143935/" ] }
422,111
In what file are the keyboard shortcuts saved in Linux Mint Cinnamon 18 ? I want to backup the shortcuts so if I know the file where the shortcuts are saved, I can simply create a symlink to the shortcut file after reinstalling the OS.
You can utilize the following to export your keyboard shortcuts to a file: $ dconf dump /org/cinnamon/desktop/keybindings/ > dconf-settings.conf This requires the dconf-cli package to be installed. Then, to import the file after making any desired keybinding changes: $ dconf load /org/cinnamon/desktop/keybindings/ < dconf-settings.conf
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/422111", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274388/" ] }
422,147
Is there a command that can be used to figure out which packages are installed system-wide in NixOS? For instance, I can list packages installed for my current user with nix-env -q . I don't know of any way to list packages installed on the whole system from /etc/nixos/configuration.nix . There are two separate instances I would want to use this: Let's say I add a package to /etc/nixos/configuration.nix in environment.systemPackages , but I forget whether I have run nixos-rebuild switch yet. It would be nice if there was a command I could run to check whether the package is in the system environment. I have programs.bash.enableCompletion set to true in /etc/nixos/configuration.nix . Without looking at the option in nixpkgs , I would guess that this option would set the bash-completion package to be installed. It would be nice if there was a command that I could run that checked whether the bash-completion package actually was in the system environment.
There's no specific tool for this. You may like the system.copySystemConfiguration option (see the docs for "caveats"). You'll get relatively close with nix-store -q --references /run/current-system/sw – the list of nix store paths directly contained in systemPackages , but note that various NixOS options may add packages in there.
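For the first use case, a hedged sketch (the package name is just an example) is to filter that list for the package you expect to be present:

nix-store -q --references /run/current-system/sw | grep -i bash-completion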
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80019/" ] }
422,183
When looking at Unix, I always find the number of terminal commands to be a little overwhelming. TinyCoreLinux, for example my favorite distribution, has over 300 commands. I can't tell how necessary a lot of those commands are. How many commands did the original Unix box have? I'm essentially hoping that, by going back to the original box, we can pare down the number of commands for newcomers. Yes, I understand you don't have to learn all the commands, but I know I definitely feel a sense of completion when I have learned all the commands for a distribution (which hasn't exactly happened yet).
The first edition of Unix had 60-odd commands, as documented in the manual (also available as a web site ):

ar                           ed       rkl
as                           find     rm
/usr/b/rc (the B compiler)   for      rmdir
bas                          form     roff
bcd                          hup      sdate
boot                         lbppt    sh
cat                          ld       stat
chdir                        ln       strip
check                        ls       su
chmod                        mail     sum
chown                        mesg     tap
cmp                          mkdir    tm
cp                           mkfs     tty
date                         mount    type
db                           mv       umount
dbppt                        nm       un
dc                           od       wc
df                           pr       who
dsw                          rew      write
dtf                          rkd
du                           rkf

There were a few more commands, such as /etc/glob , which were documented in another command’s manual page ( sh in /etc/glob ’s case); but the list above gives a good idea. Many of these have survived and are still relevant; others have gone the way of the dodo (thankfully, in dsw ’s case!). It’s easy enough to read all the Unix V1 manual; I’m not sure it’s worth doing anything like that for a modern distribution. The POSIX specification itself is now over 3,000 pages, and that “only” documents a common core, with 160 commands (many of which are optional) and a few shell built-ins ; modern distributions contain thousands of commands, which no single person can learn exhaustively. The last full system manual I read cover to cover was the Coherent manual... If you want to experience V1 Unix, check out Jim Huang’s V1 repository : you’ll find source code, documentation and instructions to build and run a V1-2 hybrid using SIMH ’s PDP-11 simulation. (Thanks to Guy for the suggestion.) Warren Toomey’s PDP-7 Unix repository is also interesting. (Thanks as always to Stéphane for his multiple suggestions.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/422183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4930/" ] }
422,188
I have ssh'd into an ubuntu AWS box via terminal on mac. I have successfully set up the process I want to run in the box. How can I exit out of terminal without killing the running process? I can not run the below command because terminal is running the script and not allowing me to even copy and paste the below command into terminal:

nohup long-running-process &

Thank you in advance. P.S. New to linux and terminal on mac
Personally I use screen to get in/out of the system while keeping the processes running.

$ sudo apt install screen

To create a new screen:

$ screen -S screen_name

Then do something in your screen, for example running a program, editing files, downloading files with wget, etc. Later if you want to exit the terminal without killing the running process, simply press Ctrl+A+D . The process will keep running in the background inside the screen. To reconnect to the screen:

$ screen -R screen_name
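If you prefer tmux over screen, the equivalent workflow is (a sketch; the session name is arbitrary):

$ sudo apt install tmux
$ tmux new -s mysession      # start a named session and launch your process inside it
(detach with Ctrl+B then D; the process keeps running)
$ tmux attach -t mysession   # reattach later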
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274443/" ] }
422,198
Some file copying programs like rsync and curl have the ability to resume failed transfers/copies. Noting that there can be many causes of these failures, in some cases the program can do "cleanup", in some cases the program can't. When these programs resume, they seem to just calculate the size of the file/data that was transferred successfully and just start reading the next byte from the source and appending it onto the file fragment. e.g. the size of the file fragment that "made it" to the destination is 1378 bytes, so they just start reading from byte 1379 on the original and adding to the fragment. My question is, knowing that bytes are made up of bits and not all files have their data segmented in clean byte-sized chunks, how do these programs know that the point they have chosen to start adding data at is correct? When writing the destination file, is some kind of buffering or "transaction" mechanism similar to SQL databases occurring, either at the program, kernel or filesystem level, to ensure that only clean, well formed bytes make it to the underlying block device? Or do the programs assume the latest byte would be potentially incomplete, so they delete it on the assumption it's bad, recopy the byte and start the appending from there? Knowing that not all data is represented as bytes, these guesses seem incorrect. When these programs "resume", how do they know they are starting at the right place?
For clarity's sake - the real mechanics is more complicated to give even better security - you can imagine the write-to-disk operation like this:

- application writes bytes (1)
- the kernel (and/or the file system IOSS) buffers them
- once the buffer is full, it gets flushed to the file system:
  - the block is allocated (2)
  - the block is written (3)
  - the file and block information is updated (4)

If the process gets interrupted at (1), you don't get anything on the disk, the file is intact and truncated at the previous block. You sent 5000 bytes, only 4096 are on the disk, you restart transfer at offset 4096. If at (2), nothing happens except in memory. Same as (1). If at (3), the data is written but nobody remembers about it . You sent 9000 bytes, 4096 got written, 4096 got written and lost , the rest just got lost. Transfer resumes at offset 4096. If at (4), the data should now have been committed on disk. The next bytes in the stream may be lost. You sent 9000 bytes, 8192 get written, the rest is lost, transfer resumes at offset 8192. This is a simplified take. For example, each "logical" write in stages 3-4 is not "atomic", but gives rise to another sequence (let's number it #5) whereby the block, subdivided into sub-blocks suitable for the destination device (e.g. hard disk) is sent to the device's host controller, which also has a caching mechanism , and finally stored on the magnetic platter. This sub-sequence is not always completely under the system's control, so having sent data to the hard disk is not a guarantee that it has been actually written and will be readable back. Several file systems implement journaling , to make sure that the most vulnerable point, (4), is not actually vulnerable, by writing meta-data in, you guessed it, transactions that will work consistently whatever happens in stage (5). If the system gets reset in the middle of a transaction, it can resume its way to the nearest intact checkpoint. Data written is still lost, same as case (1), but resumption will take care of that. No information actually gets lost.
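This size-based resumption is exactly what the common tools expose (illustrative commands, not part of the answer above):

curl -C - -o file https://example.com/bigfile   # continue at the current size of ./file
rsync --partial sourcehost:bigfile .            # keep partial files so a later run can resume them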
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/422198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
422,207
Background: I need to clone ext4 partitions from an eMMC using uboot (and if neccessary custom bare metal code). I copied the whole thing using mmc read and found that most of the partition is empty, but there are some blocks of data like inode tables spread across the partition. This would mean I need to copy the whole partition (which is too slow, I need to do this a lot) or identify what parts of the partition are relevant. Most similar Q&A to this problem suggest to use dd creating a sparse image or piping to gzip , but I have no operating system running, so I need to understand the file system layout. Can I use those bitmap blocks to identify what is used and what is free? Documentation of ext4 seems to refer the linux kernel code as soon as it comes to details. Preferably I'd do it with uboot code, but I could as well write some bare metal code I can execute from uboot. One more border condition: The targets to where the partition gets clone are not empty, so if there are blocks of only zeros on the origin, which are required to be zero, I need to overwrite those blocks with zeros on the target.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/422207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216004/" ] }
422,213
I'm looking for a way to simply print the last X lines from a systemctl service in Debian. I would like to install this code into a script, which uses the printed, latest log entries. I've found this post but I wasn't able to modify it for my purposes. Currently I'm using this code, which just gives me a small snippet of the log files:

journalctl --unit=my.service --since "1 hour ago" -p err

To give an example of what the result should look like, simply type in the command above for any service and scroll until the end of the log. Then copy the last 300 lines starting from the bottom. My idea is to use egrep (e.g. egrep -m 700 . ) but I have had no luck so far.
journalctl --unit=my.service -n 100 --no-pager
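Adapted to the question (only error-level messages, last 300 lines; the unit name is the one from the question):

journalctl --unit=my.service -p err -n 300 --no-pager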
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/422213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157946/" ] }
422,257
Is there a way to list all files and directories including . and .. but without listing hidden files in this folder?
First ensure that dotglob is off: shopt -u dotglob Then just ask ls for those two directories and everything else: ls -ld . .. *
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124985/" ] }
422,306
I am currently doing with curl: curl https://example.com/file -o file If I already have a file called file in my directory, it will be overwritten by this command. Instead, I would like to return an error message telling that the file already exists. Is it possible to do so only using curl? I have not seen a flag to do it and I am not running the command in a bash script thus I cannot use a comparison operator before running the command.
Testing for the existence of a name in a directory may be done with the -e test: if [ -e "filename" ]; then echo 'File already exists' >&2 exit 1ficurl -o "filename" "URL" If you don't want to terminate the script at that point: if [ -e "filename" ]; then echo 'File already exists' >&2else curl -o "filename" "URL"fi The test will be true if the name exists, regardless of whether the name is that of a regular file, directory, named pipe or other type of filesystem object. See man test on your system.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121835/" ] }
422,348
I want to search for lines with 'word1' XOR 'word2' in a text file. So it should output lines with word1 or word2, but not the lines with both of these words. I wanted to use XOR but I do not know how to write that on the linux command line. I tried:

grep 'word1\|word2' text.txt
grep word1 word2 text.txt
grep word1 text.txt | grep word2
grep 'word1\^word2' text.txt

and many more, but could not get success.
With GNU awk :

$ printf '%s\n' {foo,bar}{bar,foo} neither | gawk 'xor(/foo/,/bar/)'
foofoo
barbar

Or portably:

awk '((/foo/) + (/bar/)) % 2'

With a grep with support for -P (PCRE):

grep -P '^((?=.*foo)(?!.*bar)|(?=.*bar)(?!.*foo))'

With sed :

sed '
  /foo/{
    /bar/d
    b
  }
  /bar/!d'

If you want to consider whole words only (that there is neither foo nor bar in foobar or barbar for instance), you'd need to decide how those words are delimited. If it's by any character other than letters, digits and underscore like the -w option of many grep implementations does, then you'd change those to:

gawk 'xor(/\<foo\>/,/\<bar\>/)'
awk '((/(^|[^[:alnum:]_])foo([^[:alnum:]_]|$)/) + \
     (/(^|[^[:alnum:]_])bar([^[:alnum:]_]|$)/)) % 2'
grep -P '^((?=.*\bfoo\b)(?!.*\bbar\b)|(?=.*\bbar\b)(?!.*\bfoo\b))'

For sed that becomes a bit complicated unless you have a sed implementation like GNU sed that supports \< / \> as word boundaries like GNU awk does.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274576/" ] }
422,392
I need to delete all folders inside a folder using a daily script. The folder for that day needs to be left. Folder 'myfolder' has 3 subfolders: 'test1', 'test2' and 'test3'. I need to delete all except 'test2'. I am trying to match the exact name here:

find /home/myfolder -type d ! -name 'test2' | xargs rm -rf

OR

find /home/myfolder -type d ! -name 'test2' -delete

This command always tries to delete the main folder 'myfolder' too! Is there a way to avoid this?
This will delete all folders inside ./myfolder except that ./myfolder/test2 and all its contents will be preserved: find ./myfolder -mindepth 1 ! -regex '^./myfolder/test2\(/.*\)?' -delete How it works find starts a find command. ./myfolder tells find to start with the directory ./myfolder and its contents. -mindepth 1 not to match ./myfolder itself, just the files and directories under it. ! -regex '^./myfolder/test2\(/.*\)?' tells find to exclude ( ! ) any file or directory matching the regular expression ^./myfolder/test2\(/.*\)? . ^ matches the start of the path name. The expression (/.*\)? matches either (a) a slash followed by anything or (b) nothing at all. -delete tells find to delete the matching (that is, non-excluded) files. Example Consider a directory structure that looks like; $ find ./myfolder./myfolder./myfolder/test1./myfolder/test1/dir1./myfolder/test1/dir1/test2./myfolder/test1/dir1/test2/file4./myfolder/test1/file1./myfolder/test3./myfolder/test3/file3./myfolder/test2./myfolder/test2/file2./myfolder/test2/dir2 We can run the find command (without -delete ) to see what it matches: $ find ./myfolder -mindepth 1 ! -regex '^./myfolder/test2\(/.*\)?'./myfolder/test1./myfolder/test1/dir1./myfolder/test1/dir1/test2./myfolder/test1/dir1/test2/file4./myfolder/test1/file1./myfolder/test3./myfolder/test3/file3 We can verify that this worked by looking at the files which remain: $ find ./myfolder./myfolder./myfolder/test2./myfolder/test2/file2./myfolder/test2/dir2
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/422392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271206/" ] }
422,405
I want an alias on my local machine that will ssh to the target system, execute sudo using the password stored on my local machine, but leave the shell open on the remote system. The reason is I'm lazy and every time I log into a server I don't want to type my very long password. I'm aware that this is not the safest of practices. Right now it works if I do the following: ssh -q -t $1 "echo $mypword|base64 -d|sudo -S ls; bash -l" $1 being the host name of the remote system. mypword is my encoded password stored on my local system. This works and leaves my shell open. I can then do anything with sudo because it is now cached for that shell. The problem I have is if you do a ps and grep for my account you will see the encoded string containing the password in the process list. Can't have that. Is there a way to accomplish this without having the password showing in the process list? I have tried: echo $mypword|ssh -q -t $1 "base64 -d|sudo -S ls -l /root;bash -l" The ls goes off but the shell does not remain open.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/422405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179761/" ] }
422,411
Command

pamtester -v auth pknopf authenticate
pamtester: invoking pam_start(auth, pknopf, ...)
pamtester: performing operation - authenticate
Password:
pamtester: Authentication failure

journalctl

Feb 06 13:22:17 PAULS-ARCH unix_chkpwd[31998]: check pass; user unknown
Feb 06 13:22:17 PAULS-ARCH unix_chkpwd[31998]: password check failed for user (pknopf)
Feb 06 13:22:17 PAULS-ARCH pamtester[31997]: pam_unix(auth:auth): authentication failure; logname= uid=1000 euid=1000 tty= ruser= rhost= user=pknopf

As it stands right now, every lock screen will prevent me from "unlocking" (KDE lock screen, i3lock , etc). If I start i3lock as sudo , I can then properly type in the root password to unlock the screen. However, if I run it as a normal user, I can't use the normal user or root password to unlock. Here is my PAM config for i3lock .

#
# PAM configuration file for the i3lock screen locker. By default, it includes
# the 'system-auth' configuration file (see /etc/pam.d/login)
#
auth include system-auth

Running ls -l /etc/passwd /etc/shadow /etc/group shows

-rw-r--r-- 1 root root 803 Feb 6 14:16 /etc/group
-rw-r--r-- 1 root root 1005 Feb 6 14:16 /etc/passwd
-rw------- 1 root root 713 Feb 6 14:16 /etc/shadow

This is a fresh install of Arch, so I don't think the configuration is too wonky. What should I be looking for to debug this? Running ls -l /sbin/unix_chkpwd shows

-rwxr-xr-x 1 root root 31392 Jun 9 2016 /sbin/unix_chkpwd
Your system installation appears to be broken. For some reason, the file /sbin/unix_chkpwd has lost the privilege bits I would expect to see. Fix the permissions by running the following command as root: chmod u+s /sbin/unix_chkpwd And verify the permissions are now as follows (see the s bit in the user permissions): -rwsr-xr-x 1 root root 31392 Jun 9 2016 /sbin/unix_chkpwd On my Raspbian distribution the permissions are set slightly differently (and more restrictively). If the change described above does not work, carefully change the permissions on these two files and see if this helps (the group name does not matter too much as long as it's the same in both cases): -rw-r----- 1 root shadow 1354 Dec 6 13:02 /etc/shadow-rwxr-sr-x 1 root shadow 30424 Mar 27 2017 /sbin/unix_chkpwd
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87529/" ] }
422,426
I am running a headless raspberry pi system that will eventually be a generative music player. I am now trying to get jackd to run at startup but am not entirely sure how. I can run the command

jackd -R -dalsa

and jack runs fine. However this stops me from being able to run any more commands in the console, with the last few lines being

ALSA: final selected sample format for capture: 32bit integer little-endian
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 32bit integer little-endian
ALSA: use 2 periods for playback

I have put the jackd in an init.d script also, however the same problem appears. What I would like is a way for jackd to start up in a separate process, or a way for it to hand "control" back to other startup scripts or the user. My problem is different from the commented one in that I would like to start a daemon (I did not know this before, but now it seems like the sensible option).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274637/" ] }
422,454
I'm experiencing system freezes and looking in the journal I see kernel ( 4.14.15-1-MANJARO ) errors such as:

kernel: tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x201] vs fed40080 f80
kernel: tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x201] vs fed40080 f80

(Yes the message is repeated, with exactly the same timestamp.) A bit later, I get:

tpm tpm0: A TPM error (379) occurred attempting get random

I'm running the latest version of firmware (v3.05) for my Asus UX330. My kernel is:

4.16.0-1-MANJARO #1 SMP PREEMPT Wed Mar 21 09:02:49 UTC 2018 x86_64 GNU/Linux

Is there any workaround besides praying for an updated UEFI / BIOS firmware from Asus?
I emailed Asus support and they say that the laptop only supports Windows. You could consider disabling TPM if it is not being used - please comment if you work out how to do this.
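One possible direction (an untested sketch; it assumes the messages come from the kernel's TPM drivers being loaded as modules and that the firmware setup offers no switch for the security device) is to blacklist the TPM modules so the device is never probed:

# /etc/modprobe.d/disable-tpm.conf  (hypothetical file name)
blacklist tpm_crb
blacklist tpm_tis
blacklist tpm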
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
422,470
I am running a Thinkpad T470. I often dock this, meaning I have 3 mouse input devices (external Microsoft mouse, Trackpoint, and Trackpad). I'm running Debian 10 (Testing - Buster) using Gnome under Wayland. In Gnome settings (below) I can change the sensitivity of the "Mouse" by configuring "Mouse Sensitivity". However, this changes the sensitivity of both the external mouse and the trackpoint. I like to have my trackpoint on low sensitivity, and the mouse on high. Under Xorg, I could make a simple script to set device specific configuration settings. How would I achieve this in Wayland?
Wayland expects all mice motion to have been normalised , so there is only one global changeable configuration. You may have to edit your hwdb entry for one of your devices to correct it if it is wrong, or just make it fit in with your preferences. Alternatively, you may be able to use libevdev-tweak-device from the package libevdev-tools (or libevdev-utils ). It says it can alter the definition of an evdev device dynamically. You would do something like

sudo libevdev-tweak-device --abs ABS_X --res 99 /dev/input/event99
sudo libevdev-tweak-device --abs ABS_Y --res 99 /dev/input/event99

where you need to replace the 99 by the resolution you want, and event99 by the input device. You can find the input device from, eg:

$ ls -l /dev/input/by-id/
lrwxrwxrwx ... usb-Logitech_USB_Optical_Mouse-event-mouse -> ../event5

To find the current resolution try sudo evemu-describe from the evemu-tools package, or use mouse-dpi-tool to try to choose a good value.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145854/" ] }
422,499
I recently came up to an easy fix for a crontab logging issue and I am wondering what are the pro's and con's of using this specific fix (running a script with a "login shell flag"), as: #!/bin/bash -l
[The following assumes that your unspecified "logging issue" was related to missing environment setup, normally inherited from your profile.] The -l option tells bash to read all the various "profile" scripts, from /etc and from your home directory. Bash normally only does this for interactive sessions (in which bash is run without any command line parameters). Normal scripts have no business reading the profile; they're supposed to run in the environment they were given. That said, you might want to do this for personal scripts, maybe, if they're tightly bound to your environment and you plan to run them outside of a normal session. A crontab is one example of running a script outside your session, so yes, go do it! If the script is purely for the use of the crontab then adding -l to the shebang is fine. If you might use the script other ways then consider fixing the environment problem in the crontab itself: 0 * * * * bash -l hourly.sh
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48428/" ] }
422,548
 What command do I have to use to delete all directories that begin with graphene-80 under the directory /tmp ? What can I add to the rm command as an option?
To delete the directories matching the pattern graphene-80* directly under /tmp , use rm -rf /tmp/graphene-80*/ Here, the trailing / ensures that only directories whose names match the graphene-80* pattern are deleted (or symbolic links to directories), and not files etc. To find the matching directories elsewhere under /tmp and delete them wherever they may be, use find /tmp -type d -name 'graphene-80*' -prune -exec rm -rf {} + To additionally see the names of the directories as they are deleted, insert -print before -exec . The two tests -type d and -name 'graphene-80*' tests for directories with the names that we're looking for. The -prune removes the found directory from the search path (we don't want to look inside these directories as they are being deleted), and the -exec , finally, does the actual removal by means of calling rm . Addressing question in comments: "How would you delete all files within that directory (not the directory itself)?" One of the below: Recreate the directories after deleting them: find /tmp -type d -name 'graphene-80*' -prune \ -exec rm -rf {} \; -exec mkdir {} + or more efficiently, doing both operations in batches, find /tmp -type d -name 'graphene-80*' -prune \ -exec sh -c 'rm -rf "$@"; mkdir "$@"' sh {} + The benefit of this is the simplicity of the command (compared to the command below). Use a small in-line shell script to only delete the names (files and directories) within the found directories: find /tmp -type d -name 'graphene-80*' -prune -exec bash -O nullglob -O dotglob -c ' for dirpath do rm -rf "$dirpath"/* done' bash {} + This executes a small in-line bash script with batches of found directory pathnames. The shell is started with the nullglob and dotglob shell options set. The nullglob option is used for expanding non-matching shell patterns to nothing rather than leaving them unexpanded, and the dotglob shell option is used for matching hidden names ( * does not match hidden names by default). The script iterates over the given directory paths and empties each directory recursively. The benefit of doing it this way is that it adequately deals with found directory paths that happen to be mount points. The downside is the complexity of the command and the fact that I had to write all this text to describe what it's doing (which means it will potentially be difficult for the next person to take over maintenance of the script).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274758/" ] }
422,557
I have this while loop and here-document combo which I run in Bash 4.3.48(1) and I don't understand its logic at all. while read file; do source ~/unwe/"$file"done <<-EOF x.sh y.shEOF My question is comprised of these parts: What does the read do here (I always use read to declare a variable and assign its value interactively , but I'm missing what it's supposed to do here). What is the meaning of while read ? Where does the concept of while come in here? If the here-document itself comes after the loop, how is it even affected by the loop? I mean, it comes after done , and not inside the loop, so what's the actual association between these two structures? Why does this fail? while read file; do source ~/unwe/"$file" done <<-EOF x.sh y.shEOF I mean, done is done ... So why does it matter if done <<-EOF is on the same line as the loop? If I recall correctly, I did have a case in which a for loop was one-liner and still worked.
The read command reads from its standard input stream and assigns what's read to the variable file (it's a bit more compicated than that, see long discussion here ). The standard input stream is coming from the here-document redirected into the loop after done . If not given data from anywhere , it will read from the terminal, interactively. In this case though, the shell has arranged to connect its input stream to the here-document. while read will cause the loop to iterate until the read command returns a non-zero exit status. This will happen if there are any errors, or (most commonly) when there is no more data to be read (its input stream is in an end-of-file state). The convention is that any utility that wishes to signal an error or "false" or "no" to the calling shell does so by returning a non-zero exit status. A zero exit status signals "true" or "yes" or "no error". This status, would you wish to inspect it, is available in $? (only from the last executed utility). The exit status may be used in if statements and while loops or anywhere where a test is required. For example if grep -q 'pattern' file; then ...; fi A here-document is a form of redirection . In this case, it's a redirection into the loop. Anything inside the loop could read from it but in this case it's only the read command that does. Do read up on here-documents. If the input was coming from an ordinary file, the last line would have been done <filename Seeing the loop as one single command may make this more intuitive: while ...; do ...; done <filename which is one case of somecommand <filename Some shells also supports "here-strings" with <<<"string" : cat <<<"This is the here-string" DavidFoerster points out that if any of the two scripts x.sh and y.sh reads from standard input, without explicitly being given data to read from a file or from elsewhere, the data read will actually come from the here-document. With a x.sh that contains only read a , this would make the variable a contain the string y.sh , and the y.sh script would never run. This is due to the fact that the standard input is redirected for all commands in the while loop (and also "inherited" by any invoked script or command) and the second line is "consumed" by x.sh before the while loop's read can read it. If this behaviour is unwanted, it can be avoided, but it's a bit tricky . It fails because there is no ; or newline before done . Without ; or newline before done , the word done will be taken as an argument of source , and the loop will additionally not be properly closed (this is a syntax error). It is almost true that any ; may be replaced by a newline (at least when it's a command delimiter). It signals the end of a command, as does | , & , && and || (and probably others that I have forgotten).
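 As a rough sketch of the workaround mentioned above (an addition, not part of the original answer), you can keep the here-document away from the sourced scripts' standard input by attaching it to another file descriptor, e.g. fd 3:
 while read -r file <&3; do
     . ~/unwe/"$file"
 done 3<<EOF
 x.sh
 y.sh
 EOF
 Here read takes its input from fd 3, so anything that x.sh or y.sh reads from stdin still comes from wherever stdin originally pointed.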
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
422,685
I made an alias ff and sourced it from ~/.zsh/aliases.zsh . The aliases run well themselves: alias ffff='firefox --safe-mode' and it runs as expected. But when I try to run it under gdb I get: > gdb ffGNU gdb (Debian 7.12-6+b1) 7.12.0.20161007-gitCopyright (C) 2016 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>...For help, type "help".Type "apropos word" to search for commands related to "word"...ff: No such file or directory.(gdb) quit I tried using gdb firefox --safe-mode but that wouldn't run. Can somebody identify what is wrong?
Aliases are a feature of the shell. Defining an alias creates a new shell command name. It's recognized only by the shell, and only when it appears as a command name. For example, if you type > ff at a shell prompt, it will invoke your alias, but if you type > echo ff the ff is just an argument, not a command. (At least in bash, you can play some tricks if the alias definition ends with a space. See Stéphane Chazelas's answer for a possible solution if you're determined to use shell aliases.) You typed > gdb ff so the shell invoked gdb , passing it the string ff as an argument. You can pass arguments to the debugged program via the gdb command line, but you have to use the --args option. For example: > gdb firefox --safe-mode tries (and fails) to treat --safe-mode as an option to gdb . To run the command with an argument, you can do it manually: > gdb firefox...(gdb) run --safe-mode or, as thrig's answer reminds me, you can use --args : > gdb --args firefox --safe-mode...(gdb) run (The first argument following --args is the command name; all remaining arguments are passed to the invoked command.) It's possible to extract the arguments from a shell alias, but I'd recommend just defining a separate alias: alias ff='firefox --safe-mode'alias gdbff='gdb --args firefox --safe-mode' Or, better, use shell functions, which are much more versatile. The bash manual says: For almost every purpose, shell functions are preferred over aliases.
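 For completeness, here is what the function-based approach could look like (a sketch, not from the original answer); unlike aliases, functions also work from scripts and pass along extra arguments:
 ff()    { firefox --safe-mode "$@"; }
 gdbff() { gdb --args firefox --safe-mode "$@"; }
 After sourcing these, gdbff drops you at the (gdb) prompt where you type run as usual.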
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
422,698
The question is pretty straight forward. What I would have used under X [xdotool] obviously does not work moving forward, and no obvious new solutions have arisen given the relative new adoption of wayland. Solutions which require programming are acceptable.
You can use uinput ( linux/uinput.h ). It works across X as well as Wayland. The documentation page above has an example that includes creating a virtual device that behaves as a mouse: #include <linux/uinput.h>void emit(int fd, int type, int code, int val){ struct input_event ie; ie.type = type; ie.code = code; ie.value = val; /* timestamp values below are ignored */ ie.time.tv_sec = 0; ie.time.tv_usec = 0; write(fd, &ie, sizeof(ie));}int main(void){ struct uinput_setup usetup; int i = 50; int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK); /* enable mouse button left and relative events */ ioctl(fd, UI_SET_EVBIT, EV_KEY); ioctl(fd, UI_SET_KEYBIT, BTN_LEFT); ioctl(fd, UI_SET_EVBIT, EV_REL); ioctl(fd, UI_SET_RELBIT, REL_X); ioctl(fd, UI_SET_RELBIT, REL_Y); memset(&usetup, 0, sizeof(usetup)); usetup.id.bustype = BUS_USB; usetup.id.vendor = 0x1234; /* sample vendor */ usetup.id.product = 0x5678; /* sample product */ strcpy(usetup.name, "Example device"); ioctl(fd, UI_DEV_SETUP, &usetup); ioctl(fd, UI_DEV_CREATE); /* * On UI_DEV_CREATE the kernel will create the device node for this * device. We are inserting a pause here so that userspace has time * to detect, initialize the new device, and can start listening to * the event, otherwise it will not notice the event we are about * to send. This pause is only needed in our example code! */ sleep(1); /* Move the mouse diagonally, 5 units per axis */ while (i--) { emit(fd, EV_REL, REL_X, 5); emit(fd, EV_REL, REL_Y, 5); emit(fd, EV_SYN, SYN_REPORT, 0); usleep(15000); } /* * Give userspace some time to read the events before we destroy the * device with UI_DEV_DESTOY. */ sleep(1); ioctl(fd, UI_DEV_DESTROY); close(fd); return 0;}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79280/" ] }
422,725
$ ssh mykey.pem [email protected] -vOpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016debug1: Reading configuration data /c/Users/works/.ssh/configdebug1: Reading configuration data /etc/ssh/ssh_configdebug1: Connecting to 10.128.2.7 [10.128.2.7] port 22.debug1: Connection established.debug1: key_load_public: No such file or directorydebug1: identity file /c/Users/works/Documents/interface setup/ifx_key.pem type -1debug1: key_load_public: No such file or directorydebug1: identity file /c/Users/works/Documents/interface setup/ifx_key.pem-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.3debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 pat OpenSSH_6.6.1* compat 0x04000000debug1: Authenticating to 10.128.2.7:22 as 'ubuntu'debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: algorithm: [email protected]: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: nonedebug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: nonedebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:R+d2ELtCJyoeyHMfivCsGKk98GOIfxxsTEPAFmKkSOIdebug1: Host '10.128.2.7' is known and matches the ECDSA host key.debug1: Found key in /c/Users/works/.ssh/known_hosts:1debug1: rekey after 134217728 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: rekey after 134217728 blocksdebug1: SSH2_MSG_NEWKEYS receiveddebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickeydebug1: Next authentication method: publickeydebug1: Trying private key: /c/Users/works/Documents/interface setup/ifx_key.pemdebug1: Authentications that can continue: publickeydebug1: No more authentication methods to try.Permission denied (publickey). I used to be able to ssh into this machine until yesterday.Is there a way to login into it?
You can use uinput ( linux/uinput.h ). It works across X as well as Wayland. The documentation page above has an example that includes creating a virtual device that behaves as a mouse: #include <linux/uinput.h>void emit(int fd, int type, int code, int val){ struct input_event ie; ie.type = type; ie.code = code; ie.value = val; /* timestamp values below are ignored */ ie.time.tv_sec = 0; ie.time.tv_usec = 0; write(fd, &ie, sizeof(ie));}int main(void){ struct uinput_setup usetup; int i = 50; int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK); /* enable mouse button left and relative events */ ioctl(fd, UI_SET_EVBIT, EV_KEY); ioctl(fd, UI_SET_KEYBIT, BTN_LEFT); ioctl(fd, UI_SET_EVBIT, EV_REL); ioctl(fd, UI_SET_RELBIT, REL_X); ioctl(fd, UI_SET_RELBIT, REL_Y); memset(&usetup, 0, sizeof(usetup)); usetup.id.bustype = BUS_USB; usetup.id.vendor = 0x1234; /* sample vendor */ usetup.id.product = 0x5678; /* sample product */ strcpy(usetup.name, "Example device"); ioctl(fd, UI_DEV_SETUP, &usetup); ioctl(fd, UI_DEV_CREATE); /* * On UI_DEV_CREATE the kernel will create the device node for this * device. We are inserting a pause here so that userspace has time * to detect, initialize the new device, and can start listening to * the event, otherwise it will not notice the event we are about * to send. This pause is only needed in our example code! */ sleep(1); /* Move the mouse diagonally, 5 units per axis */ while (i--) { emit(fd, EV_REL, REL_X, 5); emit(fd, EV_REL, REL_Y, 5); emit(fd, EV_SYN, SYN_REPORT, 0); usleep(15000); } /* * Give userspace some time to read the events before we destroy the * device with UI_DEV_DESTOY. */ sleep(1); ioctl(fd, UI_DEV_DESTROY); close(fd); return 0;}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164522/" ] }
422,761
 When I consecutively issue multiple commands which create a new shell, e.g. zsh, screen, su user2, mc, sudo su, mc, is there a command to show the “call stack”, i.e. a list of the commands which have not finished but created a new shell? I might have issued some other commands among them, so the shell history won’t help. Moreover, I might have switched users and shells as shown in the above example. I know I can find this information using the tree view in htop but can I get it directly using a command?
You can use pstree (from PSmisc ) for this: pstree -s $$ The -s option shows the parents of the specified process identifier, and $$ is the current process’s identifier. pstree also shows the children of the specified process identifier, so you’ll end up with something along the lines of systemd───systemd───gnome-terminal-───zsh───pstree (with screen , sudo , su , mc etc. in your case).
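 If pstree is not installed, a rough fallback (my own sketch, not part of the original answer) is to walk the PPID chain with ps, which is available on most systems:
 p=$$
 while [ "$p" -gt 1 ]; do
     ps -o pid=,comm= -p "$p"
     p=$(ps -o ppid= -p "$p" | tr -d ' ')
 done
 This prints the current shell and each ancestor process up to (but not including) PID 1.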
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54675/" ] }
422,783
 I'm facing an issue where the OR operator doesn't work for $1. E.g.: if [[ "$1" != '1' || '2' ]] then echo "BAD" else echo "OK" fi When I run this test, no matter what $1 is, BAD always appears. How can I restrict $1 to be 1 or 2 only?
With standard sh syntax (so not only recognised by bash but all other POSIX compatible shells): case $1 in (1 | 2) echo OK;; (*) echo BAD;;esac Or: if [ "$1" = 1 ] || [ "$1" = 2 ]; then echo OKelse echo BADfi (note that it's a byte-to-byte comparison of strings. 01, 1e0, 1.0, 20/20, 0x1 would also be considered as BAD even though numerically they could be regarded as being the same as 1 ). Note that the = / != operators in bash 's [[...]] construct (copied from ksh ) are actually pattern matching operators as opposed to equality operators, so in this special case where you want to match a single character, you could also use [[ $1 != [12] ]] like you could do case $1 in ([12]) in standard sh syntax.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267199/" ] }
422,825
I lost my private.key file, which is necessary for the proper operation of my SSL certificate. I have only the certificate.crt file. Is it possible to generate private.key file from certificate.crt file? From what I've read, I do not think so. If I'm wrong, I don't know how to do it. What can I do to submit the certificate.crt so that it works? Apache requires all of: certificate.crt intermediate.pem private.key files; for example: SSLCertificateFile /etc/ssl/crt/certificate.crtSSLCertificateChainFile /etc/ssl/crt/intermediate.pemSSLCertificateKeyFile /etc/ssl/crt/private.key
No, it is not possible to generate the private.key file from the certificate.crt file. You will need to generate a new key and a new certificate, if the below does not apply to you. You may ask your certificate provider if they're willing to re-generate your certificate, a few companies offer the possibility.
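 If you do find a stray key file and want to check whether it is the lost one, a quick sketch (assuming an RSA key; candidate.key is a placeholder name) is to compare the moduli of the certificate and the key:
 openssl x509 -noout -modulus -in certificate.crt | openssl md5
 openssl rsa -noout -modulus -in candidate.key | openssl md5
 If the two digests are identical, that key belongs to the certificate.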
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274989/" ] }
422,826
I set up a pipe returning the name of a package I'd like to install using apt-get : ... | xargs -I _ sudo apt install _ However, apt-get can't read from stdin in this case and exits with: Do you want to continue? [Y/n] Abort. I know about the -y flag to install the package without user confirmation, but I'd like to actually see the confirmation. Is there a way to forward the package name to apt-get while still allowing it to read from stdin? Putting apt-get on the left side of the whole command is not an option since I don't want apt-get to be executed if an ealier command in the pipe is aborted, using the set -o pipefail option.
No, it is not possible to generate the private.key file from the certificate.crt file. You will need to generate a new key and a new certificate, if the below does not apply to you. You may ask your certificate provider if they're willing to re-generate your certificate, a few companies offer the possibility.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99676/" ] }
422,841
 The regex is -?([0-9]|([1-9][0-9])) . The number is -2231 and it's being matched. From my understanding, it should only match single-digit or double-digit numbers. Why is this number matched by this regex?
The regular expression is not anchored, so it's free to match the first 1 or two numbers and "succeed", leaving the trailing numbers (successfully) unmatched. If you require 1 or 2 digit numbers, anchor the regex: '^-?([0-9]|([1-9][0-9]))$' Some examples: $ seq -100 -99 | grep -E '^-?([0-9]|[1-9][0-9])$'-99$ seq 99 100 | grep -E '^-?([0-9]|[1-9][0-9])$'99$ seq -9 9 | grep -E '^-?([0-9]|[1-9][0-9])$'-9-8-7-6-5-4-3-2-10123456789$ seq -2231 -100 | grep -E '^-?([0-9]|[1-9][0-9])$'(empty)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274999/" ] }
422,912
I am making a nice presentation of ARM assembly code execution and I would need GDB to step the code every 1 second infinitely long (well until I press CTRL + C ). Has anyone got solution? I don't want to keep on standing next to the keyboard and stepping the program when visitors come visit my stall.
Gdb's CLI supports a while loop. There's no builtin sleep command, but you can either call out to the shell to run the sleep program, or use gdb's builtin python interpreter, if it has one. It's interruptible with Control-C. Method 1: (gdb) while (1) > step > shell sleep 1 > end Method 2: (gdb) python import time (gdb) while (1) > step > python time.sleep(1) > end Method 3 (define a macro): (gdb) define runslowly Type commands for definition of "runslowly".End with a line saying just "end".> python import time > while (1) > step > python time.sleep(1) > end > end (gdb) document runslowly Type documentation for "runslowly".End with a line saying just "end".> step a line at a time, every 1 second > end (gdb) runslowly
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/422912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9135/" ] }
422,922
I was looking through the linux manual and on this page , the manual for find, specifically in the section about the test "-size", it states (erroneously) that a kilobyte is 1024 bytes . This is, as far as I learned, false. A kilobyte is 1000 bytes, and a kibibyte is 1024 bytes. So, what units does it actually use? Does it say "kilobytes" and mean "1000 bytes", or does it mean "1024 bytes" and incorrectly wrote "kilobytes"?
Well spotted! The explicit explanation is correct. 1k means kibibytes (1024 bytes). I tested it by creating a range of sizes and seeing which were identified. $ for i in 999 1000 1001 1023 1024 1025; do dd if=/dev/urandom of=$i bs=1 count=$i; done$ find . -size 1k../1024./1023./1001./1000./999 You can see that the 1024 bytes file is found (and not the 1025 bytes file). (You might think of filing a bug report, if you like.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144316/" ] }
422,949
So I have 2 terminals open in front of me; /dev/pts/1 - 'the controller' /dev/pts/2 - 'the receiver' I am currently using ttyecho to execute commands in /pts2 from /pts1. I can list the screens - ttyecho -n /dev/pts/2 screen -ls fine from /pts1 and see the results in /pts2. I can attach to a screen ttyecho -n /dev/pts/2 screen -x [blah] from /pts1 on /pts2 fine.. But what I can't do, is when attached to a screen then detach from it. So if /dev/pts/2 is then inside a screen, I am trying to detach from it by executing a command using ttyecho from /dev/pts1 I've tried sending... ttyecho -n /dev/pts/2 ^a+d ttyecho -n /dev/pts/2 screen -d -r ttyecho -n /dev/pts/2 screen -D -RRttyecho -n /dev/pts/2 screen -d -rttyecho -n /dev/pts/2 screen -DRittyecho -n /dev/pts/2 Ctrl+a+dttyecho -n /dev/pts/2 Ctrl+a dttyecho -n /dev/pts/2 CTRL + Attyecho -n /dev/pts/2 control+a So I guess what I need is either: A command I can send that will detach the screen OR Someway to send some kind of pseudo keyboard commands via ttyecho to that other screen to detach it. Any help most appreciated.
You can do $ screen -ls This will list all the screen sessions like this. There are screens on: 8365.pts-6.vm2 (Attached) 7317.pts-1.vm2 (Attached)2 Sockets in /var/run/screen/S-root. Then you can detach any screen session with the help of screen id. For eg: $ screen -d 8365[8365.pts-6.vm2 detached.]
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/422949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275094/" ] }
423,006
 I want to assign the path and file name to a variable: /path/to/myfile/file.txt For example MYFILE=$(pwd)$(basename) How can I do it?
 To answer the question as it is stated: This is a simple string concatenation.
 somedirpath='/some/path'    # for example $PWD or $(pwd)
 somefilepath='/the/path/to/file.txt'
 newfilepath="$somedirpath"/"$( basename "$somefilepath" )"
 You most likely would want to include a / between the two path elements when concatenating the strings, and basename takes an argument which is a path (this was missing in the question). Reading your other answer, it looks like you are looking for the bash script path and name. This is available in BASH_SOURCE , which is an array. Its only element (unless you are in a function) will be what you want. In the general case, it's the last element in the array that you want to look at. In bash 4.4, this is ${BASH_SOURCE[-1]}.
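 As a small illustrative sketch of the BASH_SOURCE approach (added here, not part of the original answer; the variable names are arbitrary):
 script_path=${BASH_SOURCE[0]}
 script_dir=$( cd "$( dirname "$script_path" )" && pwd )
 script_name=$( basename "$script_path" )
 printf '%s/%s\n' "$script_dir" "$script_name"
 This prints the running script's absolute directory and file name.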
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423006", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201418/" ] }
423,012
I roughly know about the files located under /dev. I know there are two types (character/block), accessing these files communicates with a driver in the kernel. I want to know what happens if I delete one -- specifically for both types of file. If I delete a block device file, say /dev/sda , what effect -- if any -- does this have? Have I just unmounted the disk? Similarly, what if I delete /dev/mouse/mouse0 -- what happens? Does the mouse stop working? Does it automatically replace itself? Can I even delete these files? If I had a VM set up, I'd try it.
Those are simply (special) files. They only serve as "pointers" to the actual device. (i.e. the driver module inside the kernel.) If some command/service already opened that file, it already has a handle to the device and will continue working. If some command/service tries to open a new connection, it will try to access that file and fail because of "file not found". Usually those files are populated by udev , which automatically creates them at system startup and on special events like plugging in a USB device, but you could also manually create those using mknod .
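 As an illustration (my own sketch, not part of the original answer): if you removed a well-known node such as /dev/null, you could recreate it by hand, since on Linux it is the character device with major 1 and minor 3:
 sudo mknod -m 666 /dev/null c 1 3
 For nodes managed by udev, re-triggering events ( sudo udevadm trigger ) will usually recreate them as well.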
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/423012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274927/" ] }
423,121
I know that macOS is a UNIX operating system , but I don't know whether macOS could be called a UNIX distribution in the same way Gentoo or Debian are GNU/Linux distributions . Is macOS a UNIX distribution? If it isn't, how could one correctly refer to macOS' membership in the UNIX operating system family and compliance to Single UNIX Specification (i.e., is it a Unix variant , a Unix version , a Unix flavor , etc.)? Also, this question applies to Solaris, HP-UX and other unices (are they all UNIX distributions?). Furthermore, is the word "distribution" restricted to GNU(/Linux, /Hurd, /kFreeBSD, /etc) operating systems, or may it be used in other cases? EDIT: I've realized that the UNIX' official website uses "UNIX implementations" and "UNIX operating systems" for referring to the family of Unix operating systems, i.e., the ones which implement the Single Unix Standard.
What is UNIX at all ? Short answer: UNIX is a specification/standard nowadays. At the time of writing, to quote the official sources , "UNIX® is a registered trademark of The Open Group", the company which among many things provides UNIX certification : "UNIX®, an open standard owned and managed by The Open Group, is an enabler of key technologies and delivers reduced total cost of ownership, increased IT agility, stability, and interoperability in hetero¬geneous environments enabling business and market innovation across the globe." The same page specifically states which specification defines UNIX: The latest version of the certification standard is UNIX V7, aligned with the Single UNIX Specification Version 4, 2013 Edition Details of those specs can be found here . Curiously enough the latest standard listed on their website is UNIX 03, and to quote another source , "UNIX® 03 - the mark for systems conforming to version 3 of the Single UNIX Specification". To quote the About Us page with my own emphasis in bold: The success of the UNIX approach led to a large number of “look-alike” operating systems, often divergent in compatibility and interoperability. To address this, vendors and users joined together in the 1980s to create the POSIX® standard and later the Single UNIX Specification . So what this suggests (or at least so is my interpretation), is that when an OS conforms to the POSIX standard and Single UNIX Specifications, it is compatible in behavior with Unix as an OS that once existed at one point in time in history. Please note that this does not mention the presence of any traces of the original Unix source code, nor does it mention the kernel in any way (this will become important later). As for the AT&T and System V Unix developed by Ritchie and Thompson, nowadays we can say it has ceased to exist. Based on the above sources, it seems UNIX nowadays is not that specific OS, but rather a standard derived out of the best possible generalization for how operating systems in Unix family behave. Where does macOS X stand in the *nix world ? In a very specific definition, macOS version 10.13 High Sierra on Intel-based hardware is compliant to the UNIX 03 standard and to quote the pdf certificate , "Apple Inc. has entered into a Trademark License Agreement with X/Open Company Limited." Side note: I hesitate to question what it would means for macOS 10.13 on non-Intel hardware to be treated as, but considering that hardware is mentioned for other OS, the hardware is significant. Example: "Hewlett Packard Enterprise: HP-UX 11i V3 Release B.11.31 or later on HP 9000 Servers with Precision Architecture" (from the register page ). Let’s return to previous section of my answer. Since this particular version of OS conforms to interoperability and compatibility standard, it means the OS is as close in behavior and system implementation as possible to original Unix as an Operating System. At the very least, it will be close in behavior and in environment. The closer it gets to system level and kernel level, the more specific and shadier the area will get, but at least fundamental mechanics and behavior that were present in Unix should be present in an OS that aims to be compatible. macOS X should be very close to that aim. What is a distribution ? To quote Wikipedia : A Linux distribution (often abbreviated as distro) is an operating system made from a software collection, which is based upon the Linux kernel and, often, a package management system. 
Let's remember for a second that Linux as in the Linux Kernel is supposed to be distributable software, with modifications, or at least in accordance with GPL v2 . If we consider a package manager and kernel, Ubuntu and Red Hat being distributions makes sense. macOS X has a different kernel than the original AT&T Unix - therefore calling macOS X a Unix distribution doesn't make sense. People suggest that macOS X kernel is based on FreeBSD, but to quote FreeBSD Wiki : The XNU kernel used on OS X includes a few subsystems from (older versions of) FreeBSD, but is mostly an independent implementation Some people mistakenly call the OS X kernel Darwin. To quote Apple's Kernel Programming Guide : The kernel, along with other core parts of OS X are collectively referred to as Darwin. Darwin is a complete operating system based on many of the same technologies that underlie OS X. And to quote the same page: Darwin technology is based on BSD, Mach 3.0, and Apple technologies. Based on everything above we can confidently say, OS X is not a distribution , in the sense of Linux distribution. Similarly, other mentioned OSs are POSIX compliant and are certified Unix systems, but again they differ in kernels and variations on underlying system calls (which is why there exist books on Solaris system programming and it's a worthy subject in its own right). Therefore, they aren't distributions in the sense Linux distributions are - a common core with variations on utilities. In case of Linux, you see books on Linux system programming or Linux kernel programming, not system programming specific to distribution, because there's nothing system-specific about a particular distribution. Confirmation of what we see here can be found in official documentation. For instance, article on developerWorks by IBM which addressed difference between UNIX OS types and Linux distributions states (emphasis added): Most modern UNIX variants known today are licensed versions of one of the original UNIX editions . Sun's Solaris, Hewlett-Packard's HP-UX, and IBM's AIX® are all flavors of UNIX that have their own unique elements and foundations . In other words, they are based on the same foundation, but they don't share exactly same one in the sense Linux distros share the kernel. Considerations Note that the word distribution appears to be mostly used when referencing operating systems which have the Linux kernel at its core. Take for instance the BSD type of Operating Systems: there's GhostBSD , which is based on the kernel and uses some of the utilities of FreeBSD , but I've never seen it to be referred to as a BSD distribution; every BSD OS only mentions what it is based on and usually an operating system is mentioned as an OS in its own right. Sure, BSD stands for Berkeley Software Distribution, but...that's it. To quote this answer on our site in response to the question whether different BSD versions use same kernels: No, although there are similarities due to the historic forks. Each project evolved separately. They are not distributions in the sense of Linux distributions. Consider the copyright notice from this document : Portions of this product may be derived from the UNIX® and Berkeley 4.3 BSD systems Notes the before mentioned POSIX standard is also referenced as IEEE standard (where IEEE is Institute of Electrical and Electronics Engineers, which handles among other things IT types of things). 
to quote Wikipedia : "In 2016, with the release of macOS 10.12 Sierra, the name was changed from OS X to macOS to streamline it with the branding of Apple's other primary operating systems: iOS, watchOS, and tvOS.[56]" Mac OS X history answer conceptual difference between Linux and BSD kernel In conclusion: macOS X can be referred to as either Unix-like OS, Unix-like system, Unix implementation, POSIX compliant-OS when you want to relate it to the original AT&T Unix; "Unix version" wouldn't be the appropriate term because macOS X is vastly different from the original AT&T Unix, and as mentioned before there's no more Unix in the sense of software, and it is now a more of an industry standard; Probably the word "distribution" fits only within the Linux world. The true problem is that you (the reader) and I have way too much time to argue about the topic which lawyers should be arguing about. Maybe we should be like Linux Torvalds and use terminology and OSs that just allows us to move on with the life and do the things we honestly care about and are supposed to care about.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/423121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233964/" ] }
423,154
I wrote a C++ watchdog that runs a set of scripts to determine whether there is a problem on that system. The code is a bit hairy so I won't show it here, but it is equivalent to a system call as follow: int const r(system("/bin/sh /path/to/script/test-health")); Only, r is 0 when the script fails because a command is missing in an if statement. There is the offensive bit of the script: set -e[...]if unknown_command arg1 arg2then[...] The unknown_command obviously fails since... it is unknown. At that point the script ends because I have the set -e at the start. The exit code, though, is going to be 0 in that situation. Would there be a way for me to get an exit code of 1 in such a situation? i.e. the question is detecting the error without having to add a test to know whether unknown_command exists. I know how to do that: if ! test -x unknown_commandthen exit 1fi My point is that when I write that script, I expect unknown_command to exist as I install it myself, but if something goes wrong or someone copies the script on another system without installing everything, I'd like to know that I got an error executing the script.
 From the POSIX standard, regarding set -e : The -e setting shall be ignored when executing the compound list following the while , until , if , or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last. This means that executing an unknown command in an if statement will not cause the script to terminate when running under set -e . Or rather, set -e will not cause it to terminate. Use command -v to test whether a utility exists in the current PATH , unless you use full paths to the utilities that you invoke in which case a -x test would be sufficient, as in your question. See also: Why not use "which"? What to use then?
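 A minimal sketch of that guard (an addition, not from the original answer), using the script's own names:
 if ! command -v unknown_command >/dev/null 2>&1; then
     echo "unknown_command: not found" >&2
     exit 1
 fi
 Unlike the -x test, command -v does the PATH search for you and also finds shell functions and builtins.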
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423154", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57773/" ] }
423,186
I need to diff two files ignoring all whitespaces and empty/whitespace lines but for some reasons diff options I found does not do all well, it keeps showing the empty line in file1... $ cat file12 nodes configured13 resources configured$ cat file22 nodes configured23 resources configured$ diff -ywBEZb -W 200 --suppress-blank-empty --suppress-common-lines file1 file213 resources configured | 23 resources configured <$ od -bc file10000000 062 040 156 157 144 145 163 040 143 157 156 146 151 147 165 162 2 n o d e s c o n f i g u r0000020 145 144 012 061 063 040 162 145 163 157 165 162 143 145 163 040 e d \n 1 3 r e s o u r c e s0000040 143 157 156 146 151 147 165 162 145 144 012 012 c o n f i g u r e d \n \n0000054$ od -bc file20000000 062 040 156 157 144 145 163 040 143 157 156 146 151 147 165 162 2 n o d e s c o n f i g u r0000020 145 144 012 062 063 040 162 145 163 157 165 162 143 145 163 040 e d \n 2 3 r e s o u r c e s0000040 143 157 156 146 151 147 165 162 145 144 012 c o n f i g u r e d \n0000053$ diff -ywBEZb -W 200 --suppress-blank-empty --suppress-common-lines file1 file2 | od -bc -0000000 061 063 040 162 145 163 157 165 162 143 145 163 040 143 157 156 1 3 r e s o u r c e s c o n0000020 146 151 147 165 162 145 144 011 011 011 011 011 011 011 011 011 f i g u r e d \t \t \t \t \t \t \t \t \t0000040 011 040 040 040 174 011 062 063 040 162 145 163 157 165 162 143 \t | \t 2 3 r e s o u r c0000060 145 163 040 143 157 156 146 151 147 165 162 145 144 012 011 011 e s c o n f i g u r e d \n \t \t0000100 011 011 011 011 011 011 011 011 011 011 040 040 040 074 012 \t \t \t \t \t \t \t \t \t \t < \n0000117$
Use the -B switch: -B --ignore-blank-lines Ignore changes whose lines are all blank. To ignore whitespaces, use the -b and -w switches: -b --ignore-space-change Ignore changes in the amount of white space.-w --ignore-all-space Ignore all white space. Or simply RTM . EDIT: As -B (and some other diff switches) seems to be not working (I didn't find any information whether it's reported as bug), you need to use a different way to ignore blank lines and white spaces. I would suggest something like this: [my@pc ~]$ cat file1.txt2 nodes configured13 resources configured[my@pc ~]$ cat file2.txt2 nodes configured23 resources configured[my@pc ~]$ diff <(grep -vE '^\s*$' file1.txt) <(grep -vE '^\s*$' file2.txt)2c2< 13 resources configured---> 23 resources configured
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274554/" ] }
423,205
 I have two servers: Server1 -> Static IP1, Server2 -> Static IP2. Server2's firewall allows access only from Static IP1. I can connect to Server1 via ssh from anywhere. How can I connect to Server2 via ssh in one step from my PC, which is behind a dynamic IP, instead of connecting via ssh to Server1 and then doing another ssh to Server2 from within Server1's shell?
If you have OpenSSH 7.3p1 or later, you can tell it to use server1 as a jump host in a single command: ssh -J server1 server2 See fcbsd ’s answer for older versions.
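 You can also make this permanent in ~/.ssh/config so that a plain ssh server2 does the jump automatically (a sketch; the host names and IP are placeholders):
 Host server2
     HostName 198.51.100.2   # Static IP2
     ProxyJump server1
 This needs OpenSSH 7.3 or later on the client, the same as the -J option.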
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275310/" ] }
423,282
 I use Bash 4.3.48(1) and I have a .sh file containing about 20 variables right under the shebang. The file contains only variables. This is the pattern:
 x="1"
 y="2"
 ... I need to export all these variables in a DRY way: for example, one export for all vars instead of, say, 20 exports for 20 vars. What's the most elegant way (shortest, most efficient by all means) to do that inside that file? A for loop? An array? Maybe something simpler than these (some kind of collection sugar syntax)?
 Use set -a (or the equivalent set -o allexport ) at the top of your file to enable the allexport shell option. Then use set +a (or set +o allexport ) at the end of the file (wherever appropriate) to disable the allexport shell option. Enabling the allexport shell option will have the following effect (from the bash manual): Each variable or function that is created or modified is given the export attribute and marked for export to the environment of subsequent commands. This shell option, set with either set -a or set -o allexport , is defined in POSIX (and should therefore be available in all sh -like shells) as When this option is on, the export attribute shall be set for each variable to which an assignment is performed; [...] If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset. The variables set while this option is enabled will be exported, i.e. made into environment variables. These environment variables will be available to the current shell environment and to any subsequently created child process environments, as per usual. This means that you will have to either source this file (using . , or source in shells that have this command), or start the processes that should have access to the variables from this file.
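 A minimal sketch of what the file itself could look like with this change (an illustration, not the original poster's file):
 #!/bin/bash
 set -a
 x="1"
 y="2"
 # ... the remaining variables ...
 set +a
 Any script that sources this file (e.g. . ./vars.sh, a placeholder name) will then see x, y, etc. as exported environment variables.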
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/423282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
423,294
Are the files in /etc/sudoers.d read in a particular order? If so, what is the convention for that ordering?
From man sudoers , the exact position found with this command: $ LESS='+/sudo will suspend processing' man sudoers Files are parsed in sorted lexical order. That is, /etc/sudoers.d/01_first will be parsed before /etc/sudoers.d/10_second . Be aware that because the sorting is lexical, not numeric, /etc/sudoers.d/1_whoops would be loaded after /etc/sudoers.d/10_second . A consistent number of leading zeroes in the file names can avoid such problems. That's under the title: Including other files from within sudoers $ LESS='+/Including other files from within sudoers' man sudoers Lexical order is also called "dictionary order" as given by the values defined by the environment variable LC_COLLATE when the locale is C (numbers then Uppercae then lowercase letters). That's the same order as given by LC_COLLATE=C ls /etc/sudoers.d/ . The list of files included and the specific order in which they are loaded could be exposed with: $ visudo -c/etc/sudoers: parsed OK/etc/sudoers.d/README: parsed OK/etc/sudoers.d/me: parsed OK/etc/dirtest/10-defaults: parsed OK/etc/dirtest/1one: parsed OK/etc/dirtest/2one: parsed OK/etc/dirtest/30-alias: parsed OK/etc/dirtest/50-users: parsed OK/etc/dirtest/Aone: parsed OK/etc/dirtest/Bone: parsed OK/etc/dirtest/aone: parsed OK/etc/dirtest/bone: parsed OK/etc/dirtest/zone: parsed OK/etc/dirtest/~one: parsed OK/etc/dirtest/éone: parsed OK/etc/dirtest/ÿone: parsed OK Note that the order is not UNICODE but C.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
423,301
I've been using the default configuration of vim for a while and want to make a few changes. However, if I edit ~/.vimrc it seems to overwrite all other configuration settings of /etc/vimrc and such, e.g. now there is no syntax highlighting. Here is what vim loads: :scriptnames/etc/vimrc/usr/share/vim/vimfiles/archlinux.vim~/.vimrc/usr/share/vim/vim80/plugin/... <there are a few> In other words I want to keep whatever there is configured in vim, but simply make minor adjustments for my shell user. What do I need to do to somehow weave ~/.vimrc into the existing configuration or what do I need to put into ~/.vimrc so it loads the default configuration? EDIT: My intended content of ~/.vimrc : set expandtabset shiftwidth=2set softtabstop=2
You can source the global Vim configuration file into your local ~/.vimrc : unlet! skip_defaults_vimsource $VIMRUNTIME/defaults.vimset mouse-=a See :help defaults.vim and :help defaults.vim-explained for details.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/423301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153505/" ] }
423,365
 I am currently studying for the CompTIA Linux+ exam and I am at the Shared Library chapter. Among other things, it says that the /etc/ld.so.cache file is a binary file, but in my case it is not. It is a regular file, whose content I can easily view, and sure enough it contains library locations. ls -l /etc/ld.so.cache -rw-r--r--. 1 root root 154135 Feb 11 11:17 /etc/ld.so.cache I've seen in several materials that the cache file is a binary one and I am curious about this mismatch. Is that file's type distro-dependent? I am using Fedora Workstation 27.
You are mixing up the definitions of a binary file, and an executable (binary) file. The book is right mentioning /etc/ld.so.cache is a binary file (a data file). As you can see running file /etc/ld.so.cache $ file /etc/ld.so.cache /etc/ld.so.cache: data From man ld.so : When resolving shared object dependencies, the dynamic linker first inspects each dependency string to see if it contains a slash (this can occur if a shared object pathname containing slashes was specified at link time). If a slash is found, then the dependency string is interpreted as a (relative or absolute) pathname, and the shared object is loaded using that pathname. If a shared object dependency does not contain a slash, then it is searched for in the following order: ..... From the cache file /etc/ld.so.cache, which contains a compiled list of candidate shared objects previously found in the augmented library path. If, however, the binary was linked with the -z nodeflib linker option, shared objects in the default paths are skipped. Shared objects installed in hardware capability directories (see below) are preferred to other shared objects. From man ldconfig /etc/ld.so.cache File containing an ordered list of libraries found in the directories specified in /etc/ld.so.conf, as well as those found in /lib and /usr/lib. Furthermore, /etc/ld.so.cache is regenerated upon running ldconfig . See Relationship between ldconfig and ld.so.cache Double checking it is indeed a list of library files: $ strings /etc/ld.so.cache | head -5ld.so-1.7.0glibc-ld.so.cache1.1libz.so.1/lib/x86_64-linux-gnu/libz.so.1libxtables.so.7 Or again, using ldconfig -p : $ ldconfig -p | head -5227 libs found in cache `/etc/ld.so.cache' libz.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libz.so.1 libxtables.so.7 (libc6,x86-64) => /lib/libxtables.so.7 libxml2.so.2 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libxml2.so.2 libxml-security-c.so.17 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libxml-security-c.so.17
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275429/" ] }
423,376
 Is it possible to expand an "or" choice in the shell when reading files, for example? What I mean by this is that, for instance, grep supports syntax like (A|B) to match A or B in a file. Similarly, if I have these files: file1.txt, file2.txt, file3.txt, file4.txt, file5.txt, I could do cat file{1..5}.txt in bash , as it expands the range. Is there an equivalent way to do this for just a couple of files? E.g. cat file(1|5).txt and only print those 2?
The standard file name globbing pattern to match a digit is [0-9] . This matches a single digit: cat file[0-9].txt To select only two of these: cat file[25].txt For larger numbers than 9, brace expansion will be useful (but see note below for the difference between globbing patterns and brace expansions): cat file{25..60}.txt Again, brace expansion allows for individual numbers as well: cat file{12,45,900,xyz}.txt (note that in the example above, the brace expansion does not involve an arithmetic loop, but just generates names based on the strings provided). In bash , with the extglob shell option enabled ( shopt -s extglob ), the following will also work: cat file@(12|45|490|foo).txt The @(...) pattern will match any one of the included | -delimited patterns. The difference between globbing patterns as [...] and @(...) and brace expansions, is that a brace expansion is generated on the command line and may not actually match any existing names in the current directory. A filename globbing pattern will match names, but the shell won't complain if not all possible name exist. If no matching name exists, the pattern will remain be unexpanded, unless also the nullglob shell option is set, in which case the pattern is removed. Example: touch file1ls file[0-9] Here, only the file listing for file1 will be shown. With ls file{0..9} , ls would complain about not finding file0 , file2 etc. In the following example, the first command will only touch existing names that matches the given pattern, while the second line will create files that does not already exist: touch file[0-9]touch file{0..9}
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139523/" ] }
423,389
 Here is my code; I want to compare $COUNTER against various multiples. if [ "$COUNTER" = "5" ]; then It's okay, but I want it to do it for dynamic values like 5, 10, 15, 20, etc.
Conclusion of the various comments seems to be that the simplest answer to the original question is if ! (( $COUNTER % 5 )) ; then
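 Put into a small self-contained example (added for illustration, assuming bash or another shell with (( )) arithmetic):
 COUNTER=0
 while [ "$COUNTER" -le 20 ]; do
     if ! (( COUNTER % 5 )); then
         echo "multiple of 5: $COUNTER"
     fi
     COUNTER=$(( COUNTER + 1 ))
 done
 This prints 0, 5, 10, 15 and 20, i.e. the test fires on every multiple of 5.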
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157517/" ] }
423,392
I'd like to print a man style usage message to describe a shell function like this output man find : NAME find - search for files in a directory hierarchySYNOPSIS find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]DESCRIPTION This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and opera‐ tions, true for or), at which point find moves on to the next file name. If no starting-point is speci‐ fied, `.' is assumed.OPTIONS I am facing an error message on the ` character. Following simple script shows the error: ~$ cat <<EOF`.'EOFbash: bad substitution: no closing "`" in `.' I though heredoc was a cool way to echo strings by pasting them without having to escape its content such a quotes, etc... I assume I was wrong :/ Can someone explain this behavior please? Can heredoc accept ` character? Edit 2 : I accepted the answer of quoted here-document <<'END_HELP' , but I finally won't use it for this kind of complete manual output as kusalananda does suggests Edit 1 : (For future reads) the limit with using quoted here-document is that is prevents to use tput in the here-document . To do so, I did the following: unquoted here-document , for tput commands to be executed prevent the "bad substitution" error by escaping the backtick instead use tput within the here-document Example: normal=$( tput sgr0 ) ;bold=$(tput bold) ;cat <<END_HELP # here-document not quoted${bold}NAME${normal} find - search for files in a directory hierarchy${bold}SYNOPSIS${normal} find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point...] [expression]${bold}DESCRIPTION${normal} This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and opera‐ tions, true for or), at which point find moves on to the next file name. If no starting-point is speci‐ fied, \`.' is assumed.END_HELPunset normal ;unset bold ; Here, note the escaped backtick that was source of error: \`.'
 The backtick introduces a command substitution. Since the here-document is not quoted, this will be interpreted by the shell. The shell complains since the command substitution has no ending backtick. To quote a here-document, use
 cat <<'END_HELP'
 something something help
 END_HELP
 or
 cat <<\END_HELP
 something something help
 END_HELP
 Regarding your comments on the resolution of this issue: Utilities seldom output a complete manual by themselves but may offer a synopsis or basic usage information. This is seldom, if ever, colorized (since its output may not be directed to a terminal or pager like less ). The real manual is often typeset using groff or a dedicated man-page formatter like mandoc and is handled completely separately from the code.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168003/" ] }
423,453
 How do I convert a value from scientific notation to decimal in the shell? (C shell preferred.) Also I'd want to convert it from e-12 to e-9, i.e. show 42.53e-12 as 0.04253. I have to do this for a list.
printf will do this for you, from shell. $ FOO=42.53e-12 $ BAR=$(printf "%.14f" $FOO) $ echo $BAR 0.00000000004253 $ In ancient+arcane C-shell, this would be. $ set FOO=42.53e-12$ set BAR=`printf "%.14f" $FOO`$ echo $BAR0.00000000004253$
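 For a whole list, and for the e-12 to e-9 scaling mentioned in the question, one possible sketch (the sample values other than 42.53e-12 are made up) is to let awk do the arithmetic, since it also understands scientific notation and works the same from csh as it is just a pipeline:
 printf '%s\n' 42.53e-12 7.1e-12 19e-12 | awk '{ printf "%.5f\n", $1 * 1e9 }'
 For 42.53e-12 this prints 0.04253, i.e. the value expressed in units of e-9.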
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275476/" ] }
423,478
 I have ~30k files. Each file contains ~100k lines. A line contains no spaces. The lines within an individual file are sorted and duplicate-free. My goal: I want to find all duplicate lines across two or more files and also the names of the files that contained duplicated entries. A simple solution would be this: cat *.words | sort | uniq -c | grep -v -F '1 ' And then I would run: grep 'duplicated entry' *.words Do you see a more efficient way?
Since all input files are already sorted, we may bypass the actual sorting step and just use sort -m for merging the files together. On some Unix systems (to my knowledge only Linux), it may be enough to do sort -m *.words | uniq -d >dupes.txt to get the duplicated lines written to the file dupes.txt . To find what files these lines came from, you may then do grep -Fx -f dupes.txt *.words This will instruct grep to treat the lines in dupes.txt ( -f dupes.txt ) as fixed string patterns ( -F ). grep will also require that the whole line matches perfectly from start to finish ( -x ). It will print the file name and the line to the terminal. Non-Linux Unices (or even more files) On some Unix systems, 30000 file names will expand to a string that is too long to pass to a single utility (meaning sort -m *.words will fail with Argument list too long , which it does on my OpenBSD system). Even Linux will complain about this if the number of files are much larger. Finding the dupes This means that in the general case (this will also work with many more than just 30000 files), one has to "chunk" the sorting: rm -f tmpfilefind . -type f -name '*.words' -print0 |xargs -0 sh -c ' if [ -f tmpfile ]; then sort -o tmpfile -m tmpfile "$@" else sort -o tmpfile -m "$@" fi' sh Alternatively, creating tmpfile without xargs : rm -f tmpfilefind . -type f -name '*.words' -exec sh -c ' if [ -f tmpfile ]; then sort -o tmpfile -m tmpfile "$@" else sort -o tmpfile -m "$@" fi' sh {} + This will find all files in the current directory (or below) whose names matches *.words . For an appropriately sized chunk of these names at a time, the size of which is determined by xargs / find , it merges them together into the sorted tmpfile file. If tmpfile already exists (for all but the first chunk), this file is also merged with the other files in the current chunk. Depending on the length of your filenames, and the maximum allowed length of a command line, this may require more or much more than 10 individual runs of the internal script ( find / xargs will do this automatically). The "internal" sh script, if [ -f tmpfile ]; then sort -o tmpfile -m tmpfile "$@"else sort -o tmpfile -m "$@"fi uses sort -o tmpfile to output to tmpfile (this won't overwrite tmpfile even if this is also an input to sort ) and -m for doing the merge. In both branches, "$@" will expand to a list of individually quoted filenames passed to the script from find or xargs . Then, just run uniq -d on tmpfile to get all line that are duplicated: uniq -d tmpfile >dupes.txt If you like the "DRY" principle ("Don't Repeat Yourself"), you may write the internal script as if [ -f tmpfile ]; then t=tmpfileelse t=/dev/nullfisort -o tmpfile -m "$t" "$@" or t=tmpfile[ ! -f "$t" ] && t=/dev/nullsort -o tmpfile -m "$t" "$@" Where did they come from? For the same reasons as above, we can't use grep -Fx -f dupes.txt *.words to find where these duplications came from, so instead we use find again: find . -type f -name '*.words' \ -exec grep -Fx -f dupes.txt {} + Since there is no "complicated" processing to be done, we may invoke grep directly from -exec . The -exec option takes a utility command and will place the found names in {} . With + at the end, find will place as many arguments in place of {} as the current shell supports in each invocation of the utility. To be totally correct, one may want to use either find . -type f -name '*.words' \ -exec grep -H -Fx -f dupes.txt {} + or find . 
-type f -name '*.words' \ -exec grep -Fx -f dupes.txt /dev/null {} + to be sure that filenames are always included in the output from grep . The first variation uses grep -H to always output matching filenames. The last variation uses the fact that grep will include the name of the matching file if more than one file is given on the command line. This matters since the last chunk of filenames sent to grep from find may actually only contain a single filename, in which case grep would not mention it in its results. Bonus material: Dissecting the find + xargs + sh command: find . -type f -name '*.words' -print0 |xargs -0 sh -c ' if [ -f tmpfile ]; then sort -o tmpfile -m tmpfile "$@" else sort -o tmpfile -m "$@" fi' sh find . -type f -name '*.words' will simply generate a list of pathnames from the current directory (or below) where each pathnames is that of a regular file ( -type f ) and that has a filename component at the end that matches *.words . If only the current directory is to be searched, one may add -maxdepth 1 after the . , before -type f . -print0 will ensure that all found pathnames are outputted with a \0 ( nul ) character as delimiter. This is a character that is not valid in a Unix path and it enables us to process pathnames even if they contain newline characters (or other weird things). find pipes its output to xargs . xargs -0 will read the \0 -delimited list of pathnames and will execute the given utility repeatedly with chunks of these, ensuring that the utility is executed with just enough arguments to not cause the shell to complain about a too long argument list, until there is no more input from find . The utility invoked by xargs is sh with a script given on the command line as a string using its -c flag. When invoking sh -c '...some script...' with arguments following, the arguments will be available to the script in $@ , except for the first argument , which will be placed in $0 (this is the "command name" that you may spot in e.g. top if you are quick enough). This is why we insert the string sh as the first argument after the end of the actual script. The string sh is a dummy argument and could be any single word (some seem to prefer _ or sh-find ).
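A minimal illustration of the simple (Linux) variant, using two tiny made-up pre-sorted files:

    printf '%s\n' apple banana cherry > a.words
    printf '%s\n' banana cherry date  > b.words
    sort -m ./*.words | uniq -d >dupes.txt     # dupes.txt now holds "banana" and "cherry"
    grep -Fx -f dupes.txt ./*.words            # prints e.g. "a.words:banana", "b.words:cherry", ...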
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423478", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275508/" ] }
423,534
when I try:
$ ip -6 addr
I get something like:
inet6 fe80::d773:9cf0:b0fd:572d/64 scope link
if I try to ping that from the machine itself:
$ ping6 fe80::d773:9cf0:b0fd:572d/64
unknown host
$ ping6 fe80::d773:9cf0:b0fd:572d
connect: Invalid argument
What am I doing wrong?
Any IPv6 address that starts with fe80: is the equivalent of IPv4 169.254.*.* address, i.e. it's a link-local address, reachable only in the network segment it's directly connected to, using the NIC that connects to that segment specifically. Unlike IPv4, however, it is perfectly normal for a NIC to have both the link-local IPv6 address and one or more global IPv6 addresses simultaneously. Since a fe80: IPv6 address is link-local, you must specify the network interface you want to use when pinging it. Example: $ ping6 fe80::beae:c5ff:febe:a742connect: Invalid argument$ ping6 -I eth0 fe80::beae:c5ff:febe:a742PING fe80::beae:c5ff:febe:a742(fe80::beae:c5ff:febe:a742) from fe80::beae:c5ff:febe:a742%eth0 eth0: 56 data bytes64 bytes from fe80::beae:c5ff:febe:a742%eth0: icmp_seq=1 ttl=64 time=0.182 ms64 bytes from fe80::beae:c5ff:febe:a742%eth0: icmp_seq=2 ttl=64 time=0.167 ms... You can also append the interface at the end of the address by using the % sign: ping6 fe80::beae:c5ff:febe:a742%eth0 . This requirement is only for link-local IPv6 addresses: you can ping globally routable IPv6 addresses without specifying the interface. $ ping6 2a00:1450:400f:80a::200e # that's ipv6.google.comPING 2a00:1450:400f:80a::200e(2a00:1450:400f:80a::200e) 56 data bytes64 bytes from 2a00:1450:400f:80a::200e: icmp_seq=1 ttl=55 time=17.6 ms64 bytes from 2a00:1450:400f:80a::200e: icmp_seq=2 ttl=55 time=19.6 ms...
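If you are not sure which link-local addresses are present on a segment, pinging the all-nodes multicast address on a given interface is one way to see who answers (eth0 is only an example name, and not every host replies to multicast echo):

    ping6 -c 2 ff02::1%eth0      # ff02::1 is the all-nodes multicast address; nodes on the link reply
    ip -6 neigh show dev eth0    # the responders' link-local addresses end up in the neighbour table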
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80246/" ] }
423,543
When I use the mouse to click and drag, it doesn't drag but selects multiple files instead. I use the latest stable Arch with Wayland and GNOME.
I found the problem: I had unknowingly enabled the experimental new views option. Disabling it fixed the issue.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275561/" ] }
423,550
I have a variable myVar in bash containing a long string that looks like this: -rw-rw-rw- root/root 16 2018-02-12 10:03 foo_tar/baz1234_ I want to delete everything in myVar before the last slash (including the last slash) , so that myVar ends up storing only baz1234_ . How can I remove everything and store the result in a variable? I have encountered solutions dealing with sed, but those tackle file handling, hence my question.
You should use bash Parameter expansion for this and use sub-string removal of type ${PARAMETER##PATTERN} From the beginning - ${PARAMETER##PATTERN} This form is to remove the described pattern trying to match it from the beginning of the string. The ## tries to do it with the longest text matching. Using for your example $ myVar='-rw-rw-rw- root/root 16 2018-02-12 10:03 foo_tar/baz1234_'$ echo "${myVar##*/}"baz1234_ As noted in the comments, the string in question seems to be output of ls command in a variable! which is not an ideal scripting way. Explain your requirement a bit more clearly.
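For completeness, the % and %% forms work from the other end and remove a suffix instead, so the two kinds of expansion complement each other:

    myVar='-rw-rw-rw- root/root 16 2018-02-12 10:03 foo_tar/baz1234_'
    echo "${myVar##*/}"    # baz1234_  (longest prefix up to the last / removed)
    echo "${myVar%/*}"     # everything before the last /, i.e. the part removed above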
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423550", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274966/" ] }
423,565
Hi, I have installed KVM and Virt-Manager on three physical computers, and I'm thinking of using a fourth computer to save snapshots or copies of snapshots. Does anyone know how to configure Virt-Manager so that, at the moment of making snapshots, they are saved on another server (Server 4) and not locally? Or saved locally, but with a copy made on another server (Server 4). Thank you
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262796/" ] }
423,572
I have an HP laptop, the reference is EliteBook 840 G3. I'm running Linux Mint 18 Sarah 32-bit. After several months of usage, everything works like a charm except the dual screen. Every time I plug in a new screen device, the computer completely crashes and I have to do a hard reboot. It happens no matter what display device I use, from typical monitors to overhead devices. The only workaround for me is to close the laptop's lid, wait for the computer to shut down and then plug in the VGA cable. But every time I connect other displays while running, it crashes. Any ideas what this could come from?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275586/" ] }
423,607
How can I remove all underscores in a string stored in a variable in bash? I currently have a variable myVar which contains a string foo1234_ . The underscore's position could however be anywhere else. I want to remove the underscore, and have tried myVar="${myVar//_}" , but get Bad substitution output. What am I doing wrong?
Use the substitution type of Parameter Expansion: underscored=A_B_Cecho "${underscored//_}" // replaces all the occurrences; if you replace by the empty string, you can omit the final / .
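Note that this expansion is a bash/ksh/zsh feature; a Bad substitution error usually means the script was run by a plain POSIX sh such as dash. A small sketch:

    #!/bin/bash                 # not #!/bin/sh, which may be dash
    myVar='foo_12_34_'
    echo "${myVar//_}"          # foo1234
    # portable POSIX alternative:
    echo "$myVar" | tr -d '_'   # foo1234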
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423607", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274966/" ] }
423,631
In yum when you do an update with yum update you normally get all success and error messages output on the CLI. Is there any way to have yum suppress all successful package installations and only print out when there was an error installing a package?
yum update -q -y The -q is quiet mode. The -y assumes yes to everything. This still prints errors.
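If you would rather collect the errors than watch them scroll by, one option is to discard stdout and keep only stderr in a log (the path is just an example):

    sudo yum -y -q update >/dev/null 2>/tmp/yum-errors.log
    [ -s /tmp/yum-errors.log ] && cat /tmp/yum-errors.log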
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72302/" ] }
423,632
systemctl --user seems to be working fine for the desktop user:
dev@dev-VirtualBox:~$ systemctl --user > /dev/null
dev@dev-VirtualBox:~$ echo $?
0
But when running the same command under the www-data user I get an unexpected response:
dev@dev-VirtualBox:~$ sudo su www-data -s /bin/bash
www-data@dev-VirtualBox:~$ systemctl --user > /dev/null
Failed to connect to bus: No such file or directory
www-data@dev-VirtualBox:~$ echo $?
1
How to enable systemctl --user here? Running Ubuntu 16.04
The per-user instance of systemd is started by a hook into the login process, a pam_systemd PAM, for both ordinary virtual/real terminal login and remote login via SSH and otherwise. You are not logging in. You are augmenting the privileges of your existing login session with sudo su www-data . (This is redundant, by the way. sudo -u www-data will go straight to www-data without your running commands as the superuser.) You have not invoked the hook. Therefore www-data 's per-user instance of systemd has not been started, and systemctl --user finds nothing to talk to. You can start it manually: % sudo install -d -o www-data /run/user/`id -u www-data`% sudo systemctl start user@`id -u www-data` (If you have done these in the wrong order, then stop the service and do them in the right order. Doing them in the wrong order ends up in a state where the runtime directory is empty and lacks the D-Bus and other socket files, but the service is running and will never communicate with clients.) The one niggle is that your DBUS_SESSION_BUS_ADDRESS variable needs to be changed so that Desktop Bus client programs like systemctl talk to the other account's Desktop Bus broker when you are running them with the privileges of that other account: % sudo -u www-data DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/`id -u www-data`/bus systemctl --user This is the simple way. The more complex way is to adjust the PAM configuration so that sudo invokes the pam_systemd hook. However, this has side-effects, particularly with respect to the XDG_RUNTIME_DIR environment variable, that you probably do not desire. Only try this alternative if you are confident that you are alright with the effects of introducing pam_systemd into sudo . Further reading Jonathan de Boyne Pollard (2014). Don't abuse su for dropping user privileges . Frequently Given Answers. Lennart Poettering et al. (2017). pam_systemd . systemd manual pages . Freedesktop.org.
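The manual steps above can be wrapped into a small helper script (user-systemctl is a made-up name; it simply repeats the commands shown earlier, in the right order):

    #!/bin/sh
    # usage: user-systemctl <user> <systemctl arguments...>
    u=$1; shift
    uid=$(id -u "$u") || exit 1
    sudo install -d -o "$u" "/run/user/$uid"          # runtime directory first ...
    sudo systemctl start "user@$uid"                  # ... then the per-user systemd instance
    sudo -u "$u" DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/$uid/bus" systemctl --user "$@"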
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52655/" ] }
423,699
true && echo foo will print foo . false && echo foo will not print foo . bash -c "exit $a" && echo foo will print foo depending on $a . Is there a more elegant way to write the last one? It seems a bit much having to start a shell simply to set the exit value. I am thinking something like: return $a && echo foo exit $a && echo foo except return only works in functions and exit will cause echo foo never to be run. Elegant in this context means: Easy to understand Portable Easy to read Easy to write Easy to remember Short High performance E.g. if true took the exit code as an argument (and this was a documented feature), it would be very elegant. So is there a more elegant way? Background One of the places this could be used is for: $ parallel 'setexitval {= $_%=3 =} || echo $?' ::: 0 1 2 3 Kusalananda's version is probably the best. It is clear what is going on. It is not long and cumbersome to write. It is easy to remember and portable. The only drawback is that it forks another shell (it costs ~0.5 ms): $ parallel '(exit {= $_%=3 =}) || echo $?' ::: 0 1 2 3 Another situation is when you want to simulate an error condition. Say, if you have a long running program, that sets different exit codes, and you are writing something that will react to the different exit codes. Instead of having to run the long running program, you can use (exit $a) to simulate the condition.
Use a subshell that exits with the specified value. This will output 5 : a=5; ( exit $a ) && echo foo; echo $? This will output foo and 0 (note, this zero is set by echo succeeding, not by the exit in the subshell): a=0; ( exit $a ) && echo foo; echo $? In your shortened form (without visibly setting a or investigating $? explicitly): ( exit $a ) && echo foo My answer is one that comes from just taking the question at face value and solving it as a curious quiz, "how may we set $? (using handy notation)". I make no attempt to search for any deeper meaning or function or even context in this case (because non was given). I additionally do not judge the validity, efficiency or usability of the solution. I sometimes, in my answers, do say things like "... but don't do that, because...", followed up with "instead, do this...". In this case, I'm choosing to interpret the question as simply a curious quiz. Further background was now added to the question. My take on the issue of forking a subshell is that scripts like GNU configure does this all the time , and that it's not terribly slow unless done in a really tight (long-running) loop. I can't say much about the use of this in/with GNU parallel as I'm not a frequent user of that software.
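If the extra fork matters, one fork-free alternative is a tiny helper function, since return accepts an exit status inside a function (ret is a made-up name):

    ret() { return "$1"; }      # works in any POSIX shell, including dash
    a=3
    ret "$a" && echo foo        # foo is not printed
    echo "$?"                   # 3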
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
423,711
When compiling, errors are often accompanied by a lengthy series of notes (cyan). Is there a g++ flag to disable this, only showing the error itself?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275672/" ] }
423,715
I would like to install Pi-Hole automatically inside Vagrant (VirtualBox). Therefore, in an automated script, it has to run when the box starts. Unfortunately, you normally have to answer multiple installation questions to install Pi-Hole, e.g., IPv4 or 6, ..., and you need keyboard interaction with the setup (by the user). Is there any way or solution to install it without any interaction? How can I write it in a Bash script or Vagrantfile?
This discussion says you can create the configuration options in the file /etc/pihole/setupVars.conf (documented here ) and run with the --unattended flag, eg: curl -L https://install.pi-hole.net | bash /dev/stdin --unattended
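An illustrative setupVars.conf might look like the following; the values are placeholders and the key names can differ between Pi-hole versions, so check the documentation linked above:

    # /etc/pihole/setupVars.conf
    PIHOLE_INTERFACE=eth0
    IPV4_ADDRESS=192.168.33.10/24
    IPV6_ADDRESS=
    PIHOLE_DNS_1=8.8.8.8
    PIHOLE_DNS_2=8.8.4.4
    QUERY_LOGGING=true
    INSTALL_WEB_SERVER=true
    INSTALL_WEB_INTERFACE=true

    # then, e.g. from a Vagrant shell provisioner:
    curl -L https://install.pi-hole.net | bash /dev/stdin --unattended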
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268281/" ] }
423,795
When creating a persistent reverse SSH tunnel, is autossh useful on a system running systemd? Typically, autossh is run by a service with the option -M set to zero which disables monitoring of the link. This means that ssh has to exit before autossh will restart it. From the man page: Setting the monitor port to 0 turns the monitoring function off, and autossh will only restart ssh upon ssh's exit. For example, if you are using a recent version of OpenSSH, you may wish to explore using the ServerAliveInterval and ServerAliveCountMax options to have the SSH client exit if it finds itself no longer connected to the server. In many ways this may be a better solution than the monitoring port. It seems that the systemd service itself is capable of doing this with a service file that contains these options: Type=simpleRestart=alwaysRestartSec=10 So is autossh redundant when run by a systemd service? Or is it doing other things that help to keep the SSH connection up? Thanks.
Thanks for a great question. I had a systemd service running with autossh -M 0. And just realized that using autossh along with systemd is redundant. Here is my new service without autossh. It is running fine and restarting even if I kill the ssh process myself. [Unit]Description=autosshWants=network-online.targetAfter=network-online.target[Service]Type=simpleExecStart=ExecStart=/usr/bin/ssh -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -o ExitOnForwardFailure=yes -R8023:localhost:22 sshtunnel@[address of my server] -N -p 22 -i /root/.ssh/id_rsa_insecureRestart=alwaysRestartSec=60[Install]WantedBy=multi-user.target HOW TO START THE SERVICE: Create sshtunnel user on the server (don't give root permissions) Put the unencrypted RSA key "id_rsa_insecure" here /root/.ssh/. The public part you should put on the server in /home/sshtunnel/.ssh/authorized_keys Make a file "autossh.service" with the code above and put it here /etc/systemd/system Run following commands sudo systemctl daemon-reloadsudo systemctl start autosshsudo systemctl enable autossh A few explanatory notes: ExitOnForwardFailure this is what I missed first time. Without this option if port forwarding fails for some reason (and it happens, believe me) the SSH tunnel will exists but it would be useless. So it needs to be killed and restarted. /root/.ssh/id_rsa_insecure As you can see from the name the key is not encrypted so it has to be a special key, and you have to restrict the user with this key from doing anything on the server side but creating a reverse channel. The straightforward way to do it is to restrict the behavior in the authorized_keys file on the server side. # /home/sshtunnel/.ssh/authorized_keyscommand="/bin/true" ssh-rsa [the public key] This prevents the "sshtunnel" user from launching the shell and performing any commands. Additional security: What I tried and it did not work: 1) on server side: change the shell in /etc/passwd to /bin/false for "sshtunnel" user 2) on server side: add permitopen=host:port in the authorized_keys file for sshtunnel id_rsa_insecure key What I did not try but I think it should work: you can restrict the "sshtunnel" user further (allowing only specific port forwarding) by configuring SELinux user profiles - but I don't have a code handy for that. Please let me know if anyone would have a code. I would love to hear any security faults in my current solution. Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122590/" ] }
423,797
I have an Alienware Aurora R7, running Arch Linux. On shutdown, the kernel panics, with something like this in the panic message (omitting timestamps): BUG: Unable to handle kernel NULL pointer dereference at (null)IP: i2c_dw_isr+0x3ef/0x6d0PGD 0 P4D 0Oops: 0000 [#1] PREEMPT SMP PTI From various sources ( 1 , 2 ), this seems to be related to the i2c-designware-core module, and the workaround is blacklisting it. However, with recent kernels (seems to be 4.10 and above), this doesn't seem to be built as a module: # uname -srv Linux 4.15.2-2-ARCH #1 SMP PREEMPT Thu Feb 8 18:54:52 UTC 2018# zgrep DESIGNWARE /proc/config.gzCONFIG_I2C_DESIGNWARE_CORE=yCONFIG_I2C_DESIGNWARE_PLATFORM=yCONFIG_I2C_DESIGNWARE_SLAVE=yCONFIG_I2C_DESIGNWARE_PCI=mCONFIG_I2C_DESIGNWARE_BAYTRAIL=yCONFIG_SPI_DESIGNWARE=mCONFIG_SND_DESIGNWARE_I2S=mCONFIG_SND_DESIGNWARE_PCM=y So I have resorted to making the kernel reboot on panic: # cat /proc/cmdlineroot=UUID=e5018f7e-5838-4a47-b146-fc1614673356 rw initrd=/intel-ucode.img initrd=/initramfs-linux.img panic=10 sysrq_always_enabled=1 printk.devkmsg=on (The odd paths in the /proc/cmdline are because I boot directly from UEFI, with entries created using efibootmgr . The paths are rooted at /boot , where my ESP is mounted.) This seems to be something for touchpads, but I don't have a touchpad and won't get one. What can I do to disable this thing? Do I have to build a custom kernel ? Since linux-lts is also newer than 4.10, (4.14, currently), there doesn't seem to be an easy way to install an older kernel either, where blacklisting might presumably work. Using nolapic as a kernel parameter solves the shutdown panic problem, but it causes the system to freeze a few minutes after boot, so I can't use it.
After reading kernel sources, I found a function we need to blacklist! Thanks to Stephen Kitt for the hint about initcall_blacklist . Add initcall_blacklist=dw_i2c_init_driver to the kernel command line. This works for me on kernel 4.15.0. For anyone else who'll find this answer. You can do it by editing /etc/default/grub : Run in the terminal: sudo -H gedit /etc/default/grub . Append blacklist string to the GRUB_CMDLINE_LINUX_DEFAULT : GRUB_CMDLINE_LINUX_DEFAULT="… initcall_blacklist=dw_i2c_init_driver" . Save the file, close the editor. Run in the terminal: sudo update-grub . Reboot and test!
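If you prefer to script the /etc/default/grub edit instead of using an editor, a sed one-liner along these lines should work (it assumes the value sits on a single line; a .bak backup is kept):

    sudo sed -i.bak 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 initcall_blacklist=dw_i2c_init_driver"/' /etc/default/grub
    sudo update-grub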
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
423,805
I have a basic understanding of how to switch a job in the foreground to the background and vice versa, but I am trying to come up with a way so that I can run multiple jobs in the background. I tried to put multiple jobs in the background, but only one of them was in a running state. I want to have a scenario where I can run multiple jobs in the background. I came across this website where I see multiple jobs running in the background. Can someone please break it down for me as to how I can run multiple jobs in the background?
You can use & to start multiple background jobs. Example to run sequentially: (command1 ; command2) & Or run multiple jobs in parallel: command1 & command2 & This will start multiple jobs running in the background. If you want to keep a job running in the background once you exit the terminal, you can use nohup . This will ensure that SIGHUP is not sent to the process once you exit the terminal. Example: nohup command &
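When several jobs have been started this way, wait lets a script block until all of them are done. A small sketch:

    sleep 3 & sleep 2 & sleep 1 &   # three jobs start immediately, all in the background
    jobs                            # lists the running background jobs
    wait                            # blocks until every background job has exited
    echo "all done"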
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275751/" ] }
423,854
I use Ubuntu 16.04 with the native Bash on it. I'm not sure if executing #!/bin/bashmyFunc() { export myVar="myVal"}myFunc equals in any sense, to just executing export myVar="myVal" . Of course, a global variable should usually be declared outside of a function (a matter of convention I assume, even if technically possible) but I do wonder about the more exotic cases where one writes some very general function and wants a variable inside it to still be available to everything, anywhere. Would export of a variable inside a function, be identical to exporting it globally, directly in the CLI, making it available to everything in the shell (all subshells, and functions inside them)?
Your script creates an environment variable, myVar , in the environment of the script. The script, as it is currently presented, is functionally exactly equivalent to #!/bin/bashexport myVar="myVal" The fact that the export happens in the function body is not relevant to the scope of the environment variable (in this case). It will start to exist as soon as the function is called. The variable will be available in the script's environment and in the environment of any other process started from the script after the function call. The variable will not exist in the parent process' environment (the interactive shell that you run the script from), unless the script is sourced (with . or source ) in which case the whole script will be executing in the interactive shell's environment (which is the purpose of "sourcing" a shell file). Without the function call itself: myFunc() { export myVar="myVal"} Sourcing this file would place myFunc in the environment of the calling shell. Calling the function would then create the environment variable. See also the question What scopes can shell variables have?
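A short demonstration script of that behaviour (the child bash -c stands in for any program started after the function call):

    #!/bin/bash
    myFunc() { export myVar="myVal"; }
    myFunc
    echo "in the script:       $myVar"                # myVal
    bash -c 'echo "in a child process: $myVar"'       # myVal, inherited via the environment

Running the script normally leaves myVar unset in the calling interactive shell; sourcing it with . ./script.sh defines the variable there as well.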
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/423854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
423,859
Can someone explain what this means in a shell script?
while read -r line
do
    if [ "${line#*'Caused By'}" != "$line" ]; then
        echo "Yes"
    fi
done
${line#*'Caused By'} is a specific instance of the variable substitution ${parameter#word} (as it's written in the bash manual, and also in the POSIX standard for the sh shell). In ${parameter#word} , the pattern word will be removed from the beginning of the value of $parameter . It's called "Remove Smallest Prefix Pattern" because it will remove the shortest matching prefix string that matches the pattern in word (with ## in place of # it removes the longest matching prefix string). It this specific example, the string Caused by (and anything before it, thanks to the * ) is, if it exists, removed from the value of $line . The single quotes around the string are redundant. By comparing the result of the substitution with the value of the variable itself, the test determines whether the value of $line contains the text Caused by , and prints Yes if it does. This has the same effect as if [[ "$line" == *'Caused by'* ]]; then echo 'Yes'fi in bash , ksh93 or zsh , or case "$line" in *'Caused by'*) echo 'Yes'esac in any sh shell. The loop in the question reads "lines" from standard input. See the question " Understanding "IFS= read -r line" " for a discussion about this.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275775/" ] }
423,894
What *nix command would cause the hard drive arm to rapidly switch between the centre and the edge of the platter? In theory it should soon cause a mechanical failure. It is for an experiment with old hard drives.
hdparm --read-sector N will issue a low-level read of sector N bypassing the block layer abstraction. Use -I to get the device's number of sectors.
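A small sketch that bounces the heads between the first and last sector (run as root, on a disposable drive only; the device name and the last-sector value are placeholders to fill in from hdparm -I):

    #!/bin/sh
    dev=/dev/sdX           # placeholder device name
    last=1953525167        # placeholder: sector count from "hdparm -I" minus one
    while :; do
        hdparm --read-sector 0       "$dev" >/dev/null
        hdparm --read-sector "$last" "$dev" >/dev/null
    done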
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/423894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
423,900
How can I do {$several_commands} | less and have less consider it as several files and enable navigation using :n and :p . That may not be the clearest explanation, so let us consider an example. I currently have a function svndiff () { for a in `svn status | \grep ^M | sed 's/M //'`; do svn diff $a | less; done } The purpose obviously is to see with less the difference of all my modified files. But with this syntax, I have to use the key Q to close one "file" and open the next one. I would like to be able to navigate between files with the less commands :n (next file) and :p (previous file). How can I do that?
You could use process substitution: less -f <(svn diff this) <(svn diff that) But that's hard to use in a loop. Probably best to just use temporary files: #!/bin/bashdir=$(mktemp -d)outfiles=()IFS=$'\n'set -f for file in $(svn status | \grep ^M | sed 's/M //') ; do outfile=${file#.} # remove leading dot (if any) outfile=${outfile//\//__} # replace slashes (if any) with __ svn diff "$file" > "$dir/$outfile"; outfiles+=("$dir/$outfile") # collect the filenames to an arraydoneless "${outfiles[@]}"rm -r "$dir" The above tries to keep the filenames visible in the names of the temp files, with some cleanup for slashes and leading dots. (In case you get paths like ./foo/bar . I can't remember how svn outputs the file names, but anyway...) The array is there to keep the order, though as @Kusalananda said, we could just do "$dir"/* instead, if the order doesn't matter. set -f and IFS=$'\n' in case someone creates file names with glob characters or white space. Of course we could simplify the script a bit and create, say numbered temp files instead.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165967/" ] }
423,935
I use Ubuntu 16.04 with Nginx and a few WordPress sites. Sometimes I don't visit a site for a long time (>=1 month) and it might be that the site is down. I'm looking for a small utility that will email my Gmail account, if one of my Nginx-WordPress sites is down (without mentioning a reason). Approaches considered so far 1. Creating a tool from scratch Creating the whole non-default configuration for my SMTP server. Adding anc configuring DNS recors at the hosting providers DNS management tool. Adding a weekly cron task with curl -l -L on each domain and save it's output into a file. Adding a weekly cron task of say one hour later, to check each file and email myself if the status code isn't 200. This might seem simple, but is actually quite complex (though not necessarily complicated), and it also might be a bit fragile. A dedicated, communal, maintained utility might be better for me. 2. Third party tools I don't want to use some grandiose, third-party network-monitoring service like Nagios, Icinga, Zabbix, Shinken, etc, and they all seem an overkill per this particular cause. 3. Postfix add-on I've already installed Postfix with the internet-site configuration so that tool might utilize Postfix. I just use the Postfix defaults, some default conf I could add on top of internet-site , maybe without adding and configuring DNS records. A utility which is an interactive program to re-configure Postfix might ease my pain; I wouldn't have to fill my Ubuntu-Nginx-WordPress-Environment installation-script with much SMTP configuration data. Maybe I'll just have to set some DNS records after that, and that's it. Anything that would ease the process this way or another is also an option for me. 4. Handling the spam filter Even if Gmail would mistakenly move my first email (or the first series of email) to spam, I could put it into a whitelist. My question Is there a utility I could use to have this behavior?
Best bet is to use a service like uptime robot . Free tier will cover less than 50 sites, pro plan is quite cheap. It'll do a simple ping check or even HTTP status code check The upshot of this is that you're not adding an additional point of failure (that you can control). You've no longer got to maintain and update a monitoring service
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
423,955
I want to write a simple script that does the following: Reads metadata from an audio file (WAV, FLAC, MP3 and AIFF) Returns an error message if the bitrate is below a thershold Renames the file to be in a specific format eg artist-title-year-etc Moves it to another folder I have very limited shell scripting experience, but I wanted to know if anyone could point me in the right direction, particularly for reading the metadata. If someone can propose an alternative way to writing a shell script that would also be useful!
I like your attitude because you aren't asking anyone to 'do your homework' and spoon-feed an answer. You will want to use a program such as exiftool which reads and outputs a file's metadata. In the case of exiftool you can select which metadata tags to output, eg exiftool -maxbitrate filename . First run the program on a sample file without any options in order to browse the available tags, and then select what's of interest to you. Note that although tags may display capitalized and with embedded spaces, you use them programmatically without spaces and case-insensitive, eg. the metadata tag "Max BitRate" would get specified as command line option -maxbitrate . If you do choose exiftool , you can save many steps if you take advantage of its option -printFormat to customize the output to help you get the metadata elements you want, in the format you want, for renaming the file. An example usage of this feature is exiftool -Bitdepth -MaxBitRate -p 'blah $Bitdepth blah $Maxbitrate' your_file.mp3 . Read the man page for details. For other metadata programs, you may need to parse your results using a second program such as awk to get only the data field of interest, in your case the bit rate, and use your shell's arithmetic comparison tests, such as -lt or -gt to flag an error. As for the rest of your script's requirements, the rename and move operation can probably be done in one step. The challenge will be to get the information you're looking for. Again, your chosen metadata program (eg. exiftool ) will get you that information.
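A rough skeleton of the whole workflow with exiftool; the tag names (AudioBitrate, Artist, Title, Year), the threshold and the destination directory are placeholders, so inspect a sample file first, as suggested above, to see which tags your formats actually expose:

    #!/bin/sh
    f=$1
    dest=$HOME/sorted     # placeholder target directory
    min_kbps=320          # placeholder threshold
    kbps=$(exiftool -s3 -AudioBitrate "$f" | awk '{print int($1)}')
    if [ -z "$kbps" ] || [ "$kbps" -lt "$min_kbps" ]; then
        echo "error: bitrate of $f is below $min_kbps kbps" >&2
        exit 1
    fi
    new=$(exiftool -p '${Artist}-${Title}-${Year}' "$f")
    mkdir -p "$dest"
    mv -- "$f" "$dest/$new.${f##*.}"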
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275838/" ] }
423,958
The question " What is the purpose of .bashrc and how does it work? " sought the purpose and use of .bashrc . Another file with a similar name is .bash_logout . Should this file exist in the first place? If so, what is the role of this file?
The .bash_logout file does not have to exist. Its contents is sourced by bash when a bash login shell exits. The file makes it possible to do, for example, various forms of cleanup when logging out from a terminal session. It may be used to execute any shell code, but may be used to e.g. clear the screen if logins are done in a non-GUI environment. Some may also find it useful for explicitly terminating programs that were started from .bash_login or .bash_profile (if, for example, fetchmail or some similar process is started as a user daemon or in the background, it may be good to terminate it in .bash_logout ). The csh shell has a similar file called .logout and the corresponding file for the zsh shell is called .zlogout . The ksh shell has to my knowledge no similar functionality. See also the tangentally related question Difference between Login Shell and Non-Login Shell?
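As a concrete illustration, here is a small example ~/.bash_logout (the clear_console path matches Debian's default and may differ elsewhere; fetchmail is only an example of a user daemon):

    # ~/.bash_logout
    # when leaving the console, clear the screen to increase privacy
    if [ "$SHLVL" = 1 ]; then
        [ -x /usr/bin/clear_console ] && /usr/bin/clear_console -q
    fi
    # stop a user daemon that was started from .bash_profile (fetchmail is only an example)
    pkill -u "$USER" fetchmail 2>/dev/null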
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147432/" ] }
423,962
Edit: This is a duplicate of https://stackoverflow.com/questions/998626/meaning-of-tilde-in-linux-bash-not-home-directory/ . I don't have the reputation to close this question as duplicate. I'm not referring to ~ as in the home directory but rather this: $ ls ~foo/bar/some/mount/point/foo/bar However if I attempt it with a different mount point, e.g.: $ mount | ag "/dev "devfs on /dev (devfs, local, nobrowse)$ ls /dev/stdin/dev/stdin$ ls ~stdinzsh: no such user or named directory: stdin . # bash has a similar error message: ls: ~stdin: No such file or directory What is the ~ called in this context? How does it work? Edit:More information based on some of the comments below: I can attest that foo is not a username on my system. When attempting to autocomplete ls -lah ~ not all options are shown. i.e. I'm able to cd ~qux , when qux doesn't show up in the autocomplete. Again qux is not a user in my system. If it matters /some/mount/point is a network share. All of the details suggest some named path muckery, a Z shell feature of pathname expansion, but this works in bash as well, which apparently doesn't support things like the Z shell's named paths.
What is ~foo Quote from bash manual (with added emphasis): If a word begins with an unquoted tilde character (`~'), all of the characters preceding the first unquoted slash (or all characters, if there is no unquoted slash) are considered a tilde-prefix.If none of the characters in the tilde-prefix are quoted, the characters in the tilde-prefix following the tilde are treated as a possible login name. ~foo expands to foo user's home directory exactly as specified in /etc/passwd . Note, that this can include system usernames; it doesn't necessarily mean human users or that they can actually log in locally ( they can log in via SSH keys for instance). In fact, as noted in the comments , bash will use getpwnam function. That function itself is specified by POSIX standard, hence should exist on most Unix-like systems, including macOS X . This function isn't limited to /etc/passwd only and searches other databases, such as LDAP and NIS. Particular excerpt from bash source code , tilde.c file, starting at line 394: /* No preexpansion hook, or the preexpansion hook failed. Look in the password database. */ dirname = (char *)NULL;#if defined (HAVE_GETPWNAM) user_entry = getpwnam (username);#else user_entry = 0; Practical example Below you can see tests with system usernames on my system. Pay attention to corresponding passwd entry and result of ls ~username $ grep '_apt' /etc/passwd_apt:x:104:65534::/nonexistent:/bin/false$ ls ~_aptls: cannot access '/nonexistent': No such file or directory$ grep '^lp' /etc/passwdlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin$ ls ~lpls: cannot access '/var/spool/lpd': No such file or directory Even if for instance _apt account is locked as suggested by output of passwd -S apt it is still showing up as possible login name: _apt L 11/29/2017 0 99999 7 -1 Please note: This is not macOS specific feature, but rather shell-specific feature.
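To see the same lookup that getpwnam performs, you can compare the shell's tilde expansion with the account database directly (on systems that provide getent; lp is just an example account):

    $ echo ~lp
    /var/spool/lpd
    $ getent passwd lp | cut -d: -f6
    /var/spool/lpd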
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34275/" ] }
423,964
I would like to use find to return files with a specific filename, in this case: Files that do not begin with _ . I learned I could use find . ! '(' -name '_*' ')' But, I want it to return the filenames without the preceding pathname. Is there any way I could use cmd basename to do this in bash? I also welcome any other command, like ls that can do this.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/423964", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274884/" ] }
423,965
I'd like to create a filesystem image as a non-root user, the problem I'm facing is that the way I'm currently doing it, it needs a mount and as a non-root can't do a mount. This is what I'm now doing: #!/bin/shIMG="app.ext4"LBL="PIHU_APP"MNT="/mnt/PIHU_APP"APP="/home/buildroot/board/arctura/PIHU_APP/application"#64MB:dd if=/dev/zero of=$IMG bs=4096 count=16384mkfs.ext4 $IMGe2label $IMG $LBLmkdir -p $MNTmount -o loop $IMG $MNTcp -r $APP $MNTsyncumount $MNT I have full root access, so I can setup/prepare anything, but the script will be executed from a non-root account. However, I have a feeling there may be a better way that might not even need mounting, but I'm not sure..
mke2fs -d minimal runnable example without sudo mke2fs is part of the e2fsprogs package. It is written by the famous Linux kernel filesystem developer Theodore Ts'o who is at Google as of 2018, and the source upstream is under kernel.org at: https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs Therefore, that repository can be considered a reference userland implementation of ext file system operations: #!/usr/bin/env bashset -euroot_dir=rootimg_file=img.ext2# Create a test directory to convert to ext2.mkdir -p "$root_dir"echo asdf > "${root_dir}/qwer"# Create a 32M ext2 without sudo.# If 32M is not enough for the contents of the directory,# it will fail.rm -f "$img_file"mke2fs \ -L '' \ -N 0 \ -O ^64bit \ -d "$root_dir" \ -m 5 \ -r 1 \ -t ext2 \ "$img_file" \ 32M \;# Test the ext2 by mounting it with sudo.# sudo is only used for testing.mountpoint=mntmkdir -p "$mountpoint"sudo mount "$img_file" "$mountpoint"sudo ls -l "$mountpoint"sudo cmp "${mountpoint}/qwer" "${root_dir}/qwer"sudo umount "$mountpoint" GitHub upstream . The key option is -d , which selects which directory to use for the image , and it is a relatively new addition to v1.43 in commit 0d4deba22e2aa95ad958b44972dc933fd0ebbc59 Therefore it works on Ubuntu 18.04 out of the box, which has e2fsprogs 1.44.1-1, but not Ubuntu 16.04, which is at 1.42.13. However, we can do just like Buildroot and compile it from source easily on Ubuntu 16.04 with: git clone git://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.gitcd e2fsprogsgit checkout v1.44.4./configuremake -j`nproc`./misc/mke2fs -h If mke2fs fails with: __populate_fs: Operation not supported while setting xattrs for "qwer"mke2fs: Operation not supported while populating file system when add the option: -E no_copy_xattrs This is required for example when the root directory is in NFS or tmpfs instead of extX as those file systems don't seem to have extended properties . mke2fs is often symlinked to mkfs.extX , and man mke2fs says that if you use call if with such symlink then -t is implied. How I found this out and how to solve any future problems: Buildroot generates ext2 images without sudo as shown here , so I just ran the build with V=1 and extracted the commands from the image generation part that comes right at the end. Good old copy paste has never failed me. TODO: describe how to solve the following problems: create sudo owned files in the image. Buildroot does it. automatically calculate the minimal required size. Initial estimate with du for file size and find . | wc for directory structure, min that with 32Mb (smaller fails), then double until the command works, is likely a very decent approach. Buildroot used to do this, but stopped for some reason, but easy to implement ourselves. conveniently extract all files from the partition: How can I extract files from a disk image without mounting it? https://superuser.com/questions/826341/access-data-from-a-linux-partition-inside-an-image-without-root-privileges Multiple partitions in one image file See this: https://stackoverflow.com/questions/10949169/how-to-create-a-multi-partition-sd-image-without-root-privileges/52850819#52850819
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/423965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227344/" ] }
424,033
As we know, optional is one of the control values in PAM's configuration files. From linux-pam.org : optional: the success or failure of this module is only important if it is the only module in the stack associated with this service+type. I'm confused. Here's the /etc/pam.d/login :
session    required   pam_selinux.so open
session    required   pam_namespace.so
session    optional   pam_keyinit.so force revoke
session    include    system-auth
session    include    postlogin
-session   optional   pam_ck_connector.so
I see two rules with optional control with just actions. I assume we only use optional for non-authenticated purpose rules. Is that right?
Important note: optional modules won't be ignored, they will be processed, their results will be ignored, i.e., even if they fail, the authentication process won't be aborted. There are many situations where you may want an action to be performed (a module to be executed) during authentication but, even in case of fail, you don't want that the authentication process get aborted. One practical example is if you want to use pam to automatically open a dm-crypt encrypted device during the login using the same password as the user's password: auth optional pam_exec.so expose_authtok quiet /usr/sbin/cryptsetup --allow-discards open UUID=... /home/username Note that if required is used instead of optional here, the first login will succeed as cryptsetup will return 0 as its exit code, but if the user logs out and then logs in again, the login would fail as the device is already open and cryptsetup will return a non-zero exit code. Yet, in this case you still would want the login to succeed. This is just one example that came to my mind because I actually use it, i.e., this is not a theoretical situation, but this is one among many many situations where you want that a failed module won't abort the authentication process.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/238990/" ] }
424,056
I read that it is a best practice to double quote all of your variable expansions in Bash. I also read that one cannot use a shell glob (wildcard( * )) right after a double quoted variable expansion. These situations conflict when one wants use the value of a regex pattern, expanded out of a variable. The reason I would sometimes want to combine regex with a wildcard is to keep my regex minimal, neater, in my personal taste. My particular problem I downloaded phpmyadmin into my document root directory and unzipped it, but I fail to rename it with mv by a regex pattern I put in a variable, and available when expanding its variable. Here's the exact trace: userName@compName:/var/www/html# lltotal 11336drwxr-xr-x 3 root root 4096 Feb 14 07:04 ./drwxr-xr-x 3 root root 4096 Feb 14 06:56 ../-rw-r--r-- 1 root root 612 Feb 14 06:57 index.nginx-debian.htmldrwxr-xr-x 12 root root 4096 Dec 23 08:50 phpMyAdmin-4.7.7-all-languages/-rw-r--r-- 1 root root 11589684 Dec 23 14:08 phpMyAdmin-latest-all-languages.zipuserName@compName:/var/www/html# echo $pma[pP][hH][pP][mM][yY][aA][dD][mM][iI][nN]userName@compName:/var/www/html# mv "$pma"*/ phpmyadmin/mv: cannot stat '[pP][hH][pP][mM][yY][aA][dD][mM][iI][nN]*/': No such file or directory If I'll unquote the variable expansion to ${pma} I would indeed be able to combine the variable expansion and the regex, as in ${pma}* , but it is important to me to follow the best practice without exceptions, if I can. My question How could I keep the variable expansion double quoted, but still using the extracted value with a wildcard?
A filename globbing pattern kept in a variable will not glob filenames if double quoted. It is the double quoting that stops the filenames from being globbed, not the * wildcard at the end or the combination of quoting and * . We often tell users on the site to "quote their variables", and we do so because the values of unquoted variables undergo word splitting and file name globbing, and this is usually not what's wanted. For example, a password may be [hello] world* and on a command line containing -p $password that would do "interesting" things depending on what files were present in the current directory (and it may well not work at all due to the space). See also the question " Security implications of forgetting to quote a variable in bash/POSIX shells " What you want to do here is the opposite of what we usually want to avoid , namely invoking file name globbing using the file name globbing pattern in your variable. If you truly can not rely on the name of the extracted directory to remain stable, a better (as in "cleaner") solution would possibly be to make sure that it is the only thing in a temporary directory, and then just use * to move it into place (possibly changing its name in the process, so that you know for sure what it is called). This is better than simply removing the quotes around your variable, as the filename globbing pattern might match other names than the single one that you expect, depending on what other things are available in the directory. This is a variation of my answer to your previous question : #!/bin/sh -exdestdir="/var/www/html/phpmyadmin"tmpdir=$(mktemp -d)trap 'rm -rf "$tmpdir"' EXIT # remove temporary directory on terminationwget -O "$tmpdir/archive.zip" \ "https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.zip"cd "$tmpdir" && { unzip archive.zip rm -f archive.zip # The only thing in the current (temporary) directory now is the # directory with the unpacked zip file. mkdir -p "$destdir" mv ./* "$destdir"/phpmyAdmin} The above may be distilled into five steps: Create an empty directory (and change current working directory to it). Fetch the archive. Extract the archive. Remove the archive. Change the name of the lone folder now in the otherwise empty directory.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
424,090
I have been asked by a customer to provide a list of all URLs that my company's Linux server connects to in order to complete a software update, including the individual TCP/UDP ports that these connections use. This is so that the customer can create a custom connection for our server to the internet. I have tried using netstat to get some kind of log but cannot get it to show full URLs; it will only show IP addresses. This is the command I have used so far: netstat -putwc Any help would be greatly appreciated. Thanks.
1. All of the URLs that the VOCOVO unit needs to connect to.
2. TCP, UDP ports that these connections will use.
3. Bandwidth required for each transaction.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424090", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166473/" ] }
424,091
I've made a .sh file which sends me a mail if disk usage is above a certain level. The script is working fine, but the mail goes to spam instead of the inbox! What should I do? Do I need to configure anything on the server? I'm new to Linux servers. Here is the script:
#!/bin/sh
current_usage=$( df -h | grep '/var' | awk '{percent+=$4;} END{print percent}' | column -t )
max_usage=50
if [ $current_usage -ge $max_usage ]; then
    mailbody="Max usage exceeded. Your disk usage is at ${current_usage}."
    echo "Sending mail..."
    echo ${mailbody} | mail -s "Disk alert!" "[email protected]"
elif [ ${current_usage%?} -lt ${max_usage%?} ]; then
    echo "No problems. Disk usage at ${current_usage}." > /dev/null
fi
The mail looks like: (screenshot not included)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275929/" ] }
424,099
So I tried to install epel on a CentOS7 server. I ran sudo yum install epel-release as per these instructions . But now whenever I use yum for instance with yum repolist all I get an error: Repository epel is listed more than once in the configuration yum advises to disable the repository with : yum-config-manager --disable <repoid> /etc/yum.repos.d has : epel.repo , epel-testing.repo and localc7.repo if that's of any help
This error usually occurs when you have two repos with the same name. I think you may have named both the epel entries the same. Try going into /etc/yum.repos.d and look at both the epel files to check that the names are different. cd /etc/yum.repos.dcat epel.* Verify that they have different names. The line you are interested in is: name=SomeName If they share the same name, just change one so that they are different.
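To compare the two files at a glance (a sketch assuming the files are epel.repo and epel-testing.repo as listed above), something like this prints the section headers and name lines from both:
grep -E -H '^(\[|name=)' /etc/yum.repos.d/epel*.repo
Each output line is prefixed with the file it came from, which makes a duplicated entry easy to spot.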
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424099", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275943/" ] }
424,130
This is my Git repository: https://github.com/benqzq/ulcwe It has a dir named local and I want to change its name to another name (say, from local to xyz ). Changing it through GitHub GUI manually is a nightmare as I have to change the directory name for each file separately (GitHub has yet to include a "Directory rename" functionality, believe it or not). After installing Git, I tried this command: git remote https://github.com/benqzq/ulcwe && git mv local xyz && exit While I didn't get any prompt for my GitHub password, I did get this error: fatal: Not a git repository (or any parent up to mount point /mnt/c)Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). I know the whole point in Git is to download a project, change, test, and then push to the hosting provider (GitHub in this case), but to just change a directory, I desire a direct operation. Is it even possible with Git? Should I use another program maybe?
The fatal error message indicates you’re working from somewhere that’s not a clone of your git repository. So let’s start by cloning the git repository first: git clone https://github.com/benqzq/ulcwe.git Then enter it: cd ulcwe and rename the directory: git mv local xyz For the change to be shareable, you need to commit it: git commit -m "Rename local to xyz" Now you can push it to your remote git repository: git push and you’ll see the change in the GitHub interface.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/424130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
424,142
Some days ago I downloaded a .deb file that does not have a descriptive name and I want to know which version it is before executing dpkg -i . I do not know if the same package also comes in a repository, so I am looking to extract this information from the actual file, rather than querying the repository's database.
To get lots of information about the package use -I or --info : dpkg-deb -I package.debdpkg-deb --info package.deb To only get the version use, -f or --field : dpkg-deb -f package.deb Versiondpkg-deb --field package.deb Version
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/424142", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/275973/" ] }
424,183
I want to set up an alias in my config file that has the same result as this command: ssh -N devdb -L 1234:127.0.0.1:1234 My .ssh/config entry for devdb : Host devdbUser someuserHostName the_hostnameLocalForward 1234 127.0.0.1:1234 What do I put in the above config to not start a shell?
So in ssh.c for OpenSSH 7.6p1 we find case 'N': no_shell_flag = 1; options.request_tty = REQUEST_TTY_NO; so -N does two things: the no_shell_flag only appears in ssh.c and is only enabled for the -W or -N options; otherwise it appears in some logic blocks related to ControlPersist and sanity checking involving background forks. I do not see a way an option could directly set it. According to readconf.c the request_tty corresponds to the RequestTTY option detailed in ssh_config(5) . This leaves (apart from monkey-patching OpenSSH and recompiling, or asking for a ssh_config option to toggle no_shell_flag with...) something like: Host devdb User someuser HostName the_hostname LocalForward 1234 127.0.0.1:1234 RequestTTY no RemoteCommand cat This technically does start a shell, but that shell should immediately replace itself with the cat program, which should then block, allowing the port forward to be used in the meantime. cat is portable, but will consume input (if there is any) or could fail (if standard input is closed). Another option would be to run something that just blocks.
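If cat is undesirable, a hedged variation is to run something that blocks without reading standard input; for example, assuming the remote side has GNU coreutils (whose sleep accepts infinity):
Host devdb
    User someuser
    HostName the_hostname
    LocalForward 1234 127.0.0.1:1234
    RequestTTY no
    RemoteCommand sleep infinity
On systems without GNU sleep, something like RemoteCommand while :; do sleep 86400; done should behave similarly. Newer OpenSSH releases (8.7 and later, if I remember correctly) also added a SessionType none directive, which is the direct ssh_config equivalent of -N and avoids a remote command altogether.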
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/424183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68394/" ] }
424,184
For example, I want to write a command that shows the system time&date. Then I want the output be like this The system time is Mon Jan 01 01:01:01 AST 2011. I know the command that shows the system time is date but how to add "The system time is" at the front of the output? should it be echo The system time is + %#%@^ + date stuff like that?
A simple way would be: printf "The system time is %s.\n" "$(date)" You could also use string interpolation: echo "The system time is $(date)."
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273529/" ] }
424,216
Problem I have a script that accepts a few different (optional) command line arguments. For one particular argument, I'm getting the value "less" appear but I don't know why. Bash Code while getopts ":d:f:p:e:" o; do case "${o}" in d) SDOMAIN=${OPTARG} ;; f) FROM=${OPTARG} ;; p) PAGER=${OPTARG} ;; e) DESTEXT=${OPTARG} ;; *) show_usage ;; esacdonesource "./utils.sh"test #test includesecho "$SDOMAIN is the sdomain"echo "$FROM is the from"echo "$PAGER is the pager"echo "$DESTEXT is the extension"exit Output When I run my script, this is what I see: lab-1:/tmp/jj# bash mssp.sh -d testdomain.net Utils include worked! testdomain.net is the sdomain is the from less is the pager is the extension I can't see why I'm getting the "less" value in pager. I was hoping to see empty string. If you can see my bug, please let me know. I've been looking at this too long.
Your script never sets PAGER, but that variable is likely exported from your current environment; check with declare -p PAGER . I would recommend using a different variable name inside your script; this is why there's a general recommendation against using upper-case variables as your own.
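A minimal sketch of the rename (the lower-case names are illustrative, not required; any name that is not an exported environment variable will do):
while getopts ":d:f:p:e:" o; do
  case "${o}" in
    d) sdomain=${OPTARG} ;;
    f) from=${OPTARG} ;;
    p) pager=${OPTARG} ;;
    e) destext=${OPTARG} ;;
    *) echo "usage: $0 [-d sdomain] [-f from] [-p pager] [-e ext]" >&2; exit 1 ;;
  esac
done
echo "${sdomain-} is the sdomain"
echo "${pager-} is the pager"
Alternatively, an unset PAGER near the top of the script, before the getopts loop, would also clear the inherited value, though renaming is the cleaner fix.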
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52684/" ] }
424,341
Instead of just mounting tmpfs on /var/log I want to use overlayfs. /var/log should be a writable tmpfs, but the files that were there before the tmpfs mount should still be visible. These old files are not in the tmpfs memory but in the lower layer. Only changes are stored in tmpfs, while old and unmodified files stay on the SSD. Sometimes it should be possible to write the changes to the SSD, for example via cron; this should free up tmpfs memory. So, the result should be: logs written to RAM, old and new boot logs accessible via the same path, and changes written to disk from time to time by a script. The point is to speed things up a little and save the SSD from many writes. (I saw a similar thing in Puppy Linux, not for logs, but for all changes to root; without installing it I can't do the same, and the documentation doesn't help.) I will do the same for browser cookies/cache based on the answer, but the persistent write will be done on browser close. I can't turn off the browser cache; I need at least a small cache so that my web development shows the same cache-related bugs that users can have.
I managed to make the /var/log overlay; it shows the SSD log files together with any changes. All changes are kept in RAM. Later I'll do syncing, so changes become permanent every hour, by copying the upper layer to the lower one. #prepare layerssudo mkdir -p /var/log.tmpfssudo mount -t tmpfs -o rw,nosuid,nodev,noexec,relatime,size=512m,mode=0775 tmpfs /var/log.tmpfssudo mkdir -p /var/log.tmpfs/uppersudo mkdir -p /var/log.tmpfs/worksudo chown -R root:syslog /var/log.tmpfssudo chmod -R u=rwX,g=rwX,o=rX /var/log.tmpfs#prepare overlaysudo mkdir -p /var/log.overlaysudo chown root:syslog /var/log.overlaysudo chmod u=rwX,g=rwX,o=rX /var/log.overlay#start overlaysudo mount -t overlay -o rw,lowerdir=/var/log,upperdir=/var/log.tmpfs/upper,workdir=/var/log.tmpfs/work overlay /var/log.overlaysudo mount --bind /var/log.overlay /var/log To make the changes persistent, it is necessary to unmount the /var/log bind mount, copy the files, then bind it again.
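A rough sketch of that periodic persist step, using the same paths as above (an untested outline with assumptions: the unmounts will fail if a logging daemon still holds files open under /var/log, and the plain copy ignores overlayfs whiteout entries, so it only works if files in the upper layer are added or appended, never deleted):
#!/bin/sh
# run as root, e.g. hourly from cron
umount /var/log                                       # drop the bind mount so the real lower dir is reachable
umount /var/log.overlay                               # stop the overlay before touching its layers
cp -a /var/log.tmpfs/upper/. /var/log/                # merge the in-RAM changes onto the SSD
rm -rf /var/log.tmpfs/upper/* /var/log.tmpfs/work/*   # free the tmpfs memory
mount -t overlay -o rw,lowerdir=/var/log,upperdir=/var/log.tmpfs/upper,workdir=/var/log.tmpfs/work overlay /var/log.overlay
mount --bind /var/log.overlay /var/log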
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273981/" ] }
424,412
I'm looking for a way to update thousands of .tbz archive files, so I'll be doing this with a shell script. I need to add one file to each. My question is, is there a faster way to do this without extracting each tbz's contents, then re-compressing with the new file included in the contained tar? What would the commands look like? Thanks
While tar can add files to an already existing archive, it cannot be compressed. You will have to bunzip2 the compressed archive, leaving a standard tarball. You can then use tar 's ability to add files to an existing archive, and then recompress with bzip2 . From the manual: -r Like -c, but new entries are appended to the archive. Note that this only works on uncompressed archives stored in regular files. The -f option is required.
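A hedged sketch of the loop (extra.txt stands in for whatever file is being added; bunzip2 turns name.tbz into name.tar, and the final mv restores the .tbz suffix after recompression):
#!/bin/sh
for archive in ./*.tbz; do
    tarball=${archive%.tbz}.tar
    bunzip2 "$archive"            # decompress: name.tbz becomes name.tar
    tar -rf "$tarball" extra.txt  # append the new file to the plain tar
    bzip2 "$tarball"              # recompress: name.tar becomes name.tar.bz2
    mv "$tarball.bz2" "$archive"  # rename back to name.tbz
done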
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276170/" ] }
424,417
How can one extract the exit code of a command and use it in a condition? For example look at the following script: i=4if [[ $i -eq 4 && $(command) -eq 0 ]] then echo "OK" else echo "KO" fi If both the conditions are satisfied the script should do something. In the second condition, I need the check on the exit status (and not the command output as is the case in the script given above), to know if the command was successful or not.
if [ "$i" -eq 4 ] && command1; then echo 'All is well'else echo 'i is not 4, or command1 returned non-zero'fi With $(command) -eq 0 (as in your code) you test the output of command rather than its exit code. Note: I've used command1 rather than command as command is the name of a standard utility.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/424417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271131/" ] }
424,461
I am looking for a way to create a customized function in one of the setting files, so that when I open up a new session, the same customized function can be evaluated (or sourced), and I can use the function easily. I try to create a function to check if ERROR exists in my log file, so when I check the log files, I can just type the function name and the log file name. Now I am using grep: grep ERROR test.txt But I want to make it easier because I have a lot of these checks. So I add this line in .bashrc : ok(){grep ERROR $filename} and when I use the function, I expected to type: ok test.txt and it should give me the ERROR lines, if any. However, after I evaluated the .bashrc file, I got an error message: -bash: .bashrc: line 16: syntax error: unexpected end of file After I typed: ok test.txt , it provides: -bash: ok: command not found Can someone help me with this customized function? Or should I paste my code in another setting file like .bashrc-profile ? Thanks a lot in advance!
The shell is just really picky about the syntax and whitespace with the { ... } construct. These two ways to set up that function would work: ok() { grep ERROR $filename; }ok() { grep ERROR $filename} Regarding braces { .. } vs. parenthesis ( .. ) , Bash's manual states that : The semicolon (or newline) following list is required. and The braces are reserved words, so they must be separated from the list by blanks or other shell metacharacters. List refers to the commands inside the braces, and all of this applies where ever { ... } is used, but functions are probably the most common place. Also, if you want to be able to give the filename as a parameter to the function, use $1 inside it. i.e. ok() { grep ERROR "$1"; } could be used as ok test.txt .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276176/" ] }
424,489
How do I modify the df -h command to add the results of following command (in other words, add them all together and return the result)? $ df -h | awk '{print $2}'Size5.8G64G93M3.9G12G108G16G134G3.7T3.7T6.0T
The human-readable formatting of the numbers gets in the way, so you need to have consistent output first. Portably, you can use the -P option to output in blocks of 1024: df -P | awk 'NR>2{sum+=$2}END{print sum}' If you use GNU df you can specify the --block-size option: df --block-size=1 | awk 'NR>2{sum+=$2}END{print sum}' The NR>2 portion is there to avoid dealing with the Size header line. As for formatting the data back into human-readable form, if you are on Linux you can use the numfmt tool, otherwise implement a converter in awk . See the related answer . Note also that df outputs sizes for all filesystems, including virtual filesystems such as udev and tmpfs . You might consider filtering those out if you want the actual physical disks only. So if we consider only filesystems that have a device file under /dev/ , you could probably use something like this: df -P | awk 'NR>2 && /^\/dev\//{sum+=$2}END{print sum}' With GNU df you could consider using the --local flag as well, to ignore remote filesystems.
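Putting the pieces together for a human-readable total (a sketch that assumes GNU df and GNU coreutils numfmt; the * 1024 converts the 1024-byte blocks reported by -P into bytes):
df -P --local | awk 'NR>2 && /^\/dev\//{sum+=$2} END{print sum * 1024}' | numfmt --to=iec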
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/424489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126111/" ] }