source_id (int64, 1 to 74.7M) | question (string, 0 to 40.2k chars) | response (string, 0 to 111k chars) | metadata (dict) |
---|---|---|---|
520,675 | Consider this simple debian package: wolframscript.deb (to inspect you'll have to click the download link for Linux). After unpacking, it has the following file structure: ├── opt│ └── Wolfram│ └── WolframScript│ └── bin│ └── wolframscript└── usr ├── local │ └── share │ └── man │ └── man1 │ └── wolframscript.1 └── share ├── icons │ └── hicolor │ ├── 128x128 │ │ └── mimetypes │ │ └── application-vnd.wolfram.wls.png │ ├── 32x32 │ │ └── mimetypes │ │ └── application-vnd.wolfram.wls.png │ └── 64x64 │ └── mimetypes │ └── application-vnd.wolfram.wls.png └── mime └── packages └── application-vnd.wolfram.wls.xml The only relevant file is the opt/Wolfram/WolframScript/bin/wolframscript binary (I think). I tried executing this plainly but I get a bash: ./wolframscript: No such file or directory error. How do I make this binary/package usable in NixOS? EDIT: To answer @muru's question: $ file opt/Wolfram/WolframScript/bin/wolframscriptopt/Wolfram/WolframScript/bin/wolframscript: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib $ ldd opt/Wolfram/WolframScript/bin/wolframscript linux-vdso.so.1 (0x00007fff767c9000) libpthread.so.0 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib/libpthread.so.0 (0x00007f55b8525000) librt.so.1 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib/librt.so.1 (0x00007f55b831d000) libdl.so.2 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib/libdl.so.2 (0x00007f55b8119000) libstdc++.so.6 => not found libm.so.6 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib/libm.so.6 (0x00007f55b7d84000) libgcc_s.so.1 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib/libgcc_s.so.1 (0x00007f55b7b6e000) libc.so.6 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib/libc.so.6 (0x00007f55b77ba000) /lib64/ld-linux-x86-64.so.2 => /nix/store/7gx4kiv5m0i7d7qkixq2cwzbr10lvxwc-glibc-2.27/lib64/ld-linux-x86-64.so.2 (0x00007f55b874400 | I presented here a full list of methods to solve your problem, with example files. The two more efficient methods are autoPatchelfHook (prefered, as Vladimír Čunát was suggesting), or eventually steam-run (based on buildFHSUserEnv with lot's of default libraries) when you mostly want a quick-and-dirty-fix. Here is a quick summary: Proper method with autoPatchelfHook NixOs did for us a special "hook" autoPatchelfHook that automatically patches everything for you! You just need to specify it in (native)BuildInputs , and nix does the magic. Put in derivation.nix : { stdenv, dpkg, glibc, gcc-unwrapped, autoPatchelfHook }:let # Please keep the version x.y.0.z and do not update to x.y.76.z because the # source of the latter disappears much faster. version = "12.0.0"; src = ./WolframScript_12.0.0_LINUX64_amd64.deb;in stdenv.mkDerivation { name = "wolframscript-${version}"; system = "x86_64-linux"; inherit src; # Required for compilation nativeBuildInputs = [ autoPatchelfHook # Automatically setup the loader, and do the magic dpkg ]; # Required at running time buildInputs = [ glibc gcc-unwrapped ]; unpackPhase = "true"; # Extract and copy executable in $out/bin installPhase = '' mkdir -p $out dpkg -x $src $out cp -av $out/opt/Wolfram/WolframScript/* $out rm -rf $out/opt ''; meta = with stdenv.lib; { description = "Wolframscript"; homepage = https://www.wolfram.com/wolframscript/; license = licenses.mit; maintainers = with stdenv.lib.maintainers; [ ]; platforms = [ "x86_64-linux" ]; };} and in default.nix : { pkgs ? 
import <nixpkgs> {} }:pkgs.callPackage ./derivation.nix {} Compile with nix-build and then run result/bin/wolframscript Quicker method, with steam-run Nix provides buildFHSUserEnv that fakes a classic Linux. You can use it directly and add the libraries to it, or, if you prefer, steam-run already contains lots of libraries (despite the name it's independent of Steam). Note that this method is heavier and requires a longer startup, so avoid it when possible. You just need to install steam-run (you need to allow unfree software, with { allowUnfree = true; } in ~/.config/nixpkgs/config.nix or, if you use nixos-rebuild , use in your configuration.nix the line nixpkgs.config.allowUnfree = true; ), and then run: steam-run ./wolframscript For more details, see Different methods to run a non-nixos executable on NixOS | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86396/"
]
} |
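As a possible complement to the steam-run shortcut above, here is a minimal sketch of installing and using it from the command line. The channel attribute names are assumptions (check nix-channel --list for your actual channel names), and the binary path is the one from the question.

# hypothetical channel names -- adjust to whatever `nix-channel --list` shows
nix-env -iA nixos.steam-run        # on NixOS with the conventional "nixos" channel
# nix-env -iA nixpkgs.steam-run    # on non-NixOS installs of Nix
steam-run ./opt/Wolfram/WolframScript/bin/wolframscript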
520,706 | For example whoami and date. I can do it this way: whoami > /home/user/folder/file followed by date >> /home/user/folder/file But I'm sure it can be done in one line without typing the path twice. I have tried using | but the first command is always ignored. | Use a subshell: (whoami; date) > ~user/directory/file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354485/"
]
} |
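A brief sketch of the same idea without spawning a subshell, using brace grouping in the current shell; the path is the example path from the question:

{ whoami; date; } > /home/user/folder/file   # group runs in the current shell
(whoami; date)   > /home/user/folder/file    # subshell variant, as in the answer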
520,715 | When I was trying to install Docker on Deepin 15.10 via a package repository, I had this error: Error: could not find a distribution template for Deepin/stable | Since the OP never moved their answer out of the question, here's what they had originally. Since Deepin 15.10, the base is Debian stable, but in the Deepin distribution template it is set as unstable; let's change that. sudo nano /usr/share/python-apt/templates/Deepin.info change Suite: unstable to Suite: stable and voilà, you can now add PPAs without the problem above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354484/"
]
} |
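If you prefer a non-interactive edit instead of nano, a one-line sketch of the same change (back up the template first; the path and field values are taken from the answer above):

sudo cp /usr/share/python-apt/templates/Deepin.info /usr/share/python-apt/templates/Deepin.info.bak
sudo sed -i 's/^Suite: unstable$/Suite: stable/' /usr/share/python-apt/templates/Deepin.info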
520,726 | How can I use ZScaler to access protected websites via Ubuntu without a dedicated Ubuntu client? | My company decided to drop our VPN for ZScaler and being the only person running Linux at my company I was left behind because ZScaler doesn't have a native Linux client. Let me start by saying that this workaround is extremely labyrinthine and equally fragile . It requires two full computers which makes it both expensive and impossible to do on the go so this is not a solution for laptops and remote / traveling employees. For the second computer I recommend picking up an old used computer on eBay to use as the dedicated Windows 10 machine. Otherwise you can repurpose almost any old laptop laying around. Anything with WiFi and an Ethernet port will work. Additionally, I hope that the product team at ZScaler sees the fragility of this workaround (a hack really) and is inspired to create a dedicated Linux client for us diehard Linux guys who just can't go back to macOS once we've switched. Here's what you're going to need for this workaround: A windows 10 laptop fully updated with WiFi and Ethernet capabilities The ZScaler client for Windows 10 A short Ethernet cable for connecting your Linux and Windows 10 machines A long Ethernet cable for connecting your Linux machine to the Internet A USB->Ethernet adapter for giving your Linux desktop a second Ethernet connection Step 1: Connecting to ZScaler on Windows 10 Install the ZScaler client for Windows 10. Login with your credentials and verify you can access internal and/or ZScaler protected websites as well as external websites and the broader Internet. Step 2: Verify all the necessary connections in Windows 10 In order for this to work your Windows 10 computer will need access to the outside internet (WiFi in this example), and the ZScaler adapter, and a local Ethernet connection to share ZScaler over. The below picture shows all of this. Step 3: Prepare your local Ethernet connection Part of the magic of this workaround is directly connecting your Windows 10 machine to your Linux machine via Ethernet and creating a private network between the two. In order to do this, you'll need to enter the properties of the local Ethernet connection's adapter and adjust the IPV4 settings to set a static IP address (very important) and also a subnet mask. I've chosen 192.168.137.1 and 255.255.255.0 respectively and it works great. Any valid internal IP address and subnet mask combination should work fine in theory. Step 4: Sharing the ZScaler connection This is one of the critical parts of the puzzle. Your Linux machine is going to get access to ZScaler via Windows 10 connection sharing. Right click on the ZScaler connection once it's connected and go to 'Properties'. Step 5: Share your ZScaler adapter to your Linux machine To do this, make sure your two computers are connected directly via Ethernet to Ethernet . It should be Linux <- Ethernet -> Windows 10. Then, go to the Sharing tab for the ZScaler adapter properties and share the ZScaler adapter with the Ethernet adapter which bridges your Linux machine to your Windows 10 machine. Step 6: Verify ZScaler access on Linux By now your Linux computer should be connected directly to your Windows 10 Machine and you should be able to resolve your internal website(s) on your Linux machine and nothing else . You should have no internet access. If you do, unplug your adapter(s) that give you internet connection. This is an extremely important step. 
Verify you can only access internal ZScaler-specific targets. If you're still having trouble with this step then try rebooting everything and starting over. Also, double check your static IP configuration on the Windows 10 machine as this doesn't tend to stick between reboots. Step 7: Get internet access In order to get Internet access you'll now need to use your USB->Ethernet adapter and plug it into your Linux machine. You should see services like Slack auto-login once your second Ethernet connection resolves and connects. Step 8: Restore access to ZScaler-protected websites Because plugging in a new internet connection changes your DNS and internet settings configuration at the Linux adapter level you need to restore access to ZScaler-protected assets via IP Tables in Linux. For this you need to know the IP address range of your protected assets, the static IP of your Windows 10 machine, and the device name itself for your internal private connection between Linux and Windows 10. For myself and my company the commands are: sudo ip route add 100.64.0.0/10 via 192.168.137.1 dev eno1 sudo ip route add 172.16.0.0/12 via 192.168.137.1 dev eno1 Where eno1 is the name of the network adapter that is directly connecting Windows 10 to Linux and 192.168.137.1 is the static IP address you configured in Windows 10 and 100.64.0.0/10 and 172.16.0.0/12 are the CIDR ranges for your ZScaler protected assets. You can find the name of the correct adapter using to route your ZScaler traffic through using ifconfig on your Linux machine and substitute in your device hardware name for eno1 . Step 9: Enable access to future ZScaler-protected websites Right now you can only access websites that you've already requested from ZScaler before plugging in your internet connection. This is a DNS issue. In order to fix this, you need to set the Windows 10 machine as your default DNS server so that when you request access to internal websites by name internal.mycompany.com then ZScaler can be used to resolve those hostnames. Once the hostname is resolved and the IP address shows a valid internal range you configured previously via IPTables then the traffic should be routed to the eno1 or equivalent adapter and then to ZScaler to load. You should see two wired connections now in Linux when you're done. Go ahead and edit the 'PCI Ethernet Connected' connection because that's the one we get internet from via our USB->Ethernet adapter. Now we need the static IP address that we chose for our Windows 10 machine on the private network that exists between Linux <-> Windows 10. This is why setting a static IP address is important. We want to hard code this IP address as our DNS server. And that's it! And this is how it works. All DNS requests are sent to ZScaler due to the DNS entry when configuring your network adapter. When a public IP is returned, your regular USB Ethernet connection resolves it successfully. When a private IP is returned, the IPTables forward the request to the adapter you specified when you executed sudo ip route add... . This allows the Windows 10 / ZScaler machine to load the website's content and send it back to you via Windows 10 connection sharing. This is essentially a split connection where all DNS requests are handled by ZScaler (since it is the only one who can resolve and load internal hostnames) but public content is loaded via your Linux USB Ethernet adapter and private content is loaded via your Ethernet<->Ethernet shared connection to Windows 10. What breaks this 'workaround'? 
Any reboot to the Windows 10 machine Power outage. See above. Changes in network topography on the Windows 10 side causing a new network / internet connection Changes in DHCP lease timing / renewing What is sub-optimal about this workaround? All DNS requests go through the ZScaler machine so your once hyper-fast wired connection is now as slow as WiFi for DNS requests at least. You can never turn off your W10 computer ever again otherwise you must perform this ritual every time to get your internet working again for both internal and external hosts. It takes a lot of practice to get this setup working reliably on a day-to-day basis. It took me about two months to master the workflow and resolve issues quickly when they come up. I'll finish by saying that this is an absolute last-resort and that any company looking to switch to ZScaler from a VPN solution should consider the lack of a dedicated Linux client and how that may or may not affect their engineers' ability to get work done. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117633/"
]
} |
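Since the ip route entries from Step 8 do not survive a reboot, here is a small helper script you could re-run after each boot. The interface name eno1, the gateway 192.168.137.1 and the CIDR ranges are the example values from the answer and must be adapted to your own network.

#!/bin/bash
# re-add the routes that send ZScaler-protected ranges to the Windows 10 box
gw=192.168.137.1   # static IP of the Windows 10 machine (example value)
dev=eno1           # Linux-to-Windows Ethernet adapter (example value)
for cidr in 100.64.0.0/10 172.16.0.0/12; do
    sudo ip route replace "$cidr" via "$gw" dev "$dev"
done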
520,733 | I have file with following format : QLA D1 102 213 224 55 9 I need to skip lines which starts with 'Q' or 'L' and delete lines where second column has value greater than 10 and save the entire thing in new file. Example output files : Output file 1 Q L A D 1 10 4 5 5 9 Output file 2145 Code : while read -r line; if [[ $line == "A" ]] ||[[ $line == "Q" ]]||[[ $line == "L" ]] ; then awk '$2 < "11" { print $0 }' test.txtawk '$2 < "11" { print $1 }' test1.txtdone < input.file | My company decided to drop our VPN for ZScaler and being the only person running Linux at my company I was left behind because ZScaler doesn't have a native Linux client. Let me start by saying that this workaround is extremely labyrinthine and equally fragile . It requires two full computers which makes it both expensive and impossible to do on the go so this is not a solution for laptops and remote / traveling employees. For the second computer I recommend picking up an old used computer on eBay to use as the dedicated Windows 10 machine. Otherwise you can repurpose almost any old laptop laying around. Anything with WiFi and an Ethernet port will work. Additionally, I hope that the product team at ZScaler sees the fragility of this workaround (a hack really) and is inspired to create a dedicated Linux client for us diehard Linux guys who just can't go back to macOS once we've switched. Here's what you're going to need for this workaround: A windows 10 laptop fully updated with WiFi and Ethernet capabilities The ZScaler client for Windows 10 A short Ethernet cable for connecting your Linux and Windows 10 machines A long Ethernet cable for connecting your Linux machine to the Internet A USB->Ethernet adapter for giving your Linux desktop a second Ethernet connection Step 1: Connecting to ZScaler on Windows 10 Install the ZScaler client for Windows 10. Login with your credentials and verify you can access internal and/or ZScaler protected websites as well as external websites and the broader Internet. Step 2: Verify all the necessary connections in Windows 10 In order for this to work your Windows 10 computer will need access to the outside internet (WiFi in this example), and the ZScaler adapter, and a local Ethernet connection to share ZScaler over. The below picture shows all of this. Step 3: Prepare your local Ethernet connection Part of the magic of this workaround is directly connecting your Windows 10 machine to your Linux machine via Ethernet and creating a private network between the two. In order to do this, you'll need to enter the properties of the local Ethernet connection's adapter and adjust the IPV4 settings to set a static IP address (very important) and also a subnet mask. I've chosen 192.168.137.1 and 255.255.255.0 respectively and it works great. Any valid internal IP address and subnet mask combination should work fine in theory. Step 4: Sharing the ZScaler connection This is one of the critical parts of the puzzle. Your Linux machine is going to get access to ZScaler via Windows 10 connection sharing. Right click on the ZScaler connection once it's connected and go to 'Properties'. Step 5: Share your ZScaler adapter to your Linux machine To do this, make sure your two computers are connected directly via Ethernet to Ethernet . It should be Linux <- Ethernet -> Windows 10. Then, go to the Sharing tab for the ZScaler adapter properties and share the ZScaler adapter with the Ethernet adapter which bridges your Linux machine to your Windows 10 machine. 
Step 6: Verify ZScaler access on Linux By now your Linux computer should be connected directly to your Windows 10 Machine and you should be able to resolve your internal website(s) on your Linux machine and nothing else . You should have no internet access. If you do, unplug your adapter(s) that give you internet connection. This is an extremely important step. Verify you can only access internal ZScaler-specific targets. If you're still having trouble with this step then try rebooting everything and starting over. Also, double check your static IP configuration on the Windows 10 machine as this doesn't tend to stick between reboots. Step 7: Get internet access In order to get Internet access you'll now need to use your USB->Ethernet adapter and plug it into your Linux machine. You should see services like Slack auto-login once your second Ethernet connection resolves and connects. Step 8: Restore access to ZScaler-protected websites Because plugging in a new internet connection changes your DNS and internet settings configuration at the Linux adapter level you need to restore access to ZScaler-protected assets via IP Tables in Linux. For this you need to know the IP address range of your protected assets, the static IP of your Windows 10 machine, and the device name itself for your internal private connection between Linux and Windows 10. For myself and my company the commands are: sudo ip route add 100.64.0.0/10 via 192.168.137.1 dev eno1 sudo ip route add 172.16.0.0/12 via 192.168.137.1 dev eno1 Where eno1 is the name of the network adapter that is directly connecting Windows 10 to Linux and 192.168.137.1 is the static IP address you configured in Windows 10 and 100.64.0.0/10 and 172.16.0.0/12 are the CIDR ranges for your ZScaler protected assets. You can find the name of the correct adapter using to route your ZScaler traffic through using ifconfig on your Linux machine and substitute in your device hardware name for eno1 . Step 9: Enable access to future ZScaler-protected websites Right now you can only access websites that you've already requested from ZScaler before plugging in your internet connection. This is a DNS issue. In order to fix this, you need to set the Windows 10 machine as your default DNS server so that when you request access to internal websites by name internal.mycompany.com then ZScaler can be used to resolve those hostnames. Once the hostname is resolved and the IP address shows a valid internal range you configured previously via IPTables then the traffic should be routed to the eno1 or equivalent adapter and then to ZScaler to load. You should see two wired connections now in Linux when you're done. Go ahead and edit the 'PCI Ethernet Connected' connection because that's the one we get internet from via our USB->Ethernet adapter. Now we need the static IP address that we chose for our Windows 10 machine on the private network that exists between Linux <-> Windows 10. This is why setting a static IP address is important. We want to hard code this IP address as our DNS server. And that's it! And this is how it works. All DNS requests are sent to ZScaler due to the DNS entry when configuring your network adapter. When a public IP is returned, your regular USB Ethernet connection resolves it successfully. When a private IP is returned, the IPTables forward the request to the adapter you specified when you executed sudo ip route add... . 
This allows the Windows 10 / ZScaler machine to load the website's content and send it back to you via Windows 10 connection sharing. This is essentially a split connection where all DNS requests are handled by ZScaler (since it is the only one who can resolve and load internal hostnames) but public content is loaded via your Linux USB Ethernet adapter and private content is loaded via your Ethernet<->Ethernet shared connection to Windows 10. What breaks this 'workaround'? Any reboot to the Windows 10 machine Power outage. See above. Changes in network topography on the Windows 10 side causing a new network / internet connection Changes in DHCP lease timing / renewing What is sub-optimal about this workaround? All DNS requests go through the ZScaler machine so your once hyper-fast wired connection is now as slow as WiFi for DNS requests at least. You can never turn off your W10 computer ever again otherwise you must perform this ritual every time to get your internet working again for both internal and external hosts. It takes a lot of practice to get this setup working reliably on a day-to-day basis. It took me about two months to master the workflow and resolve issues quickly when they come up. I'll finish by saying that this is an absolute last-resort and that any company looking to switch to ZScaler from a VPN solution should consider the lack of a dedicated Linux client and how that may or may not affect their engineers' ability to get work done. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155574/"
]
} |
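For the filtering described in question 520,733, a minimal awk sketch; it assumes whitespace-separated columns and that "greater than 10" means a numeric comparison on the second field (the question's sample data is ambiguous, so treat this as a starting point rather than a definitive answer):

# keep lines that do not start with Q or L and whose 2nd column is not greater than 10
awk '!/^[QL]/ && !($2+0 > 10)' input.file > output1
# same filter, but print only the first column
awk '!/^[QL]/ && !($2+0 > 10) { print $1 }' input.file > output2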
520,810 | I can't determine exactly what file is eating up my disk. Firstly I used df command to list my directories: devtmpfs 16438304 0 16438304 0% /devtmpfs 16449868 0 16449868 0% /dev/shmtmpfs 16449868 1637676 14812192 10% /runtmpfs 16449868 0 16449868 0% /sys/fs/cgroup/dev/mapper/fedora-root 51475068 38443612 10393632 79% /tmpfs 16449868 384 16449484 1% /tmp/dev/sda3 487652 66874 391082 15% /boot/dev/mapper/fedora-home 889839636 44677452 799937840 6% /home Then I ran du -h / | grep '[0-9\,]\+G' . The problem is I get everything including other directories,so I need to get specifically find /dev/mapper/fedora-root but when I try du -h /dev/mapper/fedora-root | grep '[0-9\,]\+G' I get no results. I need to know what's eating up 79% of directory / How can I solve this? | My magic command in such situation is : du -m . --max-depth=1 | sort -nr | head -20 To use this : cd into the top-level directory containing the files eating space. This can be / if you have no clue ;-) run du -m . --max-depth=1 | sort -nr | head -20 . This will list the 20 biggest subdirectories of the current directory, sorted by decreasing size. cd into the biggest directory and repeat the du ... command until you find the BIG file(s) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354557/"
]
} |
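One detail worth adding for this particular case: since / here is a separate filesystem from /home and /boot (per the df output in the question), GNU du's -x (--one-file-system) flag keeps the scan on the root filesystem only, for example:

# stay on the / filesystem, report MiB, biggest 20 directories first
sudo du -xm --max-depth=1 / | sort -nr | head -20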
520,985 | I have always been using single-quotes for the field separation like: awk -F';' ... Quite new to me is the way of using a backslash like: awk -F\; ... is there a technical difference for either, or is it just a matter of preference? | That's all to do with your shell, not with awk . In Bourne-like shells, \ , '...' and "..." are all quoting operators. Quoting removes the special meaning a character may have in the syntax of the shell. \ quotes a single character (except for newline which it removes instead), '...' and "..." can quote more than one (with "..." not quoting every character). ; is a special character in the syntax of the shell. It's used to separate commands. You want to quote it if you want to pass it verbatim to a command. \; , ';' will do. ";" will also do as ; is not one of those characters that are still special within double quotes, but you'd need "\\" to pass one literal backslash to a command because \ is one of those characters that are still special within "..." (though it's then only special when followed by other special characters within "..." like that " itself). Again that very much depends on the shell. In the rc shell for instance, \ and " are not special let alone quoting characters, -F\; wouldn't work there as the command would be parsed as both the awk -F\ and ... command separated with ; . See How to use a special character as a normal one? for more details. To complicates things further, note that the argument to -F itself also goes through one or two layers of backslash processing by awk . awk processes first the argument it receives to expand ANSI C escape sequences in it. If you use awk -F '\t' or awk -F \\t or awk -F "\\t" or awk -F "\t" , awk receives an argument that contains \t , which it expands to a TAB character. The FS awk variable will contain a TAB character, not \t . With awk -F '\\' , awk receives a \\ argument and sets FS to the \ character. Strictly speaking, awk -F '\' would is unspecified as that escape sequence is unfinished but in practice, except for busybox awk , all awk implementations I know treat it the same as awk -F '\\' . In awk , when FS contains a single character, that character is the field separator. awk -F . splits the records on dot characters. However when FS contains more than one character, it is interpreted as a regular expression. awk -F .. doesn't spilt on sequences of two dots, but on sequences of any two characters as . is the regular expression operator that matches any single character. To split on two dots, you'd need awk -F '[.][.]' or awk -F '\\.\\.' . With awk -F '\\\\' , a literal \\\\ is passed by the shell to awk , awk expands each of those two \\ to \ , so FS becomes \\ , which is treated as a regular expression. \ is also special in the regular expression syntax and is used to remove the special meaning of a character as a regex operator this time. So again, that is splitting on backslash characters, though this time, as a regular expression. 
So, in practice, to split on \ , all of these (in Bourne-like shells) will work: awk -F '\' # FS becomes a single \ except in busybox where it's emptyawk -F "\\" # instead so it's a one-character split on backslashawk -F \\ # and a one-field-by-character split in busyboxawk -F '\\' # FS becomes a single \ in every awk implementationawk -F \\\\ # so one-character split on backslashawk -F "\\\\"awk -F '\\\' # FS is \ on busybox and \\ in other implementationsawk -F \\\\\\ # so one-character split on backslash in busybox andawk -F "\\\\\\" # \\ regex split in other implementations, to the same effectawk -F '\\\\' # FS is \\ in all implementations soawk -F \\\\\\\\ # \\ regex splitawk -F "\\\\\\\" I would advise to use single quotes as they are the most straightforward and least surprising kind of quotes. So here, to split on backslash portably: awk -F '\\' . You can also do things like: awk -v FS='\\' ... Or awk 'BEGIN{FS="\\"} ...' or awk ... 'FS=\\' or: FS='\' awk 'BEGIN{FS = ENVIRON["FS"]} ...' (that one avoiding the extra backslash expansion performed by awk , so need only one backslash). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/520985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
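A quick way to check the backslash-splitting behaviour described above from an interactive shell; the sample string is made up for illustration:

printf '%s\n' 'one\two\three' | awk -F '\\\\' '{ print $2 }'   # regex split on \ ; prints: two
printf '%s\n' 'one\two\three' | awk -F '\\' '{ print $2 }'     # single-character split on \ ; same result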
521,037 | Almost every page I've found is about to automatically start Xorg after login wihout explianation, take ~/.bash_profile for example: if [[ ! $DISPLAY && $XDG_VTNR -eq 1 ]]; then exec xinitfi I suppose $XDG_VTNR could be a variable for obtaining the current TTY number, however, there is already a command called tty , which can meet the same purpose. My questions: What is $XDG_VTNR ? Where and when is it being set? Where can I find the official documentation about this variable? tty is a built-in command while $XDG_VTNR is provided by Xorg, why people choose to use $XDG_VTNR instead of built-in tty ? | What is $XDG_VTNR ? Where and when is it being set? It's set by the pam_systemd PAM module, and is only set on machines which are using systemd, which means that you should not rely on it in your scripts, unless you want to make them depend on systemd. On systems which are using systemd, $XDG_VTNR will be set both in graphical (by lightdm , gdm , etc) and in text-mode sessions (by /bin/login ). Where can I find the official documentation about this variable? In the pam_systemd(8) manpage. tty is a built-in command while $XDG_VTNR is provided by Xorg, why people choose to use $XDG_VTNR instead of built-in tty ? 1) tty is a standalone program, not a built-in, and $XDG_VTNR is not provided by Xorg. 2) Because they're completely different things. As clearly stated in its manpage, tty(1) will tell you the name of the terminal connected to its standard input, not the name of the virtual terminal your GUI session or such may be running on[1]. Consider this: $ script -q /dev/null$ tty/dev/pts/5$ script -q /dev/null$ tty/dev/pts/6$ tty </dev/zeronot a tty [1] for which XDG_VTNR isn't a reliable indicator either. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223471/"
]
} |
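For comparison with the snippet quoted in the question, a tty-based variant of the same ~/.bash_profile check that does not rely on $XDG_VTNR (and therefore not on systemd) might look like this; starting X on tty1 is just the conventional example:

# start X only on the first virtual console and only if no display is running
if [ -z "$DISPLAY" ] && [ "$(tty)" = "/dev/tty1" ]; then
    exec xinit
fi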
521,096 | So some background first: I am attempting to convert a non-encrypted shared folder into an encrypted one on my Synology NAS and am seeing this error: So I would like to locate the offending files so that I may rename them. I have come up with the following grep command: grep -rle '[^\ ]\{143,\}' * but it outputs all files with paths greater than 143 characters: #recycle/Music/TO SORT/music/H/Hooligans----Heroes of Hifi/Metalcore Promotions - Heroes of Hifi - 03 Sly Like a Megan Fox.mp3... What I would like is for grep to split on / and then perform its search. Any idea on an efficient command to go about this (directory easily contains hundreds of thousands of files)? | Try: find /your/path | grep -E '[^/]{143,}$' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96057/"
]
} |
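If you would rather keep everything inside find (GNU find assumed), the same 143-character test on the last path component can be expressed as a regex on the whole path:

# list entries (files and directories alike) whose final path component is 143+ characters
find /your/path -regextype posix-extended -regex '.*/[^/]{143,}'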
521,136 | Under Xorg, I used ~/.Xmodmap in order to be able to type amongst others a German Umlaut (i.e., äüö) using the right Ctrl key and a , u , o , respectively (as well as Shift for capitals): remove Control = Control_Rkeycode 105 = Mode_switchkeysym e = e E EuroSignkeysym c = c C centkeysym a = a A adiaeresis Adiaeresiskeysym o = o O odiaeresis Odiaeresiskeysym u = u U udiaeresis Udiaeresiskeysym s = s S ssharp I haven't found a way to achieve the same under Wayland using xkb . So far, I've only managed to set my keyboard variant to altgr-intl , which then lets me use right Alt + q , for example, to get an ä. Since I'm also using Sway, I can't use Alt + Shift + q though for the capital version, because in Sway this is the shortcut to closing a window - and I don't want to remap this. So, how do I go about putting Umlauts to right- Ctrl + a , u , o , respectively, as I've had it before under Xorg? | Try: find /your/path | grep -E '[^/]{143,}$' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354840/"
]
} |
521,139 | I have a mostly automated media server and currently have everything going into one folder and being sorted out by file extension into the correct locations. At the moment the photos are coming in named "folder.jpg" and I need to rename them to match the movie name. What it looks like now:Before: /Directory/ folder.jpg Movie.mp4 Movie.xml What I need it to look like:After: /Directory/ Movie.jpg Movie.mp4 Movie.xml How would I go about matching the jpg to mp4. | Try: for movie in ./*/*.mp4; do mv -- "${movie%/*}/folder.jpg" "${movie%.mp4}.jpg"; done ${movie%/*} and ${movie%.mp4} are both examples of suffix removal . ${movie%/*} returns the directory that the movie file is in and ${movie%.mp4} returns the name of the movie file minus the extension .mp4 . Example Consider three directories, dir1 , dir2 , and dir3 , with the files: $ ls -1 */*dir1/Animal Crackers.mp4dir1/Animal Crackers.xmldir1/folder.jpgdir2/folder.jpgdir2/Monkey Business.mp4dir2/Monkey Business.xmldir3/Duck Soup.mp4dir3/Duck Soup.xmldir3/folder.jpg Now, run our command: $ for movie in ./*/*.mp4; do mv -- "${movie%/*}/folder.jpg" "${movie%.mp4}.jpg"; done After running our command, the files are: $ ls -1 */*dir1/Animal Crackers.jpgdir1/Animal Crackers.mp4dir1/Animal Crackers.xmldir2/Monkey Business.jpgdir2/Monkey Business.mp4dir2/Monkey Business.xmldir3/Duck Soup.jpgdir3/Duck Soup.mp4dir3/Duck Soup.xml Multiple line version For those who prefer their commands spread over multiple lines: for movie in ./*/*.mp4do mv -- "${movie%/*}/folder.jpg" "${movie%.mp4}.jpg"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341479/"
]
} |
521,174 | I have a text file which is usually filled with multiple lines which I want to "print" with a while loop. The text inside this file contains variables - my problem is, that these variables are not interpreted unlike a similar test-string containing variables stored inside the script. Is it possible to also interpret those variables from my external file or do I have to parse them beforehands etc.? What is the difference between $LINE_INSIDE and $LINE_OUTSIDE ?I tried some suggestions from other questions like ${!varialbe_name} and different constructs with quote signs but with no luck so far. #!/bin/bash # color.sh BLUE='\033[1;34m' NC='\033[0m' # No Color LINE_INSIDE="${BLUE}Blue Text${NC}" echo -e ${LINE_INSIDE} while read LINE_OUTSIDE; do echo -e ${LINE_OUTSIDE} done < text_file Output: Additional Information: I (indeed) also have shell-commands in my input-text-file which should not by executed. Only the variables should be expaned. | It would probably make more sense to write it as: BLUE=$'\033[1;34m'NC=$'\033[0m' # No Coloreval "cat << EOF$(<text_file)EOF" than using a while read loop ( that's not the right syntax for reading lines btw ). Of course that means that code in there would be interpreted. A $(reboot) in there for instance would cause a reboot, but that's more or less what you're asking for. That also assumes the text_file doesn't contain an EOF line. Another approach that would only do variable (environment variable) substitution (and not command substitution for instance) would be to use GNU gettext 's envsubst : BLUE=$'\033[1;34m'NC=$'\033[0m' # No Colorexport BLUE NCenvsubst < text_file Or so that only those two variables are expanded: BLUE=$'\033[1;34m'NC=$'\033[0m' # No Colorexport BLUE NCenvsubst '$BLUE$NC' < text_file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354864/"
]
} |
521,198 | I am currently monitoring directories with subdirectories, and I need to detect file rename. what I do is making a md5sum of all files and a keep a file list on another place, and from time to time check again md5sums if files changes are additions or rename. This process is heavy, I guess there must be a simpler way to just detect rename. | It would probably make more sense to write it as: BLUE=$'\033[1;34m'NC=$'\033[0m' # No Coloreval "cat << EOF$(<text_file)EOF" than using a while read loop ( that's not the right syntax for reading lines btw ). Of course that means that code in there would be interpreted. A $(reboot) in there for instance would cause a reboot, but that's more or less what you're asking for. That also assumes the text_file doesn't contain an EOF line. Another approach that would only do variable (environment variable) substitution (and not command substitution for instance) would be to use GNU gettext 's envsubst : BLUE=$'\033[1;34m'NC=$'\033[0m' # No Colorexport BLUE NCenvsubst < text_file Or so that only those two variables are expanded: BLUE=$'\033[1;34m'NC=$'\033[0m' # No Colorexport BLUE NCenvsubst '$BLUE$NC' < text_file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134693/"
]
} |
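For the rename-detection question (521,198), one common Linux approach is inotify rather than periodic checksumming; a minimal sketch with inotifywait from the inotify-tools package, where the watched path is a placeholder:

# a MOVED_FROM event followed by MOVED_TO for the same file indicates a rename inside the watched tree
inotifywait -m -r -e moved_from -e moved_to --format '%e %w%f' /path/to/dir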
521,200 | I have a process running the foreground. I am wondering if it's possible to exit Bash, without (of course) killing the foreground process and exiting as usual, and without killing the terminal program, or if the connection is remote via a client (like iTerm2 or PuTTY), without killing the said client. I know if I don't have a foreground process running, I can readily send End of Transmission via Ctrl-d or issuing exit . I'd like to know if this is possible with a foreground process running. | It would probably make more sense to write it as: BLUE=$'\033[1;34m'NC=$'\033[0m' # No Coloreval "cat << EOF$(<text_file)EOF" than using a while read loop ( that's not the right syntax for reading lines btw ). Of course that means that code in there would be interpreted. A $(reboot) in there for instance would cause a reboot, but that's more or less what you're asking for. That also assumes the text_file doesn't contain an EOF line. Another approach that would only do variable (environment variable) substitution (and not command substitution for instance) would be to use GNU gettext 's envsubst : BLUE=$'\033[1;34m'NC=$'\033[0m' # No Colorexport BLUE NCenvsubst < text_file Or so that only those two variables are expanded: BLUE=$'\033[1;34m'NC=$'\033[0m' # No Colorexport BLUE NCenvsubst '$BLUE$NC' < text_file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227169/"
]
} |
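For question 521,200, a sketch of one practical way to leave the shell while the foreground job keeps running, using bash job control: press Ctrl-Z first to suspend the job (assumed to be job %1), then:

bg             # resume the suspended job in the background
disown -h %1   # detach it from the shell so it is not sent SIGHUP
exit           # the shell can now exit without killing the job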
521,201 | I know that you can schedule a shutdown for a specific time via shutdown -h 21:45 and that you shouldn't use crontabs for such things because of their repetitive nature. How can I schedule a shutdown for a specific date, like the 31st of August at 20:00? | The at command is for scheduling one-off future executions. e.g.
% at 8pm Aug 31
at> echo hello
at> <EOT>
job 161 at Sat Aug 31 20:00:00 2019
(the "<EOT>" was produced by pressing control-D)
% atq
161 Sat Aug 31 20:00:00 2019 a sweh
You can put your shutdown command here. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354899/"
]
} |
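The at session above can also be fed non-interactively, which is handy for the shutdown case from the question; root privileges for shutdown are assumed here:

echo '/sbin/shutdown -h now' | sudo at 8pm Aug 31
sudo atq   # verify the queued job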
521,269 | ssh clients (by default, at least in Ubuntu 18.04 and FreeBSD 12) always check if server's key fingerprint is in the known_hosts file. I have a host in the LAN which has dual boot; both the OSs use the same static IP. I would like to connect through ssh to both of them, without encountering errors. This obviously violates the checks performed on known_hosts : if I accept one fingerprint, it will be related to the host IP; when OS is switched, the fingerprint changes, while the IP is the same, and I need to manually delete it in known_hosts before being able to connect again. I would like that one fingerprint, or the other, is accepted when considering that IP. Is there a client side solution to overcome this issue? I am using OpenSSH_7.8p1, OpenSSL 1.1.1a-freebsd 20 Nov 2018 and OpenSSH_7.6p1 Ubuntu-4ubuntu0.3, OpenSSL 1.0.2n 7 Dec 2017 . Note : I do not want "no check" over the server's fingerprint. I am just wondering if it is possible to relate two alternative fingerprints (not just one) to server's IP address. | Your problem is that host keys are just that, they are a key for the host. There is really only supposed to be one per host. Of course there are several because there are several types of key, but I would avoid relying on key types to give you multiple acceptable keys for a single host. On the server side My first suggestion is that you consider carefully if you really want to do this on the client side. You could treat these two OS as the same host and simply copy the host key from one to the other. If you copy /etc/ssh/ssh_host* from OpenSSH you can use these on other operating systems. Although they might need some reformatting depending on the SSH server you run. But ... May I ask why you have ruled out server side solutions?. Wouldn't the easiest way be to try to make both OS use the same host key? – Philip Couling @PhilipCouling Partly for ease of use: one of the OSs is Windows. Partly to not transfer keys from a host to another, which is a sometimes discouraged practice. But the main reason is: I would like to obtain some degree of flexibility in ssh client configuration, if it is possible. – BowPark I think that what you are looking for is a way to treat the two OS as different hosts even though they share an IP and port number. On the client side Perhaps the most reliable way will be to set host specific configuration for each OS. Edit (or create) ~/.ssh/config to add: Host windows.dualbootbox Hostname 192.168.10.20 UserKnownHostsFile ~/.ssh/windows.dualbootbox.known_hostsHost ubuntu.dualbootbox Hostname 192.168.10.20 UserKnownHostsFile ~/.ssh/ubuntu.dualbootbox.known_hosts You don't need to specify Hostname if each Host already resolves to an IP. See man ssh_config for more configuration options. With the above configuration you can then either: ssh [email protected] [email protected] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48707/"
]
} |
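A closely related OpenSSH client option is HostKeyAlias, which stores and looks up the host key under an alias instead of the real host name, so both installs can keep distinct entries even in the default known_hosts file. The host names and IP reuse the example values from the answer, and "user" is a placeholder:

# ad-hoc form; the same effect comes from adding "HostKeyAlias windows.dualbootbox"
# (resp. ubuntu.dualbootbox) to the Host blocks shown above
ssh -o HostKeyAlias=windows.dualbootbox user@192.168.10.20
ssh -o HostKeyAlias=ubuntu.dualbootbox user@192.168.10.20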
521,464 | I found this code here https://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x700.html which nicely gives me the number of files in my directory. ls -1 | wc -l but I only want to know how many of those files' names start with 2009 (for example 20091210_005037.nc ). I tried ls -1 | wc -l 2009* but that slowly lists all the files and does not seem to give me a number. | set -- 2009*echo "$#" This sets the list of positional parameters ( $1 , $2 , ..., etc.) to the names matching 2009* . The length of this list is $# . The issue with ls -1 | wc -l 2009* is that you execute wc -l directly on the files matching 2009* , counting the number of lines in each. Meanwhile, ls -1 is trying to write to the standard input of wc , which wc is not reading from since it was given an explicit list of files to work on. You may have wanted to use ls -d 2009* | wc -l . This would have listed all the names that match 2009* (using ls with -d to not list the contents of directories), and would count the number of lines in the output. Note that -1 is not needed if you pipe the result of ls somewhere (unless ls is an alias or shell function that forces column output). Note also that this would give you the wrong count if any filename contains a newline: $ touch '2009> was> a> good> year'$ ls2009?was?a?good?year$ ls -ltotal 0-rw-r--r-- 1 kk wheel 0 May 28 11:09 2009?was?a?good?year$ ls -12009?was?a?good?year$ ls | wc -l 5$ ls -1 | wc -l 5 However: $ set -- 2009*$ echo "$#"1 (using set and outputting $# additionally does not use any external commands in most shells) Using find to count recursively: find . -type f -name '2009*' -exec echo . \; | wc -l Here, we output a dot for each found pathname in or under the current directory, and then we count the number of lines that this produces. We don't count the filename strings themselves, and instead do it this way to avoid counting too many lines if a filename contains newlines. With find we're able to more closely control the type of file that we count. Above, we explicitly test for regular files with -type f (i.e. not directories and other types of files). The * pattern in the shell does not distinguish between directories and files, but the zsh shell can use *(.) to modify the behaviour of the pattern to only match regular files (the zsh user would probably use 2009*(.) instead of 2009* in the non- find variations above and below). Using ** in (with shopt -s globstar in bash , or set -o extended-glob in yash , or in any other shell that may support it), to count recursively: set -- **/2009*echo "$#" The pattern ** matches almost like * , but also matches across / in pathnames. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334419/"
]
} |
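One caveat to the set -- 2009* approach worth spelling out: if nothing matches, most shells leave the literal pattern in place and $# reports 1. In bash you can guard against that with nullglob, roughly:

shopt -s nullglob   # unmatched globs expand to nothing instead of themselves
set -- 2009*
echo "$#"           # now prints 0 when there are no matching names
shopt -u nullglob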
521,497 | I have a script that have "while true" loop. And I want to run that script from cron on every minute, so that when the process is killed (or is failed - no matter why) cron will run the script again. But when I'm checking the ps -aef --forest there is my process runned by /usr/sbin/CROND -n . This wasn't be bad for cron or system? Or maybe I should do it differently? | Maybe a short example for a systemd service will do. This is our infinite script, location /path/to/infinite_script , executable bit set: #!/bin/bashwhile ((1)) ; do date >> /tmp/infinite_date sleep 2done No we need to define a service file: [Unit]#just what it doesDescription= infinite date service[Service]#not run by root, but by meUser=fiximan#we assume the full service as active one the script was startedType=simple#where to find the executableExecStart=/path/to/infinite_script#what you want: make sure it always is runningRestart=always[Install]#which service wants this to run - default.target is just it is loaded by defaultWantedBy=default.target and place it in /etc/systemd/system/infinite_script.service Now load and start the service (as root): systemctl enable infinite_script.servicesystemctl start infinite_script.service The service is running now and we can check its status systemctl status infinite_script.service● infinite_script.service - infinite date service Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2019-05-28 14:18:52 CEST; 1min 33s ago Main PID: 7349 (infinite_script) Tasks: 2 (limit: 4915) Memory: 1.5M CGroup: /system.slice/infinite_script.service ├─7349 /bin/bash /path/to/infinite_script └─7457 sleep 2Mai 28 14:18:52 <host> systemd[1]: Started infinite date service. Now if you kill the script ( kill 7349 - main PID) and check the status again: ● infinite_script.service - infinite date service Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2019-05-28 14:22:21 CEST; 12s ago Main PID: 7583 (infinite_script) Tasks: 2 (limit: 4915) Memory: 1.5M CGroup: /system.slice/infinite_script.service ├─7583 /bin/bash /path/to/infinite_script └─7606 sleep 2Mai 28 14:22:21 <host> systemd[1]: Started infinite date service. So note how it was just restarted instantly with a new PID. And check the file ownership of the output: ls /tmp/infinite/date-rw-r--r-- 1 fiximan fiximan 300 Mai 28 14:31 infinite_date So the script is run by the correct user as set in the service file. Of course you can stop and disable the service: systemctl stop infinite_script.servicesystemctl disable infinite_script.service EDIT: A few more details: a user's personal services can (by default) be placed in $HOME/.config/systemd/user/ and managed accordingly with systemctl --user <commands> . No root needed just like with a personal crontab. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133532/"
]
} |
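Following the EDIT at the end of that answer, the user-level variant needs no root at all; a sketch of the commands, with the unit file contents unchanged and the path taken from the answer:

mkdir -p ~/.config/systemd/user
cp infinite_script.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now infinite_script.service   # start it now and enable it at login
systemctl --user status infinite_script.service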
521,506 | I'm in a directory where running tree command produces something like this: ├── directory1│ └── image_sequence│ ├── image.0001.jpg│ ├── image.0002.jpg│ ├── image.0003.jpg│ ├── image.0004.jpg│ ├── image.0005.jpg│ └── image.0006.jpg│ └── directory2 ├── somefile.ext └── someanotherfile.ext2 The image sequence inside image_sequence produces a large listing that I want to trim. My desired output is something like below: ├── directory1│ └── image_sequence│ └── image.####.jpg│ └── directory2 ├── somefile.ext └── someanotherfile.ext2 Can the output of tree command somehow be modified? | Maybe a short example for a systemd service will do. This is our infinite script, location /path/to/infinite_script , executable bit set: #!/bin/bashwhile ((1)) ; do date >> /tmp/infinite_date sleep 2done No we need to define a service file: [Unit]#just what it doesDescription= infinite date service[Service]#not run by root, but by meUser=fiximan#we assume the full service as active one the script was startedType=simple#where to find the executableExecStart=/path/to/infinite_script#what you want: make sure it always is runningRestart=always[Install]#which service wants this to run - default.target is just it is loaded by defaultWantedBy=default.target and place it in /etc/systemd/system/infinite_script.service Now load and start the service (as root): systemctl enable infinite_script.servicesystemctl start infinite_script.service The service is running now and we can check its status systemctl status infinite_script.service● infinite_script.service - infinite date service Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2019-05-28 14:18:52 CEST; 1min 33s ago Main PID: 7349 (infinite_script) Tasks: 2 (limit: 4915) Memory: 1.5M CGroup: /system.slice/infinite_script.service ├─7349 /bin/bash /path/to/infinite_script └─7457 sleep 2Mai 28 14:18:52 <host> systemd[1]: Started infinite date service. Now if you kill the script ( kill 7349 - main PID) and check the status again: ● infinite_script.service - infinite date service Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2019-05-28 14:22:21 CEST; 12s ago Main PID: 7583 (infinite_script) Tasks: 2 (limit: 4915) Memory: 1.5M CGroup: /system.slice/infinite_script.service ├─7583 /bin/bash /path/to/infinite_script └─7606 sleep 2Mai 28 14:22:21 <host> systemd[1]: Started infinite date service. So note how it was just restarted instantly with a new PID. And check the file ownership of the output: ls /tmp/infinite/date-rw-r--r-- 1 fiximan fiximan 300 Mai 28 14:31 infinite_date So the script is run by the correct user as set in the service file. Of course you can stop and disable the service: systemctl stop infinite_script.servicesystemctl disable infinite_script.service EDIT: A few more details: a user's personal services can (by default) be placed in $HOME/.config/systemd/user/ and managed accordingly with systemctl --user <commands> . No root needed just like with a personal crontab. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22240/"
]
} |
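For the tree-trimming question (521,506), one simple option to look at, if your version of tree supports it, is --filelimit, which stops tree from descending into directories with more than a given number of entries; it will not collapse the sequence into an image.####.jpg pattern, but it does keep the listing short. The threshold below is just an example:

tree --filelimit 5 .   # do not descend into directories containing more than 5 entries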
521,508 | UPDATE: The REAL answer to this MULTILINE question was given here by Stephane https://unix.stackexchange.com/a/521560/354415 Alternatively go line by line with perl by Terdon below: https://unix.stackexchange.com/a/521512/354415 Alternatively go line by line with IFS by myself here: https://unix.stackexchange.com/a/521550/354415 Here is my new toy: Problem is that I want it to match only on lines which do not have the #\s* in front of the parameter. Please do not provide alternative code e.g. sed etc. use perl perl -we 'my $file= "parameter=9# parameter=10parameter=10"; $file=~ s/.*((?<!^# ))parameter\s*=.*/parameter=replaced/g; print(":$file:\n")' Expected output parameter=replaced# parameter=10parameter=replaced PS if you are interested to see how I progressed with this, look here: Perl Negative Lookbehind with variable length bypass maybe? | Maybe a short example for a systemd service will do. This is our infinite script, location /path/to/infinite_script , executable bit set: #!/bin/bashwhile ((1)) ; do date >> /tmp/infinite_date sleep 2done No we need to define a service file: [Unit]#just what it doesDescription= infinite date service[Service]#not run by root, but by meUser=fiximan#we assume the full service as active one the script was startedType=simple#where to find the executableExecStart=/path/to/infinite_script#what you want: make sure it always is runningRestart=always[Install]#which service wants this to run - default.target is just it is loaded by defaultWantedBy=default.target and place it in /etc/systemd/system/infinite_script.service Now load and start the service (as root): systemctl enable infinite_script.servicesystemctl start infinite_script.service The service is running now and we can check its status systemctl status infinite_script.service● infinite_script.service - infinite date service Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2019-05-28 14:18:52 CEST; 1min 33s ago Main PID: 7349 (infinite_script) Tasks: 2 (limit: 4915) Memory: 1.5M CGroup: /system.slice/infinite_script.service ├─7349 /bin/bash /path/to/infinite_script └─7457 sleep 2Mai 28 14:18:52 <host> systemd[1]: Started infinite date service. Now if you kill the script ( kill 7349 - main PID) and check the status again: ● infinite_script.service - infinite date service Loaded: loaded (/etc/systemd/system/infinite_script.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2019-05-28 14:22:21 CEST; 12s ago Main PID: 7583 (infinite_script) Tasks: 2 (limit: 4915) Memory: 1.5M CGroup: /system.slice/infinite_script.service ├─7583 /bin/bash /path/to/infinite_script └─7606 sleep 2Mai 28 14:22:21 <host> systemd[1]: Started infinite date service. So note how it was just restarted instantly with a new PID. And check the file ownership of the output: ls /tmp/infinite/date-rw-r--r-- 1 fiximan fiximan 300 Mai 28 14:31 infinite_date So the script is run by the correct user as set in the service file. Of course you can stop and disable the service: systemctl stop infinite_script.servicesystemctl disable infinite_script.service EDIT: A few more details: a user's personal services can (by default) be placed in $HOME/.config/systemd/user/ and managed accordingly with systemctl --user <commands> . No root needed just like with a personal crontab. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354415/"
]
} |
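For the substitution described in question 521,508, a sketch that avoids the look-behind altogether by anchoring on line starts, assuming the real input has the three separate lines shown in the expected output:

# line-by-line: only lines that begin with "parameter" (no leading "# ") are rewritten
perl -pe 's/^parameter\s*=.*/parameter=replaced/' file
# whole-file (slurp) variant with /m, if you need multi-line context elsewhere
perl -0777 -pe 's/^parameter\s*=.*$/parameter=replaced/mg' file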
521,596 | trying to understand the command: bash -i &> /dev/tcp/10.3.0.13/222 0>&1 it means that the STDIN of "bash -i" will get the STDOUT contents? | &> file itself is the same as > file 2>&1 , that is open file in write-only mode on file descriptor 1, and duplicate that file descriptor 1 to the file descriptor 2, so that both fd 1 and 2 (stdout and stderr) point to that open file description 0>&1 (same as 0<&1 or <&1 ) adds 0 (stdin) to the list. It duplicates fd 1 to 0 as well (fd 0 is made to point to the same resource as pointed to by fd 1). Now, when doing > /dev/tcp/host/port in bash (like in ksh where the feature comes from), instead of doing a open(file, O_WRONLY) , bash creates a TCP socket and connects it to host:port . That's not a write-only redirection, that's a read+write network socket. So you end up with fds 0, 1 and 2 of bash -i being a TCP socket. When bash -i reads on its stdin, it reads from the socket so from whatever sits at the other end of host:post and when it (or any command run from there) writes to fd 1 or 2, it is sent over that socket. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521596",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355200/"
]
} |
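A harmless way to see bash's /dev/tcp redirection at work, without handing an interactive shell to the remote side; example.com port 80 is just a placeholder host:

# open fd 3 read/write on a TCP connection, send an HTTP request, read the reply
exec 3<>/dev/tcp/example.com/80
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3
cat <&3
exec 3>&-   # close the descriptor when done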
521,648 | Related to the SuperUser question at https://superuser.com/questions/200387/linux-overcommit-memory my question is what was the reason why they made to allow overcommit the default? Since 2.5.30 the values are: 0 (default): as before: guess about how much overcommitment is reasonable, | Big part of the need to overcommit memory in Linux (and Unix systems overall) come from the need to implement the fork() system call, which duplicates the calling process' address space. Most often, this system call is followed by exec() , the combination of which results in spawning a separate program as a child of the current process, in which case most of the duplicated address space will end up not being used. In order to make that efficient, Linux uses copy-on-write to avoid duplicating the memory of the application calling fork() , in which case it can avoid having to copy all the pages, just to discard them shortly after exec() is called. But at the time fork() is called, there's no way to tell whether an exec() is coming. It's quite possible that this is being used to spawn worker children and that reusing the address space of the parent is what's desired. (This technique was quite popular in daemons using pre-forked workers to handle connections.) In which case, most or at least some of the memory requirements will exist for a forked child (perhaps not 100% of the parent's memory, but one could assume most of it.) But always reserving memory for that case is troublesome for the fork() + exec() case, especially if the parent is a long-running process that reserves multiple gigabytes of memory and forks many children. If it wasn't for overcommit, you'd have to reserve an amount totalling the many gigabytes used by the parent, and that for each forked child. But none (or almost none) of that would be really used, since exec() would flush that reservation right away. The end result is that such a workload would either require a huge amount of swap space (to cope with the reservations, most of it would be unused, but would need to be there for worst case) or something like overcommit. While fork() is a great illustration of this example, other APIs in Linux/Unix also lead to the need to overcommit. For instance, when malloc() is called (or more precisely the syscalls implementing it), no memory is actually allocated until it is "touched" by the process, so it's perfectly valid to allocate a very large block of gigabytes and use that sparsely so that only a few megabytes are actually used. The fact that these APIs work that way means programs exploit those properties, meaning they would most likely break in absence of overcommit (unless you really have a lot of memory or swap to waste by backing these reservations.) An interesting discussion on the issue with fork() can be found on this LWN posting about an article from Microsoft Research . The article itself is interesting, of course. But you can see how the comments go right away into overcommit and the problems with it. The article is named A fork() in the road . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23965/"
]
} |
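The overcommit policy discussed above can be inspected directly on a Linux system (a sketch; the procfs paths are the standard ones):

# 0 = heuristic overcommit (default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# what the kernel has currently promised vs. the limit strict mode would enforce
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
# as root, switch to strict accounting to watch fork()-heavy workloads hit ENOMEM
# sysctl vm.overcommit_memory=2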
521,661 | How can I print variables to an output file in two or more columns (using echo for example). I have the following: grep -oP 'value\s*=\s*\K.*' file >x_vs_y # x valuesfor X in $(seq 0 50 400)do echo "$X" >>x_vs_y # y valuesdone Output: (x_vs_y) 1.087594323631.084772167021.082119884311.079479770451.076851854571.074236319411.071632825021.069041567981.06646245052050100150200250300350400 With this script I get an output file in a single column (18x1), but I want to get a 9x2 array (X vs Y) like: Output: (x_vs_y) 1.08759432363 01.08477216702 501.08211988431 1001.07947977045 1501.07685185457 2001.07423631941 2501.07163282502 3001.06904156798 3501.06646245052 400 | Big part of the need to overcommit memory in Linux (and Unix systems overall) come from the need to implement the fork() system call, which duplicates the calling process' address space. Most often, this system call is followed by exec() , the combination of which results in spawning a separate program as a child of the current process, in which case most of the duplicated address space will end up not being used. In order to make that efficient, Linux uses copy-on-write to avoid duplicating the memory of the application calling fork() , in which case it can avoid having to copy all the pages, just to discard them shortly after exec() is called. But at the time fork() is called, there's no way to tell whether an exec() is coming. It's quite possible that this is being used to spawn worker children and that reusing the address space of the parent is what's desired. (This technique was quite popular in daemons using pre-forked workers to handle connections.) In which case, most or at least some of the memory requirements will exist for a forked child (perhaps not 100% of the parent's memory, but one could assume most of it.) But always reserving memory for that case is troublesome for the fork() + exec() case, especially if the parent is a long-running process that reserves multiple gigabytes of memory and forks many children. If it wasn't for overcommit, you'd have to reserve an amount totalling the many gigabytes used by the parent, and that for each forked child. But none (or almost none) of that would be really used, since exec() would flush that reservation right away. The end result is that such a workload would either require a huge amount of swap space (to cope with the reservations, most of it would be unused, but would need to be there for worst case) or something like overcommit. While fork() is a great illustration of this example, other APIs in Linux/Unix also lead to the need to overcommit. For instance, when malloc() is called (or more precisely the syscalls implementing it), no memory is actually allocated until it is "touched" by the process, so it's perfectly valid to allocate a very large block of gigabytes and use that sparsely so that only a few megabytes are actually used. The fact that these APIs work that way means programs exploit those properties, meaning they would most likely break in absence of overcommit (unless you really have a lot of memory or swap to waste by backing these reservations.) An interesting discussion on the issue with fork() can be found on this LWN posting about an article from Microsoft Research . The article itself is interesting, of course. But you can see how the comments go right away into overcommit and the problems with it. The article is named A fork() in the road . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355229/"
]
} |
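For the x_vs_y column question above, one straightforward sketch generates the two columns separately and joins them line by line with paste (assumes both streams produce the same number of lines; the input file name is the one from the question):

# bash process substitution keeps it to one line
paste -d ' ' <(grep -oP 'value\s*=\s*\K.*' file) <(seq 0 50 400) > x_vs_y
# or, with plain temporary files
grep -oP 'value\s*=\s*\K.*' file > x_vals
seq 0 50 400 > y_vals
paste -d ' ' x_vals y_vals > x_vs_y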
521,770 | Starting this morning I'm getting errors checking for updates to packages with yum on Centos 7.6. When I run: $ sudo yum clean all && sudo yum check-updateLoaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-managerThis system is not registered with an entitlement server. You can use subscription-manager to register.Cleaning repos: base epel extras google-cloud-compute google-cloud-sdk updatesCleaning up list of fastest mirrorsOther repos take up 1.5 M of disk space (use --verbose for details)Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-managerThis system is not registered with an entitlement server. You can use subscription-manager to register.Determining fastest mirrorsepel/x86_64/metalink | 15 kB 00:00:00 * base: mirror.cogentco.com * epel: mirror.steadfastnet.com * extras: mirror.cogentco.com * updates: mirror.cogentco.combase | 3.6 kB 00:00:00epel | 4.9 kB 00:00:00extras | 3.4 kB 00:00:00google-cloud-compute/signature | 454 B 00:00:00google-cloud-compute/signature | 1.8 kB 00:00:00 !!!google-cloud-sdk/signature | 454 B 00:00:00google-cloud-sdk/signature | 1.4 kB 00:00:00 !!!updates | 3.4 kB 00:00:00(1/9): base/7/x86_64/group_gz | 166 kB 00:00:00(2/9): extras/7/x86_64/primary_db | 200 kB 00:00:00(3/9): epel/x86_64/group_gz | 88 kB 00:00:00(4/9): base/7/x86_64/primary_db | 6.0 MB 00:00:00(5/9): epel/x86_64/primary_db | 6.7 MB 00:00:00(6/9): updates/7/x86_64/primary_db | 5.0 MB 00:00:00(7/9): google-cloud-compute/updateinfo | 1.1 kB 00:00:00(8/9): google-cloud-compute/primary | 3.6 kB 00:00:00(9/9): google-cloud-sdk/primary | 100 kB 00:00:00google-cloud-compute 10/10google-cloud-sdk 705/705Updateinfo file is not valid XML: <open file '/var/cache/yum/x86_64/7/epel/92f2e15cad66d79ea1ad327e2af7af89d98e4d153d7a3e27ff41946f476af5b4-updateinfo.xml.zck', mode 'rt' at 0x7f4a26819ed0> So it looks like it doesn't like the EPEL updateinfo but... what can I do about that? How can I fix this? I found this , but I don't understand how it might apply to me? Edit : Apparently updates work, only check-update fails. Which is a nuisance because cron runs check-update hourly and my inbox explodes. But I can still run updates. Edit 2 : It appears maybe something is going wrong with EPEL at the moment and I have to adjust my cron jobs for now. | This is due to a bug in the bodhi-4.0.0 release which is apparently in the framework of the epel repo infrastructure. The bug caused incompatible update files to be generated and pushed to the production repos. A new update has been released and the repos should be repaired soon. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82000/"
]
} |
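Until the repaired EPEL metadata is published, the hourly cron noise can be avoided by skipping that one repository (a sketch; repository names are the defaults from the question):

# check everything except the repo with the broken updateinfo
yum --disablerepo=epel check-update
# once EPEL is fixed, drop the cached metadata for it and resume normal checks
yum clean metadata --disablerepo='*' --enablerepo=epel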
521,772 | I have a csv file that is 6 gigabytes, but I don't need that much data, I need like 100 rows or so. How can I truncate it? | Depending on what you want you can: Take the 1st 100 rows as suggested by @K7AAY . head -n100 filename.csv > file100.csv Take the last 100 rows tail -n100 filename.csv > file100.csv Take a random selection of 100 rows. This requires you have the GNU shuf program installed. It should be installable from your distribution's repositories if you're on Linux. shuf -n100 filename.csv > file100.csv Alternatively, if your sort supports the -R (random sort) option, you can do: sort -R filename.csv | head -n100 > file100.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334631/"
]
} |
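If the CSV has a header row that should survive the truncation, a small variation on the commands above keeps line 1 and samples from the rest (a sketch):

{ head -n 1 filename.csv; tail -n +2 filename.csv | shuf -n 100; } > file100.csv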
521,960 | I'm studying the ELF specification ( http://www.skyfree.org/linux/references/ELF_Format.pdf ), and one point that is not clear to me about the program loading process is how the stack is initialized, and what the initial page size is. Here's the test (on Ubuntu x86-64): $ cat test.s.text .global _start_start: mov $0x3c,%eax mov $0,%edi syscall$ as test.s -o test.o && ld test.o$ gdb a.out -qReading symbols from a.out...(no debugging symbols found)...done.(gdb) b _startBreakpoint 1 at 0x400078(gdb) runStarting program: ~/a.out Breakpoint 1, 0x0000000000400078 in _start ()(gdb) print $sp$1 = (void *) 0x7fffffffdf00(gdb) info proc mapprocess 20062Mapped address spaces: Start Addr End Addr Size Offset objfile 0x400000 0x401000 0x1000 0x0 ~/a.out 0x7ffff7ffa000 0x7ffff7ffd000 0x3000 0x0 [vvar] 0x7ffff7ffd000 0x7ffff7fff000 0x2000 0x0 [vdso] 0x7ffffffde000 0x7ffffffff000 0x21000 0x0 [stack] 0xffffffffff600000 0xffffffffff601000 0x1000 0x0 [vsyscall] The ELF specification has very little to say about how or why this stack page exists in the first place, but I can find references that say that the stack should be initialized with SP pointing to argc, with argv, envp and the auxiliary vector just above that, and I have confirmed this. But how much space is available below SP? On my system there are 0x1FF00 bytes mapped below SP, but presumably this is counting down from the top of the stack at 0x7ffffffff000 , and there are 0x21000 bytes in the full mapping. What influences this number? I am aware that the page just below the stack is a "guard page" that automatically becomes writable and "grows down the stack" if I write to it (presumably so that naive stack handling "just works"), but if I allocate a huge stack frame then I could overshoot the guard page and segfault, so I want to determine how much space is already properly allocated to me right at process start. EDIT : Some more data makes me even more unsure what's going on. The test is the following: .text .global _start_start: subq $0x7fe000,%rsp movq $1,(%rsp) mov $0x3c,%eax mov $0,%edi syscall I played with different values of the constant 0x7fe000 here to see what happens, and for this value it is nondeterministic whether I get a segfault or not. According to GDB, the subq instruction on its own will expand the size of the mmap, which is mysterious to me (how does linux know what's in my register?), but this program will usually crash GDB on exit for some reason. It can't be ASLR causing the nondeterminism because I'm not using a GOT or any PLT section; the executable is always loaded at the same locations in virtual memory every time. So is this some randomness of the PID or physical memory bleeding through? All in all I'm very confused as to how much stack is actually legally available for random access, and how much is requested on changing RSP or on writing to areas "just out of range" of legal memory. | I don't believe this question is really to do with ELF. As far as I know, ELF defines a way to " flat pack " a program image into files and then re-assemble it ready for first execution. The definition of what the stack is and how it's implemented sits somewhere between CPU specific and OS specific if the OS behaviour hasn't been elevated to POSIX. Though no-doubt the ELF specification makes some demands about what it needs on the stack. 
Minimum stack Allocation From your question: I am aware that the page just below the stack is a "guard page" that automatically becomes writable and "grows down the stack" if I write to it (presumably so that naive stack handling "just works"), but if I allocate a huge stack frame then I could overshoot the guard page and segfault, so I want to determine how much space is already properly allocated to me right at process start. I'm struggling to find an authoritative reference for this. But I have found a large enough number of non-authoritative references to suggest this is incorrect. From what I've read, the guard page is used to catch access outside the maximum stack allocation, and not for "normal" stack growth. The actual memory allocation (mapping pages to memory addresses) is done on demand. Ie: when un-mapped addresses in memory are accessed which are between stack-base and stack-base - max-stack-size + 1, an exception might be triggered by the CPU, but the Kernel will handle the exception by mapping a page of memory, not cascading a segmentation fault. So accessing the stack inside the maximum allocation shouldn't cause a segmentation fault. As you've discovered Maximum stack Allocation Investigating documentation ought to follow lines of Linux documentation on thread creation and image loading ( fork(2) , clone(2) , execve(2) ). The documentation of execve mentions something interesting: Limits on size of arguments and environment ...snip... On kernel 2.6.23 and later, most architectures support a size limit derived from the soft RLIMIT_STACK resource limit (see getrlimit(2) ) ...snip... This confirms that the limit requires the architecture to support it and also references where it's limited ( getrlimit(2) ). RLIMIT_STACK This is the maximum size of the process stack, in bytes. Upon reaching this limit, a SIGSEGV signal is generated. To handle this signal, a process must employ an alternate signal stack (sigaltstack(2)). Since Linux 2.6.23, this limit also determines the amount of space used for the process's command-line arguments and envi‐ronment variables; for details, see execve(2). Growing the stack by changing the RSP register I don't know x86 assembler. But I'll draw your attention to the "Stack Fault Exception" which can be triggered by x86 CPUs when the SS register is changed. Please do correct me if I'm wrong , but I believe on x86-64 SS:SP has just become "RSP". So if I understand correctly a Stack Fault Exception can be triggered by decremented RSP ( subq $0x7fe000,%rsp ). See page 222 here: https://xem.github.io/minix86/manual/intel-x86-and-64-manual-vol3/o_fe12b1e2a880e0ce.html | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355517/"
]
} |
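The RLIMIT_STACK limit referenced above is easy to inspect and adjust from a shell before launching the program (a sketch):

# soft stack limit, in kilobytes -- this is what caps automatic stack growth
ulimit -s
# raise it for this shell and its children, e.g. to 64 MiB
ulimit -s 65536
# the per-process view of the same limit
grep 'Max stack size' /proc/self/limits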
521,996 | I am trying to export variables from one script to the main script and passing one of the imported variables as argument to the main script. Here is the script fruitcolour.sh having variables only : apple="Red"mango="Yellow"orange="Orange"pear="Green" Here is the main script GetFruitColour.sh : #!/bin/bashsource fruitcolour.shecho "The colour of " $@ " is " $@ "." For passing apple as argument, I want to get the value of variable apple i.e. Red . So, When I run ./GetFruitColour.sh apple It must give output :: The colour of apple is Red. | One way to accomplish this is through indirection -- referring to another variable from the value of the first variable. To demonstrate: apple="Red"var="apple"echo "${!var}" Results in: Red Because bash first takes !var to mean the value of the var variable, which is then interpreted via ${apple} and turned into Red . As a result, your GetFruitColour.sh script could look like: #!/bin/bashsource ./fruitcolour.shfor arg in "$@"do printf 'The colour of %s is %s.\n' "$arg" "${!arg}"done I've made the path to the sourced script relative instead of bare, to make it clearer where the file is (if the given filename does not contain a slash, shells will search the $PATH variable for it, which may surprise you). I've also changed echo to printf . The functional change is to use the looping variable $arg and the indirect expansion of it to produce the desired values: $ ./GetFruitColour.sh apple mangoThe colour of apple is Red.The colour of mango is Yellow. Do note that there's no error-checking here: $ ./GetFruitColour.sh fooThe colour of foo is . You may find it easier to use an associative array: declare -A fruits='([orange]="Orange" [apple]="Red" [mango]="Yellow" [pear]="Green" )'for arg in "$@"do if [ "${fruits["$arg"]-unset}" = "unset" ] then echo "I do not know the color of $arg" else printf 'The colour of %s is %s.\n' "$arg" "${fruits["$arg"]}" fidone | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/521996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87903/"
]
} |
522,012 | How do I use a command to convert an absolute path to a path relative to the current working directory? | Use realpath --relative-to=. /absolute/path More about it here . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/522012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352131/"
]
} |
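A quick usage sketch of the command above (the paths are illustrative):

cd /home/user/project
realpath --relative-to=. /home/user/project/src/main.c   # -> src/main.c
realpath --relative-to=. /etc/hosts                      # -> ../../../etc/hosts
# any base directory can be used instead of "."
realpath --relative-to=/home/user /home/user/project     # -> project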
522,052 | I would like to split a string into two halves and print them sequentially. For example: abcdef into abcdef Is there a simple way to do it, or it needs some string processing? | Using parameter expansion and shell arithmetic : The first half of the variable will be: ${var:0:${#var}/2} The second half of the variable will be: ${var:${#var}/2} so you could use: printf '%s\n' "${var:0:${#var}/2}" "${var:${#var}/2}" You could also use the following awk command: awk 'BEGIN{FS=""}{for(i=1;i<=NF/2;i++)printf $i}{printf "\n"}{for(i=NF/2+1;i<=NF;i++){printf $i}{printf "\n"}}' $ echo abcdef | awk 'BEGIN{FS=""}{for(i=1;i<=NF/2;i++)printf $i}{printf "\n"}{for(i=NF/2+1;i<=NF;i++){printf $i}{printf "\n"}}'abcdef | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/522052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211351/"
]
} |
522,145 | I've been reading about https://stackoverflow.com/questions/39791042/convert-vertical-text-into-horizontal-in-shell and wondering if tr alone can be used to convert vertical text to horizontal. user@linux:~$ seq 3123user@linux:~$ I've tried the following solution, it works but not perfect. user@linux:~$ seq 3 | tr -d '\n'123user@linux:~$ user@linux:~$ Would it be possible to used tr alone to produce output like this? Desired Output user@linux:~$ seq 3 | tr command here123user@linux:~$ | Choose whatever works for you. $ seq 3 | paste -s -d ''123$ seq 3 | tr -d '\n';echo123$ seq 3 | awk 1 ORS='';echo123 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522145",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
522,259 | I have a command that creates very verbose output, on the order of hundreds of lines per second. However, the command uses \r to overwrite the previous line of output, in a manner similar to a progress bar. Occasionally it writes a newline to the terminal, which "bakes" the current output line. When redirecting this output to a file, I get hundreds of megs of output - each line is written to the file, instead of being 'overwritten' when the carriage return occurs. I understand this is the expected behavior, and that one way to solve it would be to make the program smarter, and realize it's being redirected to the file and not print this interactive status. However, I can't modify this program. Is there some way I can pipe/filter this output so that what ends up in the final output file is the same as what I would see if I ran it interactively on the terminal? I've tried: spammy_cr_command | uniq ... which outputs the same as without uniq and also: spammy_cr_command | sed '/\r/d' ... which deletes the "baked" lines that contain the newline character as well. | cmd | sed -e 's/.*\r//' > file This will replace all text on each line that is followed by a carriage return with nothing, leaving only the part of the line after the final carriage return behind. This is not necessarily the same as what would be left on the terminal, though, but it's a close approximation most of the time. In particular, the case where a line is longer than its successor isn't handled. This program would give incorrect results: printf 'abcdefg\rxyz\n'printf '123456789\r\nxyz\n' because what would be left behind visibly is xyzdefg123456789xyz but the sed would skip all the unerased characters as well and give xyzxyz You can determine whether your program behaves like that or not. It's not uncommon for progress bars and the like to rest the cursor on the left-hand edge, which may not give the result you wanted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522259",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49862/"
]
} |
522,333 | I want to make a specific combination of keyboard keys in order to terminate a process e.x I want to terminate the process by pressin CTRL + C ^ 3 (pressing three times C: CTRL +CCC). So basically I want to replace CTRL + C with CTRL + CCC | The default behavior of Ctrl + C is a combination of two things. The terminal driver¹ does not transmit this key press, but instead sends a SIGINT signal to the foreground process². By default, a process dies when it receives a SIGINT, but the process can set a signal handler, and then it'll run the signal handler when it receives SIGINT. There's no way to configure the terminal driver to only convert the third Ctrl + C in a row to kill the foreground process. To do that, you need to count to three in your program. There are two ways you can do that, which will behave differently if the user presses something else between the Ctrl + C 's. One way is to disable Ctrl + C 's behavior of sending a signal and telling the terminal driver to instead pass it through. You can do that by calling stty intr \^- from a shell script or tcsetattr(fd, &termios) with termios.c_cc[VINTR] set to _POSIX_VDISABLE from C. Then, in your program's input processing loop, exit when you've seen three Ctrl + C 's in a row. The other way is to set a signal handler for SIGINT that counts how many times it's been invoked and terminates the program the third time. You may want to reset the counter if there's normal input in between. ¹ Not the terminal emulator, the generic part of the operating system that handles all terminals. ² I'm only explaining the simple case. This is not a treatise on how the terminal driver works. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355827/"
]
} |
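A minimal sketch of the second approach (counting SIGINTs), adapted to a shell script that only quits on the third Ctrl+C; the reset-on-other-input refinement mentioned above is omitted:

#!/bin/sh
count=0
on_int() {
    count=$((count + 1))
    [ "$count" -ge 3 ] && { echo 'third Ctrl+C, exiting'; exit 130; }
    echo "Ctrl+C $count/3"
}
trap on_int INT
while :; do
    sleep 1
done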
522,349 | Is there a difference in bash, or a preferred usage of negation of a statement? if ! [[ -z "${var}" ]]; then do_somethingfi Versus if [[ ! -z "${var}" ]]; then do_somethingfi | ! [[ expression ]] This will be true if the entire test returns false [[ ! expression ]] This will be true if the individual expression returns false. Since the test can be a compound expression this can be very different. For example: $ [[ ! 0 -eq 1 || 1 -eq 1 ]] && echo yes || echo noyes$ ! [[ 0 -eq 1 || 1 -eq 1 ]] && echo yes || echo nono Personally I prefer to use the negation inside the test construct where applicable but it's perfectly fine outside depending on your intent. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/338177/"
]
} |
522,366 | I aim to understand the general concept of "variable attributes" hoping it will help me understand what is declare in Bash . What is a variable attribute? Why would someone want to give an attribute to a variable? Why isn't just creating an variables and expanding them in execution be "enough" when working with variables? | Normally, a variable is a place to store a value. You assign a value to the variable ( var="some value" ), and after that you can recall the value with a variable expansion (writing "$var" is equivalent to writing "some value" ). It's possible to make variables that do something special when you assign a value to them, or in other circumstances where the shell accesses variables. An attribute on a variable is an annotation that the shell stores next to the variable's name and value, which tells the shell to apply this special behavior. One example declare -i x tells the shell that x must contain integer values only. Normally, when you assign a value to a variable, the shell takes the string that results from expanding the right-hand side of the equal sign and stores it as the variable's value. But if the variable has the integer attribute, the shell parses that string as an arithmetic expression and stores the result of evaluating that expression. For example: $ x=2+2; echo $x2+2$ declare -i x; x=2+2; echo $x4$ declare -i x; x=2+hello; echo $x2$ declare -i x; x=2+bash: 2+: syntax error: operand expected (error token is "+") (The third line with x=2+hello sets x to 2 because hello is a variable name which is not defined, and unset variables are silently interpreted as 0 by default.) More examples declare -l var declares that var must contain lowercase letters only. When the shell stores the value of the variable, it converts any uppercase letter to lowercase. declare -u var does the conversion in the other direction. declare -r var makes var read-only, which is also a special behavior of assignment: it causes every subsequent assignment to var to fail. declare -x var causes var to be exported to the environment. For this attribute, the special behavior happens when bash runs an external command: external commands see an environment that contains the variables that the shell is exporting at the time the shell runs the external command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
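A compact demonstration of the attributes described above (a sketch; run in an interactive bash session):

declare -i n; n=2+3; echo "$n"                 # 5 -- value parsed arithmetically
declare -l name; name=ALICE; echo "$name"      # alice -- forced to lowercase
declare -r const=42; const=43                  # error: const: readonly variable
declare -x GREETING=hello; env | grep '^GREETING='   # exported to child processes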
522,610 | im trying to find duplicate ids from a large csv file, there is just on record per line but the condition to find a duplicate will be the first column. <id>,<value>,<date> example.csv 11111111,high,6/3/201922222222,high,6/3/201933333333,high,6/3/201911111111,low,5/3/201911111111,medium,7/3/2019 Desired output: 11111111,high,6/3/201911111111,low,5/3/201911111111,medium,7/3/2019 No order is required for the output. | Using AWK: awk -F, 'data[$1] && !output[$1] { print data[$1]; output[$1] = 1 }; output[$1]; { data[$1] = $0 }' This looks at every line, and behaves as follows: if we’ve seen the value in the first column already, note that we should output any line matching that, and output the memorised line; output the current line if its first column matches one we want to output; store the current line keyed on the first column. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356066/"
]
} |
522,619 | I have two folders - Folder A and Folder B. I compared the files from both folders using diff command. Now after finding that certain files are only available in Folder A and Certain files in Folder B, I would like copy those distinctive files from both folders into 1 folder called folder C which will now have all the unique files from both A and B How can I do this? | Using AWK: awk -F, 'data[$1] && !output[$1] { print data[$1]; output[$1] = 1 }; output[$1]; { data[$1] = $0 }' This looks at every line, and behaves as follows: if we’ve seen the value in the first column already, note that we should output any line matching that, and output the memorised line; output the current line if its first column matches one we want to output; store the current line keyed on the first column. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355386/"
]
} |
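For the two-folder question above, one possible sketch copies the names unique to each directory into C with comm (assumes flat directories and file names without newlines; A, B and C stand for the folder names from the question):

mkdir -p C
comm -23 <(ls A | sort) <(ls B | sort) | while IFS= read -r f; do cp "A/$f" C/; done
comm -13 <(ls A | sort) <(ls B | sort) | while IFS= read -r f; do cp "B/$f" C/; done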
522,637 | I have a program on a remote host, whose execution I need to automate. The command execute that program, on the same machine, looks something like this: /path/to/program -a file1.txt -b file2.txt In this case, file1.txt and file2.txt are used for entirely different things within the program, so I can't just cat them together. However, in my case, the file1.txt and file2.txt that I want to pass into the program exist only on my device, not on the host where I need to execute the program. I know that I can feed at least one file through SSH by passing it through stdin : cat file1.txt | ssh host.name /path/to/program -a /dev/stdin -b file2.txt but, since I'm not allowed to store files on the host, I need a way to get the file2.txt over there as well. I'm thinking it might be possible through abuse of environment variables and creative use of cat and sed together, but I don't know the tools well enough to understand how I would use them to accomplish this. Is it doable, and how? | If the files given as arguments to your program are text files, and you're able to control their content (you know a line which doesn't occur inside them), you can use multiple here-documents: { echo "cat /dev/fd/3 3<<'EOT' /dev/fd/4 4<<'EOT' /dev/fd/5 5<<'EOT'" cat file1 echo EOT cat file2 echo EOT cat file3 echo EOT} | ssh user@host sh Here cat is a sample command which takes filenames as arguments. It could be instead: echo "/path/to/prog -a /dev/fd/3 3<<'EOT' -b /dev/fd/4 4<<'EOT' Replace each EOT with something that doesn't occur in each of the files, respectively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522637",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298691/"
]
} |
522,647 | Our cluster runs Linux, and I can successfully ssh login to it using my windows 10 PC. However, when I'm trying to use X11 forwarding I always get the error: qt.qpa.screen: QXcbConnection: Could not connect to display localhost:0.0Could not connect to any X display I've tried everything: using Xterminal, PuTTY, Ubuntu (from the windows 10 store), MobaXterm - and nothing works.I've tried the export display command, and when I'm logging in I'm using -X (also tried -Y).I read online but couldn't find anything to work.Also, my colleague has a personal Macbook with the same user properties, and she managed to do X11 using XQuartz. Any ideas what can I try? | If the files given as arguments to your program are text files, and you're able to control their content (you know a line which doesn't occur inside them), you can use multiple here-documents: { echo "cat /dev/fd/3 3<<'EOT' /dev/fd/4 4<<'EOT' /dev/fd/5 5<<'EOT'" cat file1 echo EOT cat file2 echo EOT cat file3 echo EOT} | ssh user@host sh Here cat is a sample command which takes filenames as arguments. It could be instead: echo "/path/to/prog -a /dev/fd/3 3<<'EOT' -b /dev/fd/4 4<<'EOT' Replace each EOT with something that doesn't occur in each of the files, respectively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356086/"
]
} |
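For the X11-forwarding question above, the usual server-side checklist is worth running through before changing clients (a sketch; exact paths, package names and the user@cluster host are placeholders that vary by setup):

# on the remote host: forwarding must be enabled and xauth must be installed
grep -i '^X11Forwarding' /etc/ssh/sshd_config    # should say "yes"
command -v xauth || echo 'install the xauth package on the server'
# from the client: connect with -X (or -Y) and confirm DISPLAY gets set
ssh -X user@cluster 'echo "$DISPLAY"'            # expect something like localhost:10.0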
522,663 | I am learning terminal tips. In this tutorial, the guy says that Ctrl + U deletes everything from the cursor until the end of line. In my case, it always deletes the whole line. I am using zsh on macOS. | First map the key binding by typing bindkey \^U backward-kill-line . Then test to see if this worked. If it works, make it permanent by adding the same line to an appropriate zsh RC file. echo 'bindkey \^U backward-kill-line' >> ~/.zshrc The Z Shell Manual , section 18.6.3, defines the "widgets," such as backward-kill-line . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/522663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354758/"
]
} |
522,665 | Suppose I have a file.txt with the following lines : hello myname1 is yellow.pcapng redfestive myname33 is hddd.pcapng dfdfcrude myname44 is hello.pcapng Now my goal is to filter to the lines so it outputs to out.txt as follows : myname1 yellow.pcapngmyname33 hddd.pcapngmyname44 hello.pcapng Now I know that I can use : grep -oh "\w*myname\w*" /tmp/file.txt > /tmp/out.txtgrep -o '[^ ]\+g' /tmp/file.txt > /tmp/out.txt to get the both respective parts of the expression individually.How do I combine these commands so that I get my desired output? | Given your sample data, you could assume that words #2 and #4 are what you want to extract; you'd express that in awk with: awk '{ print $2, $4 }' < /tmp/file.txt > /tmp/out.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356104/"
]
} |
522,672 | Let's say I want to edit the /etc/ssh/sshd_config file in RHEL 7 to secure our ssh configuration. I want to replace let's take the Ciphers line for example, keeping the original in place and commenting it out. I also want to be able to key off of the smallest part of the string as possible to keep from having Red Hat break it every time they update the rpm / install iso. Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc should become... #Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbcCiphers abc-123,def-456,ghi-789 etc... I have tried to do a find / add newline in a test file where the filename is testfile, and the contents are This is my test file. I tried adding No it's not as a new line and have miserably failed. sed -i '/This is my test file./aNo it's not.' testfile How would I go about that and more importantly, what are the key concepts behind it? | Given your sample data, you could assume that words #2 and #4 are what you want to extract; you'd express that in awk with: awk '{ print $2, $4 }' < /tmp/file.txt > /tmp/out.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356095/"
]
} |
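For the sshd_config question above, a GNU sed sketch that comments out the existing Ciphers line and appends a replacement in one pass (the cipher list is the illustrative one from the question; keep the .bak copy until sshd restarts cleanly):

sed -i.bak 's/^Ciphers .*/#&\nCiphers abc-123,def-456,ghi-789/' /etc/ssh/sshd_config
# & re-inserts the matched original line, \n starts the new line after it (GNU sed)
sshd -t && systemctl restart sshd    # validate the config before restarting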
522,746 | If run: true \ false; echo $? I get an exit code of 0. Does anyone know why that is? | true \ false is equivalent to true ' false' . It causes the shell to run true with ' false' (a single argument consisting of a space followed by the word false) as its parameter. The implementation of true that you're using ignores this parameter and exits with a zero return value. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
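The word splitting described above is easy to make visible by handing the same arguments to printf (a sketch):

printf '[%s]\n' true \ false
# [true]
# [ false]      <- one argument: a space followed by "false"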
522,877 | If there is nothing in a named pipe and I do: cat my_named_pipe it will wait until data arrives. Is there a flag I can use to exit immediately if there is no data to be read? Or perhaps a command other than cat that I can use? I also tried: read val < "$my_named_pipe"; but this also waits for the next chunk of data - I don't want to wait if the fifo is empty. | To prevent cat from hanging in the absence of any writer (in which case it's the opening of the fifo, not reading from it, that hangs), you can do: cat 0<> "$my_named_pipe" <"$my_named_pipe" The first redirection opens in read+write mode which on most systems doesn't block and instantiates the pipe even if there's no writer nor reader already. Then the second open (read-only this time) would not block because there is at least one writer now (itself). The 0 is only needed in recent versions of ksh93 where the default fd for <> changed from 0 to 1. Also, in ksh93 , that would not work when cat is the shell builtin, like when ksh93 is called when /opt/ast/bin is ahead of /bin in $PATH or after a call to builtin cat as upon the <"$my_named_pipe" , (I guess) ksh93 saves the previous target of stdin on a separate file descriptor which would hold the pipe open. You can work around that by writing it instead: cat 3<> "$my_named_pipe" <"$my_named_pipe" 3<&- (which you might also argue conveys the intention more clearly) Note that that <> on the pipe would also unlock other readers to the fifo. If there were some writers, cat would still have to read all their output and wait until they have closed their end of the pipe. You could open the pipe in non-blocking mode, like with GNU dd 's: dd bs=64k if="$my_named_pipe" iflag=nonblock status=noxfer Which would only read from the pipe as long as there's some data in it, and exit with a dd: error reading 'fifo': Resource temporarily unavailable error when there's no more, and not unlock other readers, but that means you could miss some of the writers output if they are slower to write to the pipe than you ( dd ) are to read it. Another approach could be to timeout when there's been no input in a while, for instance by using socat 's -T option: socat -u -T1 - - 0<> "$my_named_pipe" <"$my_named_pipe" Which would exit if there's not been anything coming from the pipe in one second. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
522,907 | I am trying to jump between zsh and bash. By default, I have zsh as my shell, I can know this by typing: echo $SHELL and I get /bin/zsh However, I want to open Bash, so I type /bin/bash ; I assume I am in bash now, but if I echo $SHELL I still get /bin/zsh What's wrong, please? | SHELL is an environment variable that is passed from bash to zsh when you call zsh. SHELL is not one of the Parameters Set By The Shell in zsh, so its value remains intact. bash$ SHELL=turtle zshzsh$ echo $SHELLturtle For indications that you're in a zsh shell, try: echo $ZSH_NAMEecho $0 The SHELL variable is traditionally set by the login program, "as specified by the password database". (Copied from What sets the $SHELL environment variable? ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/522907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354758/"
]
} |
523,053 | In a file named result , containing this: <span class=timestamp><b>15:31:00</b></span> How to grep for the timestamp? Here are some tries and their output: > grep "[0..9]*:[0..9]*:[0..9]*" result -o> grep "[0..9]*:[0..9]*" result -o::00> grep "[0..9]*:" result -o:: | I would use grep -o '[0-2][0-9]:[0-5][0-9]:[0-5][0-9]' result to restrict the results to strings which are potentially timestamps — hours between 0 and 29 (as an approximation for 23, assuming 24h rather that 12h AM/PM), minutes and seconds between 0 and 59. Introducing extended regular expressions allows the match to be stricter: grep -oE '([01][0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]' result To allow for leap seconds, 60 should be an acceptable value: grep -oE '([01][0-9]|2[0-3]):[0-5][0-9]:([0-5][0-9]|60)' result (they are added just before midnight UTC, but the above allows for other timezones). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/218040/"
]
} |
523,079 | I just installed build-essential , which installed gcc-8 but the man pages seem to be unavailable. $ man gccNo manual entry for gccSee 'man 7 undocumented' for help when manual pages are not available. Moreover, I can see that the man pages aren't provided by gcc-8 (or gcc-7 ), # dpkg -L gcc-8 | grep -i man/usr/share/man/usr/share/man/man1/usr/share/man/man1/x86_64-linux-gnu-gcc-ar-8.1.gz/usr/share/man/man1/x86_64-linux-gnu-gcc-nm-8.1.gz/usr/share/man/man1/x86_64-linux-gnu-gcc-ranlib-8.1.gz/usr/share/man/man1/gcc-ar-8.1.gz/usr/share/man/man1/gcc-nm-8.1.gz/usr/share/man/man1/gcc-ranlib-8.1.gz I'm quite sure previously there was a man gcc . I'm using Debian 10.0 Buster (testing). | The manpages are provided in contrib packages, gcc-doc etc. (See the links at the top-right of the linked page for all the releases where the package is available.) Debian 10’s default compiler is GCC 8. The GCC 8 documentation wasn’t packaged in time for Debian 10’s release , but it is available in backports , along with the corresponding gcc-doc package . To install it, you need to enable backports with contrib and non-free , and install it from there explicitly: echo deb http://deb.debian.org/debian buster-backports main contrib non-free | sudo tee /etc/apt/sources.list.d/buster-backports.listsudo apt updatesudo apt install -t buster-backports gcc-doc Note though, even with gcc-doc you may want to install manpages-posix-dev for access to POSIX docs on ISO C standard library docs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/523079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
523,095 | Yesterday, my laptop with OpenSuse 15.1 died after a couple of years of use. I went out and bought a new HP Spectre X360 15 laptop ( 15-DF0033DX ) that has 16GB of RAM, 1 256GB SSD NVMe drive, and don't even know what video it has since I am not super interested in that. I went into the BIOS to make sure the boot order is: 1) CD/DVD Boot 2) USB Boot 3) HDD Boot I also made sure that Legacy Boot is disabled and Secured Boot is disabled. There are 3 partitions on this drive: EFI Boot 260MB, C:\ with 475 GB of space, and a 27GB Restoration Drive. The C:\ drive had BitLocker on it for encryption, but I turned that off. This machine also has some sort of Intel Optane Memory and Storage management. I have created a DVD for latest/greatest OpenSuse Leap 15.1 and I can boot to the installation. I get past half the installation where I can choose the KDE Desktop UI and then the next step is the Partitioning. However, it is at this point that the OpenSuse says there is an error with the system and that it says: cannot delete mdContainer For the life of me, I cannot even find these words anywhere on the internet, and I have spent literally the last 24 hours looking. The drive partitions it shows me are: 475 GB nvme0 and 27 GB nvme1. When I go to create new partitions, it tells me that the drive is in use ... what could be using it and locking it like that?????? I've never had this issue before. I tried to create a partition on the larger drive: /efi/boot 1GB FAT/ 20GB BRTFS/home 475GB EXT4 I get the error that it tells me the device is being used, and I don't get to see what is using the drive ... just that it is being used? Then I set my username/password, the system username/password, then I go with the default install for software since I will change it later ...and when I go to install ... the screen goes blank and then it starts the DOS-UI version of the installation ... and I go through the whole process all over again. I tried to find an answer on the OpenSuse Forums, and I created a new account there, but I am unable to verify my registration because of problems on their side. So, I have come here as a last resort. The next step might be to just delete the partitions in Windows 10 to just get rid of them and then hopefully that would be enough to do it.So, I am looking to find out what this error message means: cannot delete mdContainer How can I fix this, if I can, and then install opensuse. I am also going to try installing Fedora on this machine if I can. Other sites I went to said they were able to install 5 different versions of Linux on the HP Spectre x360 laptop. So, maybe my system has a legitimate issue, and I need to exchange it? | The manpages are provided in contrib packages, gcc-doc etc. (See the links at the top-right of the linked page for all the releases where the package is available.) Debian 10’s default compiler is GCC 8. The GCC 8 documentation wasn’t packaged in time for Debian 10’s release , but it is available in backports , along with the corresponding gcc-doc package . To install it, you need to enable backports with contrib and non-free , and install it from there explicitly: echo deb http://deb.debian.org/debian buster-backports main contrib non-free | sudo tee /etc/apt/sources.list.d/buster-backports.listsudo apt updatesudo apt install -t buster-backports gcc-doc Note though, even with gcc-doc you may want to install manpages-posix-dev for access to POSIX docs on ISO C standard library docs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/523095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181979/"
]
} |
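For the mdContainer question above, a common suspect on this hardware is Intel RST/Optane exposing the NVMe drives through an IMSM (mdadm) RAID container, which the installer then cannot delete. A heavily hedged sketch of how to inspect and clear it from a live/rescue shell; this is destructive (it wipes the RAID metadata), so only after backing up and after disabling Optane/RST or switching the controller to AHCI in the BIOS, and the device names are illustrative:

cat /proc/mdstat                      # look for an "imsm" container
mdadm --detail-platform               # shows Intel RST (IMSM) capability if present
mdadm --stop /dev/md/imsm0            # container name is illustrative
mdadm --zero-superblock /dev/nvme0n1  # removes the IMSM metadata from the member disk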
523,098 | Consider the commands eval false || echo okecho also ok Ordinarily, we'd expect this to execute the false utility and, since the exit status is non-zero, to then execute echo ok and echo also ok . In all the POSIX-like shells I use ( ksh93 , zsh , bash , dash , OpenBSD ksh , and yash ), this is what happens, but things get interesting if we enable set -e . If set -e is in effect, OpenBSD's sh and ksh shells (both derived from pdksh ) will terminate the script when executing the eval . No other shell does that. POSIX says that an error in a special built-in utility (such as eval ) should cause the non-interactive shell to terminate. I'm not entirely sure whether executing false constitutes "an error" (if it was, it would be independent of set -e being active). The way to work around this seems to be to put the eval in a sub shell, ( eval false ) || echo okecho also ok The question is whether I'm expected to have to do that in a POSIX-ly correct shell script, or whether it's a bug in OpenBSD's shell? Also, what is meant by "error" in the POSIX text linked to above? Extra bit of info: The OpenBSD shells will execute the echo ok both with and without set -e in the command eval ! true || echo ok My original code looked like set -eif eval "$string"; then echo okelse echo not okfi which would not output not ok with string=false using the OpenBSD shells (it would terminate), and I wasn't sure it was by design, by mistake or by misunderstanding, or something else. | That no other shell needs such workaround is an strong indication that it is a bug in OpenBSD ksh. In fact, ksh93 doesn't show such issue. That there is a || in the command line must avoid the shell exit caused by an return code of 1 on the left side of it. The error of an special built-in shall cause the exit of a non interactive shell acording to POSIX but that is not always true. Trying to continue out of a loop is an error, and continue is a builtin. But most shells do not exit on: continue 3 A builtin that emits a clear error but doesn't exit. So, the exit on false is generated by the set -e condition not by the builtin characteristic of the command ( eval in this case). The exact conditions on which set -e shall exit are quite more fuzzy in POSIX. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
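A compact way to reproduce the divergence described above across whatever shells are installed (a sketch; adjust the list to the shells actually present on the system):

for sh in bash dash ksh zsh sh; do
    printf '%-6s ' "$sh"
    "$sh" -c 'set -e; eval false || echo survived' 2>/dev/null || echo '(shell exited)'
done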
523,152 | This is my xrdp config: [Globals]ini_version=1fork=trueport=3389use_vsock=falsetcp_nodelay=truetcp_keepalive=truesecurity_layer=negotiatecrypt_level=highcertificate=key_file=ssl_protocols=TLSv1.2, TLSv1.3autorun=allow_channels=trueallow_multimon=truebitmap_cache=truebitmap_compression=truebulk_compression=truemax_bpp=128use_compression=yesnew_cursors=trueuse_fastpath=bothblue=009cb5grey=dededels_top_window_bg_color=009cb5ls_width=350ls_height=430ls_bg_color=dededels_logo_filename=ls_logo_x_pos=55ls_logo_y_pos=50ls_label_x_pos=30ls_label_width=65ls_input_x_pos=110ls_input_width=210ls_input_y_pos=220ls_btn_ok_x_pos=142ls_btn_ok_y_pos=370ls_btn_ok_width=85ls_btn_ok_height=30ls_btn_cancel_x_pos=237ls_btn_cancel_y_pos=370ls_btn_cancel_width=85ls_btn_cancel_height=30[Logging]LogFile=xrdp.logLogLevel=DEBUGEnableSyslog=trueSyslogLevel=DEBUG[Channels]rdpdr=truerdpsnd=truedrdynvc=truecliprdr=truerail=truexrdpvr=truetcutils=true[Xvnc]name=Xvnclib=libvnc.sousername=askpassword=askip=127.0.0.1port=-1[Xorg]name=Xorglib=libxup.sousername=askpassword=askip=127.0.0.1port=-1code=20 I am trying to connect with mstsc to this machine (this is after fresh pc restart, noone has logged in): while in this login box, no disconnect happens: after I put there correct login/password, I get black screen first and then mstsc window closes. I tried to connect from KDE remote connection application, but it also failed same way. xrdp.log doesn't seem to contain anything interesting: [20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: connecting to sesman ip 127.0.0.1 port 3350[20190606-04:14:36] [INFO ] xrdp_wm_log_msg: sesman connect ok[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: sending login info to session manager, please wait...[20190606-04:14:36] [DEBUG] return value from xrdp_mm_connect 0[20190606-04:14:36] [INFO ] xrdp_wm_log_msg: login successful for display 10[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC started connecting[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC connecting to 127.0.0.1 5910[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC tcp connected[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC security level is 2 (1 = none, 2 = standard)[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC password ok[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending share flag[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving server init[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving pixel format[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving name length[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC receiving name[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending pixel format[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending encodings[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending framebuffer update request[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC sending cursor[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: VNC connection complete, connected ok[20190606-04:14:36] [DEBUG] xrdp_wm_log_msg: connected ok[20190606-04:14:36] [DEBUG] xrdp_mm_connect_chansrv: chansrv connect successful[20190606-04:14:36] [DEBUG] Closed socket 18 (AF_INET 127.0.0.1:47744)[20190606-04:14:37] [DEBUG] Closed socket 20 (AF_UNIX)[20190606-04:14:37] [DEBUG] Closed socket 12 (AF_INET 127.0.0.1:3389)[20190606-04:14:37] [DEBUG] xrdp_mm_module_cleanup[20190606-04:14:37] [DEBUG] VNC mod_exit[20190606-04:14:37] [DEBUG] Closed socket 19 (AF_INET 127.0.0.1:40224) How can I fix that? | I solved the issue myself, hopefully someone else will find it usefull. 
I took a look at ~/.xsession-errors, it contained: (imsettings-check:16467): IMSettings-WARNING **: 04:42:56.491: Could not connect: Connection refused(imsettings-check:16467): GLib-GIO-CRITICAL **: 04:42:56.491: g_dbus_proxy_call_sync_internal: assertion 'G_IS_DBUS_PROXY (proxy)' failedGLib-GIO-Message: 04:42:56.807: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.** (process:16260): WARNING **: 04:42:56.824: Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Connection refused and then I googled the root cause: the miniconda installation had broken PATH in the .bashrc file. I removed this line and that fixed it: export PATH="/home/stiv/miniconda3/bin:$PATH" UPDATE: Later I found x2go, which works much more reliably and faster than XRDP. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/523152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28212/"
]
} |
523,153 | I'm using busybox tools and I want to take all http links in a web page. I save an example link page by using curl or wget. However, it saves the page as html. How to do it with curl or wget commands? example webpage = http://www.turanevdekorasyon.com/wp-includes/test/ The following data was saved in text format with firefox browser. Index of /wp-includes/test/Name <http://www.turanevdekorasyon.com/wp-includes/test/?ND> Last modified <http://www.turanevdekorasyon.com/wp-includes/test/?MA> Size <http://www.turanevdekorasyon.com/wp-includes/test/?SA> Description <http://www.turanevdekorasyon.com/wp-includes/test/?DA>------------------------------------------------------------------------up Parent Directory <http://www.turanevdekorasyon.com/wp-includes/> 28-May-2019 02:15 - [CMP] v1.0.zip <http://www.turanevdekorasyon.com/wp-includes/test/v1.0.zip> 28-May-2019 02:15 4k [CMP] v1.1.zip <http://www.turanevdekorasyon.com/wp-includes/test/v1.1.zip> 28-May-2019 02:15 4k [CMP] v1.2.zip <http://www.turanevdekorasyon.com/wp-includes/test/v1.2.zip> 28-May-2019 02:15 4k ------------------------------------------------------------------------Proudly Served by LiteSpeed Web Server at www.turanevdekorasyon.com Port 80 | I solved the issue myself, hopefully someone else will find it usefull. I took a look at ~/.xsession-errors, it contained: (imsettings-check:16467): IMSettings-WARNING **: 04:42:56.491: Could not connect: Connection refused(imsettings-check:16467): GLib-GIO-CRITICAL **: 04:42:56.491: g_dbus_proxy_call_sync_internal: assertion 'G_IS_DBUS_PROXY (proxy)' failedGLib-GIO-Message: 04:42:56.807: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.** (process:16260): WARNING **: 04:42:56.824: Could not make bus activated clients aware of XDG_CURRENT_DESKTOP=GNOME environment variable: Could not connect: Connection refused and then I've googled a rootcause, miniconda installation has broken PATH in .bashrc file, I have removed this line and it has fixed it: export PATH="/home/stiv/miniconda3/bin:$PATH" UPDATE: Later I've found x2go , which works way more reliable and faster then XRDP. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/523153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342007/"
]
} |
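For the BusyBox question above, a sketch that fetches the page and extracts the href targets using only the wget and sed applets (assumes at most one link per line of HTML; the URL is the one from the question):

wget -q -O - 'http://www.turanevdekorasyon.com/wp-includes/test/' |
  sed -n 's/.*href="\([^"]*\)".*/\1/p'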
523,209 | I am in a directory in which I have two text files: $ touch test1.txt$ touch test2.txt When I try to list the files (with Bash) using some pattern it works: $ ls test?.txttest1.txt test2.txt$ ls test{1,2}.txttest1.txt test2.txt However, when a pattern is produced by a command enclosed in $() , only one of patterns work: $ ls $(echo 'test?.txt')test1.txt test2.txt$ ls $(echo 'test{1,2}.txt')ls: cannot access test{1,2}.txt: No such file or directory What's going on here? Why the pattern {1,2} does not work? | It's a combination of two things. First, brace expansion is not a pattern that matches file names: it's a purely textual substitution — see What is the difference between `a[bc]d` (brackets) and `a{b,c}d` (braces)? . Second, when you use the result of a command substitution outside double quotes ( ls $(…) ), what happens is only pattern matching (and word splitting: the “split+glob” operator), not a complete re-parsing. With ls $(echo 'test?.txt') , the command echo 'test?.txt' outputs the string test?.txt (with a final newline). The command substitution results in the string test?.txt (without a final newline, because command substitution strips trailing newlines). This unquoted substitution undergoes word splitting, yielding a list consisting of the single string test?.txt since there are no whitespace characters (more precisely, no characters in $IFS ) in it. Each element of this one-element list then undergoes conditional wildcard expansion, and since there is a wildcard character ? in the string the wildcard expansion does happen. Since the pattern test?.txt matches at least one file name, the list element test?.txt is replace by the list of file names that match the patterns, yielding the two-element list containing test1.txt and test2.txt . Finally ls is called with two arguments test1 and test2 . With ls $(echo 'test{1,2}') , the command echo 'test{1,2}' outputs the string test{1,2} (with a final newline). The command substitution results in the string test{1,2} . This unquoted substitution undergoes word splitting, yielding a list consisting of the single string test{1,2} . Each element of this one-element list then undergoes conditional wildcard expansion, which does nothing (the element is left as is) since there is no wildcard character in the string. Thus ls is called with the single argument test{1,2} . For comparison, here's what happens with ls $(echo test{1,2}) . The command echo test{1,2} outputs the string test1 test2 (with a final newline). The command substitution results in the string test1 test2 (without a final newline). This unquoted substitution undergoes word splitting, yielding two strings test1 and test2 . Then, since neither of the strings contains a wildcard character, they're left alone, so ls is called with two arguments test1 and test2 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/523209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356501/"
]
} |
523,223 | I'm looking to find the lines between two matching patterns. If any start or end pattern missing, lines should not print. Correct input: a***** BEGIN *****BASH is awesomeBASH is awesome***** END *****b Output will be ***** BEGIN *****BASH is awesomeBASH is awesome***** END ***** Now suppose END pattern is missing in input a***** BEGIN *****BASH is awesomeBASH is awesomeb Lines should not print. I have tried with sed: sed -n '/BEGIN/,/END/p' input It prints all data up to the last line if END pattern is missing. How to solve it? | cat input |sed '/\*\*\*\*\* BEGIN \*\*\*\*\*/,/\*\*\*\*\* END *\*\*\*\*/ p;d' | tac |sed '/\*\*\*\*\* END \*\*\*\*\*/,/\*\*\*\*\* BEGIN *\*\*\*\*/ p;d' |tac It works by having tac reverse the lines so that sed can find both delimiters in both orders. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356509/"
]
} |
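An equivalent single-pass sketch with awk, which buffers from the BEGIN marker and only emits the block once the END marker is actually seen, so nothing is printed when END is missing:

awk '/\*\*\*\*\* BEGIN \*\*\*\*\*/ { f = 1 }
     f                            { buf = buf $0 ORS }
     /\*\*\*\*\* END \*\*\*\*\*/  { if (f) printf "%s", buf; buf = ""; f = 0 }' input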
523,327 | I have the Unicode character ᚠ, represented by its Unicode code point 16A0, in a text file (the text file is encoded(?) as utf-8). When I do grep '\u16A0' test.txt I get no result. How do I grep that character? | You can use ANSI-C quoting provided by your shell, to replace backslash-escaped characters as specified by the ANSI C standard. This should work for any command, not just grep , in shells like Bash and Zsh: grep $'\u16A0' For some more complex examples, you might refer to this related question and its answers. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/523327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356586/"
]
} |
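If your grep happens to be GNU grep built with PCRE support and the locale is UTF-8 (an assumption, not something the question states), a code point escape in the pattern should work too:
$ grep -P '\x{16A0}' test.txt
A fallback that does not depend on PCRE support is to search for the character's UTF-8 byte sequence directly, still using the same ANSI-C quoting, since U+16A0 encodes as the bytes E1 9A A0:
$ grep $'\xe1\x9a\xa0' test.txt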
523,421 | Script: #!/bin/sh## reads stdin/file and copies it to clipboard# clears it after 30s#cat "${1:-/dev/stdin}" | timeout 30 xclip -i -selection clipboard -r -verbose &>/dev/null & I can see that only stdin does not work (with bash it works on stdin/file). P.S. verbose is used to make xclip not daemonize. | [this answer is about asynchronous pipelines in scripts; for the deprecated &> bash operator and why you should always use >output 2>&1 instead, refer to obsolete and deprecated syntax ] #! /bin/shcat "${1:-/dev/stdin}" | ... & Here you have a pipeline running asynchronously (because terminated by & ), started from a script, ie is from a shell with the job control disabled. According to the standard : command1 & [command2 & ... ] The standard input for an asynchronous list, before any explicit redirections are performed, shall be considered to be assigned to a file that has the same properties as /dev/null . The problem is that dash , ksh , mksh , yash , etc intepret "asynchronous list" as any command, including a pipeline, and will redirect the stdin of the first command from /dev/null : $ echo foo | dash -c 'cat | tr fo FO & echo DONE'DONE$ echo | dash -c 'readlink /proc/self/fd/0 | cat & echo DONE'DONE/dev/null But bash will only interpret it as "simple command" and will only redirect its stdin from /dev/null when it's not part of a pipeline: $ echo foo | bash -c 'cat | tr fo FO & echo DONE'DONEFOO$ echo | bash -c 'readlink /proc/self/fd/0 | cat & echo DONE'DONEpipe:[69872]$ echo | bash -c 'readlink /proc/self/fd/0 & echo DONE'DONE/dev/null$ bash -c 'cat | tr a A & echo DONE'DONEcat: -: Input/output error zsh will only redirect it from /dev/null when the original stdin is a tty, not when it's other kind of file: $ zsh -c 'readlink /proc/self/fd/0 &' </dev/tty/dev/null$ zsh -c 'readlink /proc/self/fd/0 &' </dev/zero/dev/zero A workaround which works in all shells is to duplicate the stdin into another file descriptor, and redirect the stdin of the first command from it: #! /bin/shexec 3<"${1:-/dev/stdin}"cat <&3 | timeout 30 xclip -i -selection clipboard -verbose -r >/dev/null 2>&1 & | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316741/"
]
} |
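The workaround can be checked with a quick experiment in one of the affected shells; this just illustrates the technique from the answer, here with dash:
$ echo foo | dash -c 'exec 3<&0; cat <&3 | tr fo FO & wait'
FOO
Because the first command of the background pipeline now carries an explicit redirection ( <&3 ), the implicit /dev/null assignment for asynchronous lists no longer applies to it.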
523,487 | I am working on a CentOS 7 workstation. I had installed Rstudio and it was working fine until recently. But now, if I try to launch it at http://localhost:8787/ I get an error that says Unable to connect to service I checked if R is working properly in terminal and I got following error. /usr/lib64/R/bin/exec/R: error while loading shared libraries: /lib/libgcc_s.so.1: file too short If I try to install R again using following command, sudo yum install R -y I get following reply Package R-3.5.2-2.el7.x86_64 already installed and latest versionNothing to do What do I have to do? | You have a damaged .so . In general, you issue the following command to find the package it belongs to: yum provides \*/<so_file> In your case: $ yum provides \*/libgcc_s.so.1[...]libgcc-4.4.6-4.el6.i686 : GCC version 4.4 shared support libraryRepo : baseMatched from:Filename : /lib/libgcc_s.so.1[...] In this case, we want libgcc-4.4.6-4.el6.i686 , you will get another version. You need to reinstall that package. yum reinstall libgcc-<version>.i686 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312835/"
]
} |
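On an rpm-based system you can also ask the package database directly which package owns the damaged file and whether it still matches what was shipped; a short sketch (the exact package name and arch are whatever the first command reports, libgcc is just the usual case):
$ rpm -qf /lib/libgcc_s.so.1      # which installed package owns the file
$ rpm -V libgcc                   # list files in that package that no longer match
$ sudo yum reinstall libgcc       # put the packaged copy back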
523,543 | I am happy and really like the Ctrl - R backward search feature of the bash shell. Some of my colleagues don't like it, since it is sometimes confusing. I understand them. If you enter the wrong characters, the current position in the history is somewhere in the past, and you won't find the recent matches. Is there a more user friendly alternative for seaching backward in the shell history? I want to stick with bash. Suggesting an alternative shell is not an answer to this question. The issue with the "lost position" is explained here: Reset bash history search position . These solutions work. That's right. But the solution there are not easy and user friendly according to my point of view. These solutions are not simple and straight forward. These are solutions of the past. In the past the human needed to learn the way the computer wanted the input. But today the tools should accept the input in a way which is easy for the user. Maybe someone knows a IDE of jetbrains like PyCharm. If you search for "foobar" you even get the lines which contain "foo_bar". That's great, that's unix :-) | I'm using the fuzzy finder program FZF . I've written my own key bindings and shell scripts to utilize FZF as my tool of choice to reverse-search an interactive Bash shell's history. Feel free to copy and paste the code from my Config GitHub repository. ~/.bashrc configuration file # Test if fuzzy finder program _Fzf_ is installed.#if type -p fzf &> /dev/null; then # Test if _Fzf_ specific _Readline_ file is readable. # if [[ -f ~/.inputrc.fzf && -r ~/.inputrc.fzf ]]; then # Make _Fzf_ available through _Readline_ key bindings. # bind -f ~/.inputrc.fzf fifi ~/.inputrc.fzf configuration file ## $if mode=vi # Key bindings for _Vi_ _Insert_ mode # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ set keymap vi-insert "\C-x\C-a": vi-movement-mode "\C-x\C-e": shell-expand-line "\C-x\C-r": redraw-current-line "\C-x^": history-expand-line "\C-r": "\C-x\C-addi$(HISTTIMEFORMAT= history | fzf-history)\C-x\C-e\C-x\C-r\C-x^\C-x\C-a$a" # Key bindings for _Vi_ _Command_ mode # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ set keymap vi-command "\C-r": "i\C-r" "\ec": "i\ec"$endif fzf-history executable Bash script #!/usr/bin/env bash## Retrieve command from history with fuzzy finder# ===============================================# Tim Friske <[email protected]>## See also:# * man:bash[1]# * man:fzf[1]# * man:cat[1]shopt -os nounset pipefail errexit errtraceshopt -s extglob globstarfunction print_help { 1>&2 cat \<<'HELP'usage: HISTTIMEFORMAT= history | fzf-historyHELP}function fzf_history { if [[ -t 0 ]]; then print_help exit fi local fzf_options=() fzf_options+=(${FZF_DEFAULT_OPTS:-}) fzf_options+=('--tac' '-n2..,..' '--tiebreak=index') fzf_options+=(${FZF_HISTORY_FZF_OPTS:-}) fzf_options+=('--print0') local cmd='' cmds=() while read -r -d '' cmd; do cmds+=("${cmd/#+([[:digit:]])+([[:space:]])/}") done < <(fzf "${fzf_options[@]}") if [[ "${#cmds[*]}" -gt 0 ]]; then (IFS=';'; printf '%s\n' "${cmds[*]}") fi}fzf_history "$@" key-bindings.bash sourceable Bash script Taken and slightly adapted from FZF's Bash key bindings file here are the Emacs mode compatible key bindings for Bash's history reverse-search with Ctrl-R (untested): if [[ ! -o vi ]]; then # Required to refresh the prompt after fzf bind '"\er": redraw-current-line' bind '"\e^": history-expand-line' # CTRL-R - Paste the selected command from history into the command line bind '"\C-r": " \C-e\C-u\C-y\ey\C-u$(HISTTIMEFORMAT= history | fzf-history)\e\C-e\er\e^"'fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22068/"
]
} |
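If installing an extra tool is not an option, readline itself offers a gentler behaviour that many people find less confusing than Ctrl - R : prefix search bound to the arrow keys. A minimal sketch for ~/.inputrc , assuming a terminal that sends \e[A and \e[B for Up/Down:
"\e[A": history-search-backward
"\e[B": history-search-forward
After reloading the bindings (for example with bind -f ~/.inputrc or by starting a new shell), typing the first few characters of a command and pressing Up cycles only through history entries that start with exactly those characters.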
523,600 | I have two linux containers connected with a veth-pair. At veth-interface of one container I set up tc qdisc netem delay and send traffic from it to the other container. If I watch traffic on both sides using tcpdump/wireshark it can be seen that timestamps of the same packet at sender and receiver do not differ by selected delay. I wanted to understand more in detail at which point libpcap puts timestamps to egress traffic corresponding to tc qdisc. I searched for a scheme/image on Internet but did not find. I found this topic ( wireshark packet capture point ) but it advises to introduce an indirection by having one more container/interface. This is not a possible solution in my situation. Is there any way to solve the problem not introducing additional intermediate interfaces (that is, not changing topology) and only by recording at the already given veth-interface but in such a way that the delay can be seen? UPDATE: I was too quick and so got mistaken. Neither my solution present below (same as the first variant of solution of the answer of @A.B), nor the solution with IFB of @A.B (I have already checked) solve my problem. The problem is with overflow of transmit queue of interface a1-eth0 of sender in the topology: [a1-br0 ---3Gbps---a1-eth0]---100Mbps---r1---100Mbps---r2 I was too quick and checked only for delay 10ms at link between a1-eth0 and router r1 . Today I tried to make the delay higher: 100ms, 200ms and the results (per-packet delay and rate graphs which I get) start to differ for the topology above and for the normal topology: [a1-eth0]---100Mbps---r1---100Mbps---r2 So no, certainly, for accurate testing I cannot have extra links: nor introduced by Linux bridge, nor by this IFB, nor by any other third system. I test congestion control schemes. And I want to do it in a specific topology. And I cannot change the topology just for the sake of plotting -- I mean if at the same time my rate and delay results/plots get changed. UPDATE 2: So it looks like the solution has been found, as it can be seen below (NFLOG solution). UPDATE 3: Here are described some disadvantages of NFLOG solution (big Link-Layer headers and wrong TCP checksums for egress TCP packets with zero payload) and proposed a better solution with NFQUEUE which does not have any of these problems: TCP checksum wrong for zero length egress packets (captured with iptables) . However, for my tasks (testing of congestion control schemes) neither NFLOG, nor NFQUEUE are suitable. As it is explained by the same link, sending rate gets throttled when packets get captured from kernel's iptables (this is how I understand it). So when you record at sender by capturing from interface (i.e., regularly) you get 2 Gigabytes dump, while if you record at sender by capturing from iptables you get 1 Gigabyte dump. Roughly speaking. UPDATE 4: Finally, in my project I use Linux bridge solution described in my own answer bewow. | According to the Packet flow in Netfilter and General Networking schematic, tcpdump captures ( AF_PACKET ) after egress (qdisc) . So it's normal you don't see the delay in tcpdump: the delay was already present at initial capture. 
You'd have to capture it one step earlier, so involve a 3rd system: S1: system1, runs tcpdump on outgoing interface R: router (or bridge, at your convenience, this changes nothing), runs the qdisc netem S2: system2, runs tcpdump on incoming interface __________________ ________________ __________________| | | | | || (S1) -- tcpdump -+---+- (R) -- netem -+---+- tcpdump -- (S2) ||__________________| |________________| |__________________| That means 3 network stacks involved, be they real, vm, network namespace (including ip netns , LXC, ...) Optionally, It's also possible to cheat and move every special settings on the router (or bridge) by using an IFB interface with mirred traffic: allows by a trick (dedicated for this case) to insert netem sort-of-after ingress rather than on egress: _______ ______________________________________________ _______| | | | | | | (S1) -+---+- tcpdump -- ifb0 -- netem -- (R) -- tcpdump -+---+- (S2) ||_______| |______________________________________________| |_______| There's a basic IFB usage example in tc mirred manpage: Using an ifb interface, it is possible to send ingress traffic through an instance of sfq: # modprobe ifb# ip link set ifb0 up# tc qdisc add dev ifb0 root sfq# tc qdisc add dev eth0 handle ffff: ingress# tc filter add dev eth0 parent ffff: u32 \ match u32 0 0 \ action mirred egress redirect dev ifb0 Just use netem on ifb0 instead of sfq (and in non-initial network namespace, ip link add name ifbX type ifb works fine, without modprobe). This still requires 3 network stacks for proper working. using NFLOG After a suggestion from JenyaKh, it turns out it's possible to capture a packet with tcpdump , before egress (thus before qdisc) and then on egress (after qdisc): by using iptables (or nftables ) to log full packets to the netlink log infrastructure, and still reading them with tcpdump , then again using tcpdump on the egress interface. This requires only settings on S1 (and doesn't need a router/bridge anymore). So with iptables on S1, something like: iptables -A OUTPUT -o eth0 -j NFLOG --nflog-group 1 Specific filters should probably be added to match the test done, because tcpdump filter is limited on nflog interface (wireshark should handle it better). If the answer capture is needed (here done in a different group, thus requiring an additional tcpdump ): iptables -A INPUT -i eth0 -j NFLOG --nflog-group 2 Depending on needs it's also possible to move them to raw/OUTPUT and raw/PREROUTING instead. With tcpdump : # tcpdump -i nflog:1 -n -tt ... If a different group (= 2) was used for input: # tcpdump -i nflog:2 -n -tt ... Then at the same time, as usual: # tcpdump -i eth0 -n -tt ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/346609/"
]
} |
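For completeness, attaching the netem delay itself to the ifb device would look like the following sketch; the 100ms figure is just an example value in line with the delays discussed in the question:
# tc qdisc add dev ifb0 root netem delay 100ms
The ingress qdisc and the mirred redirect filter from the example above stay as they are; only the sfq line is replaced.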
523,625 | I want to write a function that checks if a given variable, say, var , starts with any of the words in a given list of strings. This list won't change. To instantiate, let's pretend that I want to check if var starts with aa , abc or 3@3 . Moreover, I want to check if var contains the character > . Let's say this function is called check_func . My intended usage looks something like if check_func "$var"; then do stufffi For example, it should "do stuff" for aardvark , abcdef , [email protected] and 12>5 . I've seen this SO question where a user provides part of the work: beginswith() { case $2 in "$1"*) true;; *) false;; esac; } My idea is that I would iterate over the list mentioned above and use this function. My difficulty lies in not understanding exactly how exiting (or whatever replaces returning) should be done to make this work. | check_prefixes () { value=$1 for prefix in aa abc 3@3; do case $value in "$prefix"*) return 0 esac done return 1}check_contains_gt () { value=$1 case $value in *">"*) return 0 esac return 1}var='aa>'if check_prefixes "$var" && check_contains_gt "$var"; then printf '"%s" contains ">" and starts with one of the prefixes\n' "$var"fi I divided the tests up into two functions. Both use case ... esac and returns success (zero) as soon as this can be determined. If nothing matches, failure (1) is returned. To make the list of prefixes more of a dynamic list, one could possibly write the first function as check_prefixes () { value=$1 shift for prefix do case $value in "$prefix"*) return 0 esac done return 1} (the value to inspect is the first argument, which we save in value and then shift off the list of arguments to the function; we then iterate over the remaining arguments) and then call it as check_prefixes "$var" aa abc 3@3 The second function could be changed in a similar manner, into check_contains () { value=$1 shift case $value in *"$1"*) return 0 esac return 1} (to check for some arbitrary substring), or check_contains_oneof () { value=$1 shift for substring do case $value in *"$substring"*) return 0 esac done return 1} (to check for any of a number of substrings) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/523625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356819/"
]
} |
523,635 | Whenever I plug in a USB keyboard, the layout of all keyboards is reset to some system default (a US layout which doesn't have modifiers and other keys the way I want them). I've observed this on many Debian and Ubuntu systems, including Ubuntu 16.04 and 18.04. This behavior has been around for a very long time . I use X11 with no desktop environment (though some Gnome demons tend to get started). I set my keyboard layout with XKB (specifically … | xkbcomp - "$DISPLAY" ) when I log in. When I insert a USB keyboard, I want it to have my layout, not the system layout. In fact, I wish the system would just keep using my current layout for both the already present keyboard(s) if any, and the newly inserted keyboard. If that's not possible, I'd settle for re-applying a layout that I chose. Likewise the repeat rate on both keyboards is set to the login-time default instead of the rate I set with xset r . How can I prevent a keyboard hotplug from resetting the keyboard layout and the repeat rate? Or failing that how can I at least make it reset to my chosen layout? There's a fairly clumsy way which is with a udev rule . It's clumsy because it assumes that there's a single X server, and most problematically, it assumes that the user has root permission. I do not have root permissions , so any method that involves setting udev rules or editing Xorg.conf is inapplicable here. | I set my keyboard layout with XKB (specifically … | xkbcomp - "$DISPLAY" ) when I log in. How can I prevent a keyboard hotplug from resetting the keyboard layout and the repeat rate? It's not that it resets it. If you already have a keyboard plugged in, and are adding a second one, the old keyboard will continue to use the same settings. The problem is that the either the client-side way of loading an xkb configuration (with xkbcomp ) or the server-side (with setxkbmap ) will only apply the layout to the existing, actual devices, not to the "Core Keyboard" abstraction. When a device is unplugged, its settings are lost. The solution is to monitor yourself when a keyboard is added, and call xkbcomp / setxkbmap and xset r rate with your preferred settings. For that, you do not need any udev rule or any root privileges; any X11 client program can monitor changes to the input devices via the X11 Input extension and act on them. A program which can be used from the shell for that and is readily installable with apt-get on Debian & similar distros is inputplug . Example: $ cat ./on-new-kbd#! /bin/shkeymap=/path/to/your/keymapecho >&2 "$@"event=$1 id=$2 type=$3case "$event $type" in'XIDeviceEnabled XISlaveKeyboard') xset r rate 200 50 xkbcomp -i "$id" "$keymap" "$DISPLAY"esac$ chmod 755 ./on-new-kbd$ inputplug -d -c ./on-new-kbd<plug keyboard>XIDeviceEnabled 13 XISlavePointer GASIA USB KB V11XISlaveAdded 13 XIFloatingSlave GASIA USB KB V11XISlaveAdded 14 XIFloatingSlave GASIA USB KB V11XIDeviceEnabled 14 XISlaveKeyboard GASIA USB KB V11 Notice the -i option of xkbcomp -- you can use different settings for each keyboard. The protocol also allows to set the repeat rate on a per-device basis, but I don't know how to do that with xset . Of course, your window manager / desktop environment may itself listen for those events and override your settings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
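If the layout exists as a setxkbmap configuration rather than a compiled keymap, the same handler can apply it per device, since setxkbmap also accepts a device ID; a sketch with an assumed us layout (substitute your own layout and options):
setxkbmap -device "$id" us
xkbcomp -i remains the right call when, as in the question, the layout only exists as an already compiled keymap file.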
523,638 | I have a file which contains firewall log like this: Feb 3 0:18:51 17.1.1.1 id=firewall sn=qasasdasd "time=""2018-02-03" 22:47:55 "UTC""" fw=111.111.111.111 pri=6 c=2644 m=88 "msg=""Connection" "Opened""" app=2 n=2437 src=12.1.1.11:49894:X0 dst=4.2.2.2:53:X1 dstMac=42:16:1b:af:8e:e1 proto=udp/dns sent=83 "rule=""5" "(LAN->WAN)""" I need to get an output which should be like this: src=ipaddress:port , dst=ipaddress:port , proto=udp/dns Specifically, for the above input, src=12.1.1.11:49894,dst=4.2.2.2:53,proto=udp/dns I tried cat logfile.txt | awk '{ print $18" "$19" "$21 }' but result seems to different from what I expect. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356831/"
]
} |
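One possible way to pull those three fields out of such log lines, sketched with awk under the assumption that each record consists of space-separated key=value pairs and that the trailing :X0 / :X1 interface suffixes on src and dst should be dropped:
$ awk '{for (i = 1; i <= NF; i++) {if ($i ~ /^src=/) {sub(/:[^:]*$/, "", $i); s = $i} else if ($i ~ /^dst=/) {sub(/:[^:]*$/, "", $i); d = $i} else if ($i ~ /^proto=/) p = $i} print s "," d "," p}' logfile.txt
src=12.1.1.11:49894,dst=4.2.2.2:53,proto=udp/dns
Matching the key names instead of counting columns (as $18 or $21 would) keeps this working even when the number of quoted fields varies from line to line.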
523,646 | I have a file which contains lines as proto=tcp/http sent=144 rcvd=52 spkt=3 proto=tcp/https sent=145 rcvd=52 spkt=3proto=udp/dns sent=144 rcvd=52 spkt=3 I need to extract the value of proto which is tcp/http , tcp/https , udp/dns . So far I have tried this grep -o 'proto=[^/]*/' but only able to extract the value as proto=tcp/ . | With grep -o , you will have to match exactly what you want to extract. Since you don't want to extract the proto= string, you should not match it. An extended regular expression that would match either tcp or udp followed by a slash and some non-empty alphanumeric string is (tcp|udp)/[[:alnum:]]+ Applying this on your data: $ grep -E -o '(tcp|udp)/[[:alnum:]]+' filetcp/httptcp/httpsudp/dns To make sure that we only do this on lines that start with the string proto= : grep '^proto=' file | grep -E -o '(tcp|udp)/[[:alnum:]]+' With sed , removing everything before the first = and after the first blank character: $ sed 's/^[^=]*=//; s/[[:blank:]].*//' filetcp/httptcp/httpsudp/dns To make sure that we only do this on lines that start with the string proto= , you could insert the same pre-processing step with grep as above, or you could use sed -n '/^proto=/{ s/^[^=]*=//; s/[[:blank:]].*//; p; }' file Here, we suppress the default output with the -n option, and then we trigger the substitutions and an explicit print of the line only if the line matches ^proto= . With awk , using the default field separator, and then splitting the first field on = and printing the second bit of it: $ awk '{ split($1, a, "="); print a[2] }' filetcp/httptcp/httpsudp/dns To make sure that we only do this on lines that start with the string proto= , you could insert the same pre-processing step with grep as above, or you could use awk '/^proto=/ { split($1, a, "="); print a[2] }' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523646",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356831/"
]
} |
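Yet another variant, with cut, restricted to the proto= lines and assuming (as in the sample data) that the value itself never contains = or spaces:
$ grep '^proto=' file | cut -d= -f2 | cut -d' ' -f1
tcp/http
tcp/https
udp/dns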
523,763 | function mv1 { mv -n "$1" "targetdir" -v |wc -l ;}mv1 *.png It only moves the first .png file it finds, not all of them. How can I make the command apply to all files that match the wildcards? | mv1 *.png first expands the wildcard pattern *.png into the list of matching file names, then passes that list of file names to the function. Then, inside the function $1 means: take the first argument to the function, split it where it contains whitespace, and replace any of the whitespace-separated parts that contain wildcard characters and match at least one file name by the list of matching file names. Sounds complicated? It is, and this behavior is only occasionally useful and is often problematic. This splitting and matching behavior only occurs if $1 occurs outside of double quotes, so the fix is easy: use double quotes. Always put double quotes around variable substitutions unless you have a good reason not to. For example, if the current directory contains the two files A* algorithm.png and graph1.png , then mv1 *.png passes A* algorithm.png as the first argument to the function and graph1.png as the second argument. Then $1 is split into A* and algorithm.png . The pattern A* matches A* algorithm.png , and algorithm.png doesn't contain wildcard characters. So the function ends up running mv with the arguments -n , A* algorithm.png , algorithm.png , targetdir and -v . If you correct the function to function mv1 { mv -n "$1" "targetdir" -v |wc -l ;} then it will correctly move the first file. To process all the arguments, tell the shell to process all arguments and not just the first. You can use "$@" to mean the full list of arguments passed to the function. function mv1 { mv -n "$@" "targetdir" -v |wc -l ;} This is almost correct, but it still fails if a file name happens to begin with the character - , because mv will treat that argument as an option. Pass -- to mv to tell it “no more options after this point”. This is a very common convention that most commands support. function mv1 { mv -n -v -- "$@" "targetdir" |wc -l ;} A remaining problem is that if mv fails, this function returns a success status, because the exit status of commands on the left-hand side of a pipe is ignored. In bash (or ksh), you can use set -o pipefail to make the pipeline fail. Note that setting this option may cause other code running in the same shell to fail, so you should set it locally in the function, which is possible since bash 4.4. function mv1 { local - set -o pipefail mv -n -v -- "$@" "targetdir" | wc -l} In earlier versions, setting pipefail would be fragile, so it would be better to check PIPESTATUS explicitly instead. function mv1 { mv -n -v -- "$@" "targetdir" | wc -l ((!${PIPESTATUS[0]} && !${PIPESTATUS[1]}))} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270469/"
]
} |
523,855 | GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu) Idea is to set a variable to a NUL delimited data set. Here $samples This, however, result in: warning: command substitution: ignored null byte in input when doing: samples="$(find . -type d -iregex './sample[0-9][0-9]' -printf "%f\0" | sort -z)" Thought I could re-use this variable as I need to iterate the same values multiple times: while IFS= read -rd '' sample; do echo $sampledone<<< "$samples" I could use \n over \0 in the find command in this exact case, but would like to know how, if possible, to do it with NUL delimiter generally speaking. Optionally I could do: while IFS= read -rd '' sample; do echo $sampledone< <(find . -type d -iregex './E[0-9][0-9]' -printf "%f\0" | sort -z) but - as I need to loop it several times it makes for some very redundant code - and would have to run the find and sort command each time. Convert the result into an array perhaps? Is this possible? Why can not NUL delimited data be used as is? | It is a fact that you can't store \0 null bytes in a bash string context, because of the underlying C implementation. See Why $'\0' or $'\x0' is an empty string? Should be the null-character, isn't it? . One option would be strip off the null bytes after the sort command, at the end of the pipeline using tr and store the result to solve the immediate problem of the warning message thrown. But that would still leave your logic flawed as the filenames with newlines would still be broken. Use an array, use the mapfile or readarray command (on bash 4.4+) to directly slurp in the results from the find command IFS= readarray -t -d '' samples < <(find . -type d -iregex './sample[0-9][0-9]' -printf "%f\0" | sort -z) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140633/"
]
} |
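Once the names are in the array, iterating them as many times as needed costs nothing and does not re-run find; a small sketch:
for sample in "${samples[@]}"; do
    printf '%s\n' "$sample"
done
printf 'total: %d\n' "${#samples[@]}"
Quoting "${samples[@]}" keeps every element intact even when a directory name contains spaces or newlines, which is the whole point of the NUL-delimited pipeline.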
523,929 | I am trying to combine a few programs like so (please ignore any extra includes, this is heavy work-in-progress): pv -q -l -L 1 < input.csv | ./repeat <(nc "host" 1234) Where the source of the repeat program looks as follows: #include <fcntl.h>#include <stdint.h>#include <stdio.h>#include <stdlib.h>#include <string.h>#include <sys/epoll.h>#include <sys/stat.h>#include <sys/types.h>#include <unistd.h>#include <iostream>#include <string>inline std::string readline(int fd, const size_t len, const char delim = '\n'){ std::string result; char c = 0; for(size_t i=0; i < len; i++) { const int read_result = read(fd, &c, sizeof(c)); if(read_result != sizeof(c)) break; else { result += c; if(c == delim) break; } } return result;}int main(int argc, char ** argv){ constexpr int max_events = 10; const int fd_stdin = fileno(stdin); if (fd_stdin < 0) { std::cerr << "#Failed to setup standard input" << std::endl; return -1; } /* General poll setup */ int epoll_fd = epoll_create1(0); if(epoll_fd == -1) perror("epoll_create1: "); { struct epoll_event event; event.events = EPOLLIN; event.data.fd = fd_stdin; const int result = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd_stdin, &event); if(result == -1) std::cerr << "epoll_ctl add for fd " << fd_stdin << " failed: " << strerror(errno) << std::endl; } if (argc > 1) { for (int i = 1; i < argc; i++) { const char * filename = argv[i]; const int fd = open(filename, O_RDONLY); if (fd < 0) std::cerr << "#Error opening file " << filename << ": error #" << errno << ": " << strerror(errno) << std::endl; else { struct epoll_event event; event.events = EPOLLIN; event.data.fd = fd; const int result = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &event); if(result == -1) std::cerr << "epoll_ctl add for fd " << fd << "(" << filename << ") failed: " << strerror(errno) << std::endl; else std::cerr << "Added fd " << fd << " (" << filename << ") to epoll!" << std::endl; } } } struct epoll_event events[max_events]; while(int event_count = epoll_wait(epoll_fd, events, max_events, -1)) { for (int i = 0; i < event_count; i++) { const std::string line = readline(events[i].data.fd, 512); if(line.length() > 0) std::cout << line << std::endl; } } return 0;} I noticed this: When I just use the pipe to ./repeat , everything works as intended. When I just use the process substitution, everything works as intended. When I encapsulate pv using process substitution, everything works as intended. However, when I use the specific construction, I appear to lose data (individual characters) from stdin! I have tried the following: I have tried to disable buffering on the pipe between pv and ./repeat using stdbuf -i0 -o0 -e0 on all processes, but that doesn't seem to work. I have swapped epoll for poll, doesn't work. When I look at the stream between pv and ./repeat with tee stream.csv , this looks correct. I used strace to see what was going on, and I see lots of single-byte reads (as expected) and they also show that data is going missing. I wonder what is going on? Or what I can do to investigate further? | Because the nc command inside <(...) will also read from stdin. Simpler example: $ nc -l 9999 >/tmp/foo &[1] 5659$ echo text | cat <(nc -N localhost 9999) -[1]+ Done nc -l 9999 > /tmp/foo Where did the text go? Through the netcat. $ cat /tmp/footext Your program and nc compete for the same stdin, and nc gets some of it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/523929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357079/"
]
} |
523,953 | I am trying to make sure when a script is run it is run as a specific user without having to su to that user before the script is run. Also the script is run with a couple of flags for example ./myscript.sh -e dev -v 1.9 I have tried the following [ `whoami` = myuser ] || exec sudo -S su - myuser -c "bash `pwd`/`basename $0` $@" But the -v flag which is supposed to be an input to my script is being fed as input to su. So it complains of an invalid option, is there a way to correct the above? NB: The person running the script has sudo privileges. | The current user is already in the variable $USER . So all you need is: [ "$USER" = "myuser" ] || sudo -u myuser $0 "$@" There is no need for sudo su , sudo can do everything you require. You also don't need pwd or basename , the $0 variable already has the full path to the script. Your original command was starting a login shell ( su - ). If that's really needed (which seems strange), you can do: [ "$USER" = "myuser" ] || sudo -iu myuser $0 "$@" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/353543/"
]
} |
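One detail to keep in mind with the || form: after the sudo-launched copy finishes, the original process continues past that line as the calling user. If nothing below that point should ever run as the wrong user, re-invoking through exec avoids it; a sketch:
if [ "$USER" != "myuser" ]; then
    exec sudo -u myuser "$0" "$@"
fi
exec replaces the current script with the sudo invocation, so the remainder of the file is only ever executed by the re-launched copy running as myuser.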
523,967 | Very often, I need to quickly type some lines of characters (for instance for simplifying a mathematical expression) and then discard the whole stuff. Of course, I generally have several running sessions of my text editor, and I can do it there. I have also been using the following system; I type echo " in a terminal (running bash) and then type anything; when I am done, I type a closing " . Yesterday, I decided to add something more elegant in my .bashrc and I created the following alias: # write junk on the terminal (stop with a line containing only "." )# alias scratch="awk '/^\.$/{exit}'"# alias scratch="sed -ne '/^\.$/q'"alias scratch='grep -q "^\.$"' You can see above three variants doing the same thing: I type scratch , then type whatever I want to type, and I end with a line containing only . (dot). I am very happy with that, but I also thought at the following variant: alias scratch="cat - > /dev/null << ." which does the same thing with an initial prompt > at the beginning of each line. Then, I wondered how I could use the PS2 environment variable for changing this prompt, and I couldn't get anything fully working. Of course, the current PS2 prompt should be restored when I am done. How can I set the prompt for this variant of my scratch alias to any arbitrary string during the time of the "scratch" session only? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66810/"
]
} |
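A sketch of one way to get a disposable prompt without touching PS2 at all, using read -p in bash instead of the here-document trick from the question: a function that prompts for each line, throws it away, and stops on a lone dot:
scratch() {
    local line
    while IFS= read -r -p 'scratch> ' line && [ "$line" != . ]; do
        :    # discard the line
    done
}
Because the prompt string is passed to read -p, nothing global is changed and there is nothing to restore when the scratch session ends; the prompt text 'scratch> ' is of course arbitrary.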
523,989 | In Linux (I am using CentOS 7), there is a built in functionality to view all runnable commands. The command is run by pressing tab twice in the console followed by the prompt: Display all 1130 possibilities? (y or n) Pressing y outputs a huge list of commands to the console. Is there a way to capture this output in a file? Or is this list already stored locally? If so, how can I access this? | The solution I chose was to run the command: $ compgen -A function -abck | sort -u >> cmds.txt which appends all runnable commands, functions and aliases to a text file cmds.txt Taken from: https://stackoverflow.com/questions/948008/linux-command-to-list-all-available-commands-and-aliases Edit: added sort -u to command to remove duplicates as suggested by glenn jackman | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/523989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357125/"
]
} |
524,018 | I am a programmer looking to gain expert experience into the workings of the Linux operating system. I have gone through many tutorials and materials on the basic workings of operating systems and have even had a pass at the source for the xv6 operating system. I have an old laptop/notebook, which I will like to set up to go through all the examples in the free eBook "Linux device drivers". The computer in question has the following specifications: PROCESSOR: Intel(R) Atom(TM) CPU N280 @1.66Ghz 1.67Ghz MEMORY: 1GB TYPE: 32 bit I am looking to wipe the hard disk clean and have Linux running as the only operating system on the computer. Also, reading Chapter 2 of the above mentioned eBook, it talks about having a kernel source tree in place to run the examples. I will appreciate if someone could explain how this will be used in the context of experimenting with the tutorials. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342078/"
]
} |
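As far as the kernel source tree is concerned, the examples in that book are out-of-tree modules: their Makefiles hand your directory over to the kernel's own build system, which is why a configured tree (or the distribution's matching headers package) has to be present. A sketch of the usual shape on a Debian-family install (package names differ between distributions):
$ sudo apt-get install build-essential linux-headers-$(uname -r)
$ make -C /lib/modules/$(uname -r)/build M=$PWD modules
The make -C ... M=$PWD invocation is the step Chapter 2 of the book is preparing you for; note that the book targets older kernels, so some examples need adjusting on a current one.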
524,048 | If I have a file with contents similar to: FirstSection Unique first line in first section Unique second line in first sectionSecondSection Unique first line in second section Unique second line in second section...NthSection Unique first line in Nth section Unique second line in Nth section Is it possible to use unix commands (e.g. sort, awk) to sort the file alphabetically by the first non-indented line in each three line group, whilst keeping the indented lines under their existing group? | Using Perl you could run something along the lines of: slurp the file ( perl -0n ) split the input by not indented lines split(/^(?=\S)/m) sort and print perl -0ne 'print sort split(/^(?=\S)/m) ' ex | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161907/"
]
} |
524,053 | I'm making a very basic Debian repo for a client. Started with a bash script, but decided a Makefile would eliminate duplicated work. I've hammered out a working Makefile, but it works only if I make all . For some reason, a straight make only builds the first deb. make clean does nothing. What am I doing wrong? MFGR:=MyCoolEmployerMAJOR:=1MINOR:=0REVISION:=1000VERSION:=$(MAJOR).$(MINOR)-$(REVISION)MODELS:=SpiffyModel1 SpiffyModel2COMMON:=commonSOFT_TARGETS:=$(COMMON) $(MODELS)SOFT_TARGET_FOLDERS:=$(patsubst %, $(MFGR)-%_$(VERSION), $(SOFT_TARGETS))DEBs := $(patsubst %, %.deb, $(SOFT_TARGET_FOLDERS))REPO_DEBs :=$(patsubst %, Repo/binary/%, $(DEBs))REPO=repo.zipPACKAGES_GZ := Repo/binary/Packages.gz%.deb: % dpkg-deb --build $<$(REPO_DEBs): $(DEBs) cp $^ Repo/binary/$(PACKAGES_GZ): $(REPO_DEBs) dpkg-scanpackages Repo/binary /dev/null | gzip -9c > $@$(REPO): $(REPO_DEBs) $(PACKAGES_GZ) cd Repo; zip -r -v -0 ../$@ . ; cd ..TARGETS: $(REPO) $(PACKAGES_GZ) $(REPO_DEBs) $(DEBs)all: TARGETSclean: rm -f $(TARGETS) This assumes the packages exist as folders named MyCoolEmployer- PACKAGENAME _1.0-1000 | Your question contains its own answer. By default, make processes only the first entry in the Makefile . You need to put the "all" entry first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357172/"
]
} |
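With GNU make there is also a way to keep the existing ordering and still get the right default; a one-line sketch near the top of the Makefile:
.DEFAULT_GOAL := all
Either moving the all rule to the top, as suggested, or setting .DEFAULT_GOAL fixes a plain make; aggregate rules such as all and TARGETS are usually also declared .PHONY so they are rebuilt regardless of whether a file by that name exists.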
524,059 | Let's say I have a URL like so: https://mywebsite.com/files/myfolder How do I download all the files in the myfolder part (which is a directory), excluding files like index files, into a directory with the same name? I.e. I want to end up with a directory on my computer called myfolder with all of the contents of myfolder on the server . Ideally I'd like to not have to specify the directory name (on the client) and just have wget do its thing and copy it from the server. How do I do this? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357179/"
]
} |
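A sketch of the usual wget recipe for this, assuming the server exposes browsable directory index pages under https://mywebsite.com/files/myfolder/ (without some form of index, wget has nothing to recurse over):
$ wget -r -np -nH --cut-dirs=1 -R 'index.html*' https://mywebsite.com/files/myfolder/
Here -r recurses, -np refuses to climb above myfolder, -nH drops the hostname directory, --cut-dirs=1 strips the leading files/ component so everything lands in ./myfolder, and -R 'index.html*' throws away the index pages themselves after they have been used for recursion.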
524,073 | allHexChars.txt \x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff allowedChars.txt \x01\x02\x03\x04\x05\x06\x07\x08\x09\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3b\x3c\x3d\x3e\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f How can I get this output? \x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff I've tried diff, vimdiff, sdiff, perl, awk, sed. I've tried echoing the contents of both files into one, and running the below: perl -ne 'print unless $seen{$_}++' everything.txtawk '!seen[$0]++' everything.txt But nothing seems to give me the output I need. Not sure if I'm just minterpreting, or if I need to specify the \x as a delimiter, or replace it with something else. All I want is the delta between the two files: the hex characters that are in allHexChars.txt that don't exist in allowedChars.txt. I don't mind how | sed -r 'H;$!d;x;s:\n::g;:l;s:(\\x..)(.*)\1:\2:;tl' allHexChars.txt allowedChars.txt > missingChars.txt The above GNU sed script assumes two things as I understood them from the question: inside the files no hex character is listed more than one time the first file contains all the hex characters from the second file To visualize the differences, use: diff -y <(fold -4 allHexChars.txt) <(fold -4 allowedChars.txt) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524073",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/280736/"
]
} |
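Since every escape in these files is exactly four characters wide, another way to get the missing ones in their original order is to break both files into one escape per line and let grep subtract one set from the other; a sketch relying on bash process substitution:
$ grep -vxFf <(fold -w4 allowedChars.txt) <(fold -w4 allHexChars.txt) | tr -d '\n'
-F treats every chunk from allowedChars.txt as a fixed string, -x demands whole-line matches, and -v keeps only the chunks of allHexChars.txt that never occur there; tr glues the survivors back into a single line.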
524,245 | I have these lines in multiple files: <update>2013-02-10</update><version>1.15</version> and I want to replace them with (new date and version): <update>2019-06-30</update><version>1.28</version> How can do this on multiple files using sed or awk? (I'm on Mac OS) edit 1: lines <update> and <version> are not one after another and I want to replace every occurrence of them. edit 2: date varies but the string <update> doesn't get changed, so I can't use find "2013-02-10" and replace with "new date" | sed -i backup -E '/<update>/s/[0-9]{4}-[0-9]{2}-[0-9]{2}/2019-06-30/;/<version>/s/[0-9]\.[0-9]{1,2}/1.28/' * -i backup means to edit the files at their place, but keep a backup file with extension backup . You can delete them if the command did what you expected. If it did something else you'll be happy to have the backups! -E is for extended regular expressions. Makes the script more readable because you don't need to escape the {} For each line with pattern /<update>/ do s/[0-9]{4}-[0-9]{2}-[0-9]{2}/2019-06-30/ which is replacing a date string ####-##-## with the given date For each line with pattern /<version>/ do s/[0-9]\.[0-9]{1,2}/1.28/ which is replacing the version string #.# or #.## with the given version | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102398/"
]
} |
524,254 | In my copy of the conda.sh script, I see the following lines: if [ -n "${_CE_CONDA}" ] && [ -n "${WINDIR+x}" ]; then SYSP=$(\dirname "${CONDA_EXE}")else SYSP=$(\dirname "${CONDA_EXE}") SYSP=$(\dirname "${SYSP}")fi I am curious as to why there is a backslash in front the the d in dirname . I do not believe it is necessary. This use of backslashes also appears in other places in the source file. Is there a reason for doing this that I am missing? | Backslash will suppress alias expansion, ie it executes the original command and makes sure that alias version does not run. Scripts can unknowingly run with alias expansion when the system has set shopt -s expand_aliases (BASH only) or if it is executed using source . ./conda.sh # usually no alias expansion (unless `shopt -s expand_aliases` in BASH)source ./conda.sh # alias expansion. ./conda.sh # alias expansion Some sysadmins like to put backslash in everything as a preventive measure against side-effects of aliases, just in case it was aliased unintentionally somewhere else and the alias gets expanded as explained previously. For example, if the system has set this alias dirname='dirname -z' somewhere and the condition allows the alias to be expanded, then a script that tries to call dirname will unfortunately call dirname -z instead, which was not the script intended. If there's certainty that such alias do not exist, we can remove all the backslash and it should work fine. Alternatively, one can use command instead of backslash version to suppress alias. Thus, instead of \dirname , one can use command dirname , which might look more readable. (For built-in commands like cd , one should use builtin instead). I prefer this instead, as it also bypasses function with same name as well as any aliases. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/524254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209133/"
]
} |
524,321 | I am currently looking for an alternative to the following code that works a little less 'wonky'. printf "Please enter the ticket number:\t"; read vTICKET; vTICKET=$(printf %04d "$vTICKET"); printf "$vTICKET\n"; If I input 072 as the input, this is what I see Please enter the ticket number: 0720058 I am wondering if there is another way I can be a little more forgiving on the input or with the read command? printf seemed like the cool way to add leading zeroes without actually testing string length. | The leading zeros on the input value are causing the shell to interpret it as an octal number. You can force decimal conversion using 10# e.g. $ printf "Please enter the ticket number:\t"; read vTICKET; vTICKET=$(printf %04d "$((10#$vTICKET))" ); printf "$vTICKET\n";Please enter the ticket number: 0720072 Note that in bash, you can assign the results of a printf to a variable directly using -v e.g. printf -v vTICKET %04d "$((10#$vTICKET))" See also How do I stop Bash from interpreting octal code instead of integer? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356307/"
]
} |
524,506 | I'm trying to write a function to replace the functionality of the exit builtin to prevent myself from exiting the terminal. I have attempted to use the SHLVL environment variable but it doesn't seem to change within subshells: $ echo $SHLVL1$ ( echo $SHLVL )1$ bash -c 'echo $SHLVL'2 My function is as follows: exit () { if [[ $SHLVL -eq 1 ]]; then printf '%s\n' "Nice try!" >&2 else command exit fi} This won't allow me to use exit within subshells though: $ exitNice try!$ (exit)Nice try! What is a good method to detect whether or not I am in a subshell? | In bash, you can compare $BASHPID to $$ $ ( if [ "$$" -eq "$BASHPID" ]; then echo not subshell; else echo subshell; fi )subshell$ if [ "$$" -eq "$BASHPID" ]; then echo not subshell; else echo subshell; finot subshell If you're not in bash, $$ should remain the same in a subshell, so you'd need some other way of getting your actual process ID. One way to get your actual pid is sh -c 'echo $PPID' . If you just put that in a plain ( … ) it may appear not to work, as your shell has optimized away the fork. Try extra no-op commands ( : ; sh -c 'echo $PPID'; : ) to make it think the subshell is too complicated to optimize away. Credit goes to John1024 on Stack Overflow for that approach. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/524506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237982/"
]
} |
524,552 | I have upgraded my system (CentOS 7) to Python 3.7 and it seems that has broken a ton of things. In particular, I can't perform a yum upgrade ... [myuser@server ~]$ sudo yum upgradeLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirror.us-midwest-1.nexcess.net * epel: mirror.layeronline.com * extras: mirror.us-midwest-1.nexcess.net * updates: mirror.us-midwest-1.nexcess.net File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntaxExiting on user cancel Is there any way I can heal the pain here? | NOTE: In case someone still needs it. Not MINE link at the end If this is what you see on yum install <package-name> (base) [root@localhost rstudio]# yum install shiny-server-1.5.9.923-x86_64.rpm File "/usr/bin/yum", line 30 except KeyboardInterrupt, e: ^SyntaxError: invalid syntax Cause AnalysisBecause yum supports python2 by default, when you upgrade to python3, you get an error.If you can enter python2 by building python2 (base) [root@localhost rstudio]# python2Python 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2Type "help", "copyright", "credits" or "license" for more information. Then you can modify the yum code python to python to implement. lets solve it.... vi /usr/bin/yum Change #!/usr/bin/python on the first line to #!/usr/bin/python2. #!/usr/bin/python2import systry: import yumexcept ImportError: print >> sys.stderr, """\There was a problem importing one of the Python modulesrequired to run yum. The error leading to this problem was: problem solved!! postscript Found that yum no matter what software is installed, is an error, the type is as follows: base) [root@localhost ~]# yum install yum-fastestmirrorLoaded plugins: fastestmirror, langpacksLoading mirror speeds from cached hostfile * base: mirrors.tuna.tsinghua.edu.cn * extras: mirrors.huaweicloud.com * updates: mirror.jdcloud.com File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax File "/usr/libexec/urlgrabber-ext-down", line 28 except OSError, e: ^SyntaxError: invalid syntax Solution 1, enter the edit urlgrabber-ext-down2, change python to python2#vi /usr/libexec/urlgrabber-ext-down#!/usr/bin/python >--Replace with -->#!/usr/bin/python2 P.S. Copied, almost to the word, from Solution | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
524,606 | I want to match: aaa aaa,bbb aaa,bbb,ccc But not a list with a trailing comma. My current regex: (\w{3},?)+ also matches lists with trailing commas ( aaa,bbb, ). I was thinking I could also do : (\w{3})(,\w{3})* but that is rather ugly. My real regex is not matching 3-letter-words, but something bigger, and repeating the regex is ugly. How can this be fixed? | You can give a name to your big regex in PCRE like: (?<big>[a-zA-Z0-9]+) Everything after the ?<name> will be recorded with the name given. Called Regular Expression Subroutines So, repeats ( ?&name ) become easy: ^(?<big>[a-zA-Z0-9]+)(,(?&big))*$ Test it Online So, matching an IP, for example, becomes simpler: ^(?<ip>25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(.(?&ip)){3}$ Test it OnLine . Use it with grep as: grep -P '^(?<ip>25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(.(?&ip)){3}$' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5352/"
]
} |
524,708 | As the title says, every time I try to connect (via RDP), Remmina only connects once (and it works for that remainder of the session), but any subsequent attempts of connecting to another or even the same PC, fails. I need to restart Remmina, after which it works again. What could be the reason for that? Starting remmina from console I can see the following output. The first time the connection works, and everything is good. The second time it fails. Only restarting remmina allows me to reconnect again. StatusNotifier/Appindicator support: not supported by desktop. Remmina will try to fallback to GtkStatusIcon/xembed(org.remmina.Remmina:11362): Gtk-WARNING **: 12:05:52.660: gtk_menu_attach_to_widget(): menu already attached to GtkMenuItem(org.remmina.Remmina:11362): Gtk-CRITICAL **: 12:05:56.558: gtk_window_resize: assertion 'width > 0' failed[12:05:56:021] [11362:11367] [INFO][com.freerdp.client.common.cmdline] - loading channelEx rdpdr[12:05:56:021] [11362:11367] [INFO][com.freerdp.client.common.cmdline] - loading channelEx rdpsnd[12:05:56:021] [11362:11367] [INFO][com.freerdp.client.common.cmdline] - loading channelEx cliprdr[12:05:56:021] [11362:11367] [INFO][com.freerdp.client.common.cmdline] - loading channelEx drdynvc[12:05:57:335] [11362:11367] [ERROR][com.winpr.sspi.Kerberos] - error while getting credentials[12:05:57:335] [11362:11367] [ERROR][com.winpr.sspi.Kerberos] - Kerberos credentials not found and could not be acquired[12:05:57:335] [11362:11367] [WARN][com.winpr.negotiate] - No Kerberos credentials. Retry with NTLM[12:05:57:335] [11362:11367] [WARN][com.winpr.sspi] - InitializeSecurityContextA status SEC_E_NO_CREDENTIALS [0x8009030E][12:05:57:844] [11362:11367] [INFO][com.freerdp.gdi] - Local framebuffer format PIXEL_FORMAT_BGRA32[12:05:57:844] [11362:11367] [INFO][com.freerdp.gdi] - Remote framebuffer format PIXEL_FORMAT_RGB16[12:05:57:845] [11362:11367] [INFO][com.freerdp.channels.drdynvc.client] - Loading Dynamic Virtual Channel disp[12:05:57:847] [11362:11386] [INFO][com.freerdp.channels.rdpsnd.client] - Loaded pulse backend for rdpsnd(org.remmina.Remmina:11362): Gtk-CRITICAL **: 12:06:02.825: gtk_window_resize: assertion 'width > 0' failed[12:06:02:287] [11362:11391] [INFO][com.freerdp.client.common.cmdline] - loading channelEx rdpdr[12:06:02:287] [11362:11391] [INFO][com.freerdp.client.common.cmdline] - loading channelEx rdpsnd[12:06:02:287] [11362:11391] [INFO][com.freerdp.client.common.cmdline] - loading channelEx cliprdr[12:06:02:287] [11362:11391] [INFO][com.freerdp.client.common.cmdline] - loading channelEx drdynvc[12:06:03:434] [11362:11391] [ERROR][com.winpr.sspi] - EncryptMessage status SEC_E_INVALID_TOKEN [0x80090308][12:06:03:434] [11362:11391] [ERROR][com.freerdp.core.nla] - EncryptMessage status SEC_E_INVALID_TOKEN [0x80090308][12:06:03:434] [11362:11391] [ERROR][com.freerdp.core.rdp] - rdp_recv_callback: CONNECTION_STATE_NLA - nla_recv_pdu() fail[12:06:03:434] [11362:11391] [ERROR][com.freerdp.core.transport] - transport_check_fds: transport->ReceiveCallback() - -1[12:06:03:434] [11362:11391] [ERROR][com.freerdp.core] - freerdp_set_last_error ERRCONNECT_CONNECT_TRANSPORT_FAILED [0x0002000D][12:06:03:601] [11362:11391] [ERROR][com.winpr.sspi] - EncryptMessage status SEC_E_INVALID_TOKEN [0x80090308][12:06:03:601] [11362:11391] [ERROR][com.freerdp.core.nla] - EncryptMessage status SEC_E_INVALID_TOKEN [0x80090308][12:06:03:601] [11362:11391] [ERROR][com.freerdp.core.rdp] - rdp_recv_callback: CONNECTION_STATE_NLA - nla_recv_pdu() fail[12:06:03:601] 
[11362:11391] [ERROR][com.freerdp.core.transport] - transport_check_fds: transport->ReceiveCallback() - -1[12:06:03:601] [11362:11391] [ERROR][com.freerdp.core] - freerdp_set_last_error ERRCONNECT_CONNECT_TRANSPORT_FAILED [0x0002000D][12:06:03:601] [11362:11391] [ERROR][com.freerdp.core] - freerdp_post_connect failedlibfreerdp returned code is 0002000D I should note that I'm on Arch, with all the latest updates and I'm using Remmina 1.3.4. I've found another person with the same problem, but this is more than a year ago, and the recommended solution (downgrading) seems impractical, since it did work up to a week ago or so. | This is a bug in freerdp; the only solution is either to downgrade or to install freerdp 2.0, which is not yet available as a regular package. You can install it on Arch using the freerdp-git AUR package. It solved the issue for me | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337646/"
]
} |
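The answer above only names the package; a minimal sketch of actually installing freerdp-git from the AUR on Arch follows. The yay helper is an assumption (any AUR helper, or a manual makepkg, works); only the package name comes from the answer.
$ yay -S freerdp-git
or, without a helper:
$ git clone https://aur.archlinux.org/freerdp-git.git
$ cd freerdp-git
$ makepkg -si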
524,760 | I'm basically looking for files then sorting by the size. The script works if I don't sort the size by human readable. But I want the size to be human readable. How can I sort sizes that are human readable? For example: ls -l | sort -k 5 -n | awk '{print $9 " " $5}' This works as expected, I got the size of my files in bytes ascending: 1.txt 1test.txt 3bash.sh* 573DocGeneration.txt 1131andres_stuff.txt 1465Branches.xlsx 15087foo 23735bar 605662016_stuff.pdf 996850 Now, I want the size to be human readable, so I added an -h parameter to ls, and now some files are out of order: ls -lh | sort -k 5 -n | awk '{print $9 " " $5}' 1.txt 1DocGeneration.txt 1.2Kandres_stuff.txt 1.5Ktest.txt 3Branches.xlsx 15Kfoo 24Kbar 60Kbash.sh* 5732016_stuff.pdf 974K | Try sort -h -k2 . From the manual: -h, --human-numeric-sort: compare human readable numbers (e.g., 2K 1G). It is part of GNU sort, BSD sort, and others. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/524760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357731/"
]
} |
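Applied to the pipeline in the question above, the human-numeric sort would look like this (GNU sort shown; the key stays on the size column that ls -lh prints):
$ ls -lh | sort -k 5 -h | awk '{print $9 " " $5}'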
524,836 | i have this: echo $MSG | sed -e $'s/;/\\\n/g' I want to put the result of that sed in a new variable called $MSG2 Something like: $MSG2=echo $MSG|sed -e $'s/;/\\\n/g' How can i do it? Thank you! | For your task, you don't need pipelines or sed. It can all be done much more efficiently using builtin bash commands like this: NewMsg=${MSG//;/$'\n'} ${MSG//;/$'\n'} is an example of pattern substitution . It replaces every occurrence of ; with a newline character. The result is saved in the shell variable NewMsg . As an example: $ Msg='1;2;3'$ NewMsg=${Msg//;/$'\n'}$ echo "$NewMsg"123 Notes: It is best practice to use lower-case or mixed-case shell variables. The system uses all caps for its variables and you don't want to accidentally overwrite one of them. Unless you explicitly want word splitting and pathname expansion always put your shell variables in double-quotes. Thus, when temped to use echo $MSG , use instead echo "$MSG" . Also, unless you know what characters are going to be in the string that you are echoing, echo has problems and it is safer and more portable to use printf '%s\n' "$MSG" . For more details, see Stéphane Chazelas' very informative discussion of echo vs printf . Be aware that if you do use command substitution, $(...) , the shell will remove all trailing newlines. While this is usually helpful, there are times when the change is unwanted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357806/"
]
} |
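If an external-command variant is still wanted, for example in a plain POSIX sh script where bash pattern substitution is unavailable, a rough equivalent uses tr inside a command substitution; as the answer above notes, the command substitution strips trailing newlines:
msg2=$(printf '%s' "$MSG" | tr ';' '\n')
printf '%s\n' "$msg2"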
524,846 | I am using Arch Linux (5.1.8-arch1-1-ARCH) with the XFCE DE and XFWM4 WM. Things are pretty elegant and low on RAM and CPU usage. After the boot, and when the DE is loaded completely, I see 665 MiB of RAM usage. But after opening applications like Atom, Code, Firefox, Chromium, or after working in GIMP, Blender etc. the RAM usage increases, which is obvious. But after closing all the applications and left with nothing but a gnome-system-monitor, I can see that the RAM usage is 1.2 - 1.4 GiB. /proc/meminfo agrees with gnome-system-monitor, but htop gives different results all the time. The worse thing is that when I open a RAM hogging application later on, it again consumes needed memory on top of that 1.4 GiB. This is always the case. No files that could add up to megabytes are stored in the /tmp/ directory. Also, if I look for the process that's using that much RAM (from 700 MiB at start to 1.4 GiB after closing the browser!!), I see nothing. In fact I faced the same issue even on my raspberry pi running Arch ARM. The Ruby code: #!/usr/bin/ruby -wSTDOUT.sync = trueloop do IO.readlines(File.join(%w(/ proc meminfo))).then { |x| [x[0], x[2]] }.map { |x| x.split[1].to_i }.reduce(:-) .tap { |x| print "\e[2K\rRAM Usage:".ljust(20), "#{x / 1024.0} MiB".ljust(24), "#{(x / 1000.0)} MB" } Kernel.sleep(0.1)end The cat /proc/meminfo command has the following output: MemTotal: 3851796 kBMemFree: 1135680 kBMemAvailable: 2055708 kBBuffers: 1048 kBCached: 1463960 kBSwapCached: 284 kBActive: 1622148 kBInactive: 660952 kBActive(anon): 923580 kBInactive(anon): 269360 kBActive(file): 698568 kBInactive(file): 391592 kBUnevictable: 107012 kBMlocked: 32 kBSwapTotal: 3978216 kBSwapFree: 3966696 kBDirty: 280 kBWriteback: 0 kBAnonPages: 924844 kBMapped: 563732 kBShmem: 374848 kBKReclaimable: 74972 kBSlab: 130016 kBSReclaimable: 74972 kBSUnreclaim: 55044 kBKernelStack: 8000 kBPageTables: 14700 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 5904112 kBCommitted_AS: 3320548 kBVmallocTotal: 34359738367 kBVmallocUsed: 0 kBVmallocChunk: 0 kBPercpu: 1456 kBHardwareCorrupted: 0 kBAnonHugePages: 0 kBShmemHugePages: 0 kBShmemPmdMapped: 0 kBHugePages_Total: 0HugePages_Free: 0HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBHugetlb: 0 kBDirectMap4k: 226736 kBDirectMap2M: 3778560 kBDirectMap1G: 0 kB Firstly you noticed htop never agrees. I don't know much about that. And secondly you can see the xfdesktop uses 44 MiB, and some other processes uses some of the memory, the kernel uses ~150 MiB, and apart from that, why am I seeing 1.5 GiB RAM is being used? Does this really affect the performance of the system? | Unused RAM is wasted RAM. The Linux kernel has advanced memory management features and tries to avoid putting a burden on the bottleneck in your system, your hard drive/SSD. It tries to cache files in memory. The memory management system works in complex ways, better performance is the goal. You can see what it is doing by inspecting /proc/meminfo . cat /proc/meminfo You can reclaim this cached memory, using "drop_caches". However, note the documentation says "use outside of a testing or debugging environment is not recommended", simply because "it may cost a significant amount of I/O and CPU to recreate thedropped objects" when they are needed again :-). 
Clear PageCache only: # sync; echo 1 > /proc/sys/vm/drop_caches Clear dentries and inodes: # sync; echo 2 > /proc/sys/vm/drop_caches Clear PageCache, dentries and inodes: # sync; echo 3 > /proc/sys/vm/drop_caches Note that sync will flush the file system buffer to ensure all data has been written. From the kernel docs : Page cache The physical memory is volatile and the common case for getting data into the memory is to read it from files. Whenever a file is read, the data is put into the page cache to avoid expensive disk access on the subsequent reads. Similarly, when one writes to a file, the data is placed in the page cache and eventually gets into the backing storage device. The written pages are marked as dirty and when Linux decides to reuse them for other purposes, it makes sure to synchronize the file contents on the device with the updated data. Reclaim Throughout the system lifetime, a physical page can be used for storing different types of data. It can be kernel internal data structures, DMA’able buffers for device drivers use, data read from a filesystem, memory allocated by user space processes etc. Depending on the page usage it is treated differently by the Linux memory management. The pages that can be freed at any time, either because they cache the data available elsewhere, for instance, on a hard disk, or because they can be swapped out, again, to the hard disk, are called reclaimable. The most notable categories of the reclaimable pages are page cache and anonymous memory. In most cases, the pages holding internal kernel data and used as DMA buffers cannot be repurposed, and they remain pinned until freed by their user. Such pages are called unreclaimable. However, in certain circumstances, even pages occupied with kernel data structures can be reclaimed. For instance, in-memory caches of filesystem metadata can be re-read from the storage device and therefore it is possible to discard them from the main memory when system is under memory pressure. The process of freeing the reclaimable physical memory pages and repurposing them is called (surprise!) reclaim. Linux can reclaim pages either asynchronously or synchronously, depending on the state of the system. When the system is not loaded, most of the memory is free and allocation requests will be satisfied immediately from the free pages supply. As the load increases, the amount of the free pages goes down and when it reaches a certain threshold (high watermark), an allocation request will awaken the kswapd daemon. It will asynchronously scan memory pages and either just free them if the data they contain is available elsewhere, or evict to the backing storage device (remember those dirty pages?). As memory usage increases even more and reaches another threshold - min watermark - an allocation will trigger direct reclaim. In this case allocation is stalled until enough memory pages are reclaimed to satisfy the request. Memory Leaks Now, some programs can have "memory leaks", that is, they "forget" to free up memory they no longer use. You can see this if you leave a program running for some time, its memory usage constantly increases, when you close it, the memory is never freed. Now, programmers try to avoid memory leaks, of course, but programs can have some. The way to reclaim this memory is a reboot. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/524846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274717/"
]
} |
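A quick way to see this in practice is free: its buff/cache column is largely the reclaimable page cache described above, while the available column estimates what new applications can use without swapping, so that is the number to watch rather than "free".
$ free -h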
524,871 | After adding a new mount point in /etc/fstab , we usually execute mount -a to reflect the change (if we want to bypass reboot), and df -kh output shows the new mount point. How does mount -a work/impact already mounted partitions, which have reference to the /etc/fstab file? Does it umount and then mount those partitions, or just ignore them since they are already mounted? | It skips ones already mounted. https://github.com/karelzak/util-linux/blob/master/sys-utils/mount.c#L185-L193 while (mnt_context_next_mount(cxt, itr, &fs, &mntrc, &ignored) == 0) { const char *tgt = mnt_fs_get_target(fs); if (ignored) { if (mnt_context_is_verbose(cxt)) printf(ignored == 1 ? _("%-25s: ignored\n") : _("%-25s: already mounted\n"), tgt); } // ...} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/524871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130604/"
]
} |
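You can watch this behaviour from the shell: in verbose mode mount prints the skip messages from the code quoted above. A sketch; the mount points below are hypothetical and the exact wording depends on the util-linux version:
$ sudo mount -av
/                     : ignored
/boot                 : already mounted
/data                 : successfully mounted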
524,888 | I want to redirect the output of a command ( diff in this case) to a file but only if there is a difference in files I'm comparing. For example, imagine I have three files a , b , and c where a and b are equivalent but a and c are not. If I do diff a c > output.txt or diff a b > output.txt , regardless of whether there is a difference or not, output.txt will be created. I only want output.txt to be created if there is a diff (i.e, diff returns 1). I'd want to do something like: if ! diff a c > /dev/null; then diff a c > output.txtfi But without running the command twice. I could save the contents of the command like so: res=$(diff a c)if [ $? != 0 ]; then echo "$res" > output.txtfi But then I'm bringing echo into this as a "middle-man", which could potentially raise some issues. How can I redirect output/create a file only if there's output without duplicating code? | You could call the command once, redirect the output, then remove the output if there were no differences: diff a c > output.txt && rm output.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357823/"
]
} |
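A closely related variant keys off whether anything was actually written rather than off diff's exit status: the file is removed only when it is empty ([ -s ] tests for a non-empty file):
diff a c > output.txt
[ -s output.txt ] || rm output.txt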
524,892 | I wrote a simple application in python and compiled it with cython , which generated .so files as shown below: $ ls -l total 2040 -rw-r--r-- 1 groot groot 486 Jun 14 15:50 compile.py -rwxr-xr-x 1 groot groot 349232 Jun 14 17:12 CopyDebugThread.cpython-36m-x86_64-linux-gnu.so -rwxr-xr-x 1 groot groot 491040 Jun 14 17:12 CopyDialog.cpython-36m-x86_64-linux-gnu.so drwxrwxr-x 2 groot groot 4096 Jun 10 21:09 images -rwxr-xr-x 1 groot groot 84224 Jun 14 17:12 Main.cpython-36m-x86_64-linux-gnu.so -rwxr-xr-x 1 groot groot 403424 Jun 14 17:12 MainWindow.cpython-36m-x86_64-linux-gnu.so -rw-r--r-- 1 groot groot 12 Jun 14 17:43 run.py -rwxr-xr-x 1 groot groot 739760 Jun 14 17:13 UiMainWindow.cpython-36m-x86_64-linux-gnu.so How can I run this project as a real application, installed in my Ubuntu 18.04 ? Is it possible ? Or do I import it into another python file, then run the python file? | You could call the command once, redirect the output, then remove the output if there were no differences: diff a c > output.txt && rm output.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/524892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319042/"
]
} |
524,963 | I'm aware of the methods where you can run a Bash for loop and ping multiple servers, is there a Linux CLI tool that I can use which will allow for me to do this without having to resort to writing a Bash script to ping a list of servers one at a time? Something like this: $ ping host1 host2 host3 NOTE: I'm looking specifically for CentOS/Fedora, but if it works on other distros that's fine too. | If you look into the NMAP project you'll find that it includes additional tools on top of just nmap . One of these tools is nping , which includes the following ability: Nping has a very flexible and powerful command-line interface that grants users full control over generated packets. Nping's features include: Custom TCP, UDP, ICMP and ARP packet generation. Support for multiple target host specification. Support for multiple target port specification. ... nping is in the standard EPEL repos to boot. $ repoquery -qlf nmap.x86_64 | grep nping/usr/bin/nping/usr/share/man/man1/nping.1.gz Usage To ping multiple servers you merely have to tell nping the names/IPs and which protocol you want to use. Here since we want to mimic what the traditional ping CLI does we'll use ICMP. $ sudo nping -c 2 --icmp scanme.nmap.org google.comStarting Nping 0.7.70 ( https://nmap.org/nping ) at 2019-06-14 13:43 EDTSENT (0.0088s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=1] IP [ttl=64 id=57921 iplen=28 ]RCVD (0.0950s) ICMP [45.33.32.156 > 10.3.144.95 Echo reply (type=0/code=0) id=42074 seq=1] IP [ttl=46 id=24195 iplen=28 ]SENT (1.0091s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=2] IP [ttl=64 id=57921 iplen=28 ]SENT (2.0105s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=2] IP [ttl=64 id=57921 iplen=28 ]RCVD (2.0107s) ICMP [45.33.32.156 > 10.3.144.95 Echo reply (type=0/code=0) id=42074 seq=2] IP [ttl=46 id=24465 iplen=28 ]SENT (3.0138s) ICMP [10.3.144.95 > 64.233.177.100 Echo request (type=8/code=0) id=49169 seq=2] IP [ttl=64 id=57921 iplen=28 ]Statistics for host scanme.nmap.org (45.33.32.156): | Probes Sent: 2 | Rcvd: 2 | Lost: 0 (0.00%) |_ Max rtt: 86.053ms | Min rtt: 0.188ms | Avg rtt: 43.120msStatistics for host google.com (64.233.177.100): | Probes Sent: 2 | Rcvd: 0 | Lost: 2 (100.00%) |_ Max rtt: N/A | Min rtt: N/A | Avg rtt: N/ARaw packets sent: 4 (112B) | Rcvd: 2 (108B) | Lost: 2 (50.00%)Nping done: 2 IP addresses pinged in 3.01 seconds The only drawback I've found with this tool is the use of ICMP mode requiring root privileges. $ nping -c 2 --icmp scanme.nmap.org google.comMode ICMP requires root privileges. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/524963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
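If running as root for ICMP is a problem, one common workaround (an assumption about your setup, not part of the answer above) is to grant the binary the cap_net_raw file capability on systems that support it; whether nping honours the capability in every mode is worth testing:
$ sudo setcap cap_net_raw+ep "$(command -v nping)"
$ getcap "$(command -v nping)"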
525,013 | I have a purge process that I am running and the command I'm using is : find $sentPurgerFolder -mtime +7 -print -delete >> $sentPurgeLogFile 2>&1 This code is in a while loop that is looping thru clients that have multiple folders of data to purge. The intent is to send all the purge info, regardless of the number of folders for that client to one log-file for that client. And that part seems to work pretty good. My disconnect is I would like to send the same output to a master logfile, however the examples of 'tee' that I have seen give me pause. I do not know how I would integrate that command into my code here without doubling up the log data. Can anyone lend some insight or make a suggestion? | If you look into the NMAP project you'll find that it includes additional tools on top of just nmap . One of these tools is nping , which includes the following ability: Nping has a very flexible and powerful command-line interface that grants users full control over generated packets. Nping's features include: Custom TCP, UDP, ICMP and ARP packet generation. Support for multiple target host specification. Support for multiple target port specification. ... nping is in the standard EPEL repos to boot. $ repoquery -qlf nmap.x86_64 | grep nping/usr/bin/nping/usr/share/man/man1/nping.1.gz Usage To ping multiple servers you merely have to tell nping the names/IPs and which protocol you want to use. Here since we want to mimic what the traditional ping CLI does we'll use ICMP. $ sudo nping -c 2 --icmp scanme.nmap.org google.comStarting Nping 0.7.70 ( https://nmap.org/nping ) at 2019-06-14 13:43 EDTSENT (0.0088s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=1] IP [ttl=64 id=57921 iplen=28 ]RCVD (0.0950s) ICMP [45.33.32.156 > 10.3.144.95 Echo reply (type=0/code=0) id=42074 seq=1] IP [ttl=46 id=24195 iplen=28 ]SENT (1.0091s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=2] IP [ttl=64 id=57921 iplen=28 ]SENT (2.0105s) ICMP [10.3.144.95 > 45.33.32.156 Echo request (type=8/code=0) id=42074 seq=2] IP [ttl=64 id=57921 iplen=28 ]RCVD (2.0107s) ICMP [45.33.32.156 > 10.3.144.95 Echo reply (type=0/code=0) id=42074 seq=2] IP [ttl=46 id=24465 iplen=28 ]SENT (3.0138s) ICMP [10.3.144.95 > 64.233.177.100 Echo request (type=8/code=0) id=49169 seq=2] IP [ttl=64 id=57921 iplen=28 ]Statistics for host scanme.nmap.org (45.33.32.156): | Probes Sent: 2 | Rcvd: 2 | Lost: 0 (0.00%) |_ Max rtt: 86.053ms | Min rtt: 0.188ms | Avg rtt: 43.120msStatistics for host google.com (64.233.177.100): | Probes Sent: 2 | Rcvd: 0 | Lost: 2 (100.00%) |_ Max rtt: N/A | Min rtt: N/A | Avg rtt: N/ARaw packets sent: 4 (112B) | Rcvd: 2 (108B) | Lost: 2 (50.00%)Nping done: 2 IP addresses pinged in 3.01 seconds The only drawback I've found with this tool is the use of ICMP mode requiring root privileges. $ nping -c 2 --icmp scanme.nmap.org google.comMode ICMP requires root privileges. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/525013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355563/"
]
} |
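For the logging question itself, duplicating one stream into a per-client log and a master log is the classic use case for tee -a, which appends to one file while passing the same stream on. A sketch, reusing the variables from the question; masterPurgeLogFile is a hypothetical variable for the master log file:
find "$sentPurgerFolder" -mtime +7 -print -delete 2>&1 | tee -a "$sentPurgeLogFile" >> "$masterPurgeLogFile"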
525,036 | I have experimented with many approaches to setting permanent environment variables on EC2 instances running Amazon Linux 2, but none of the approaches are persistent across users and across login sessions. What specific syntax and process will successfully set environment variables so that the values of the variables will be available to all users and during every session? So far, methods that I have tried include setting values separately in each of the following three files during the USERDATA's launch script for the instance: /etc/csh.cshrc/etc/environment/etc/profile I tried each file one at a time and not all three at the same time. I also tried using Python's os.environ but that did not work either. User Suggestions Per @NasirRiley's suggestion, it now works when I create a setVars.sh file and place it in /etc/profile.d during the instance's USERDATA startup sequence: #!/bin/bash export SOME_VAR_NAME=some-var-value | It's better to set universal variables by creating scripts in /etc/profile.d . You want to create it with an extension of your shell name. For example, if it's bash , it will be called script.sh . /etc/profile.d/script.sh The syntax inside will be: export SOME_VAR_NAME=some-var-value You will need to start a new shell session to add the variable to your environment, which you can do by logging out and back in. It will be added for the other users' environments when they do the same or the next time they log in if they aren't currently logged in. Just a note: you don't actually need the shebang line as it's sourced in according to your shell. I put it in myself sometimes as it's just a force of habit but it doesn't hurt or affect anything. You can leave it out if you'd like. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/525036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
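The same file can be dropped in place non-interactively from the instance USERDATA script mentioned in the question; a minimal sketch, reusing the variable name from the question (the script file name itself is arbitrary):
#!/bin/bash
cat > /etc/profile.d/setVars.sh <<'EOF'
export SOME_VAR_NAME=some-var-value
EOF
chmod 644 /etc/profile.d/setVars.sh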
525,055 | This is an example function: function example { echo "TextBefore $@ TextAfter" ; } This is the command using the function: example A{1..5}B The output: TextBefore A1B A2B A3B A4B A5B TextAfter How I want it to be: TextBefore A1B TextAfterTextBefore A2B TextAfterTextBefore A3B TextAfterTextBefore A4B TextAfterTextBefore A5B TextAfter That's as good as I can describe it. If you understand it and know a better way of describing it, please edit the question. How can I make each [insert word here] in the sequence being executed separately, as shown in that example? | Use printf rather than echo . The printf utility takes a format string as its first argument (which can always be a single quoted string), and this string would contain a placeholder for your other arguments: printf 'TextBefore %s TextAfter\n' "$@" The arguments in "$@" would be inserted in the position given by %s . Since there is only one %s placeholder in the format string, the format string will be reused for each argument in turn. This is different from how printf works in other languages. Note that printf does not output a terminating newline by default. Example: $ printf 'AAA %s BBB\n' 1 2 3 4 5AAA 1 BBBAAA 2 BBBAAA 3 BBBAAA 4 BBBAAA 5 BBB If there are more placeholders in the formatting string, these will be filled in turn by the arguments given to printf . The formatting string will be reused when all placeholders have been filled if there are still more arguments available. $ printf 'AAA %s %03d BBB\n' 1 2 3 4 5AAA 1 002 BBBAAA 3 004 BBBAAA 5 000 BBB Your function may therefore look like example () { [ "$#" -gt 0 ] && printf 'TextBefore %s TextAfter\n' "$@"} See also: Why is printf better than echo? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270469/"
]
} |
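With the brace expansion from the question, the printf-based function then prints one line per argument:
$ example A{1..3}B
TextBefore A1B TextAfter
TextBefore A2B TextAfter
TextBefore A3B TextAfter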
525,071 | As I was checking the /etc/fstab I noticed that the swap space has attr pass=0, which means its filesystem is not checked at boot time. Can anyone please tell me what this behavior is for? | At boot time, swap doesn’t contain any data which would need to be recovered, so there’s no point in writing a tool to repair swap. If a swap partition or file is corrupted in such a way that swapon can’t make use of it, the fix is to mkswap it again — there’s no need for a separate fsck.swap tool, so there isn’t one and /etc/fstab isn’t set up to use one. It might then seem nice for swapon to automatically mkswap if necessary, but that would mean that any mistake in the arguments to swapon would be instantly fatal to the data stored in the given volume or file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/525071",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/351236/"
]
} |
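If a swap area ever is corrupted, re-initialising it is the whole repair; a sketch, with /dev/sdXn standing in for your swap partition. Note that mkswap assigns a new UUID, so a UUID-based fstab entry would need updating afterwards:
# swapoff /dev/sdXn
# mkswap /dev/sdXn
# swapon /dev/sdXn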
525,164 | Write a script which takes a number of seconds as an argument and then holds the session for the period (sleep), after that presents the list of files that were deleted from your home directory. Use `date' to show the current time and date before and after the sleep. #!/bin/bashtime_b=$(date)(sleep 30ls -all $HOME | grep -v "(cat tmp_file.txt )" | while read sdo echo deleted file: $sdoneecho Time before: $time_becho Time after: $(date)) | #!/bin/shsnooze=$1set -- "$HOME"/*date +'Start: %F %T'sleep "$snooze"date +'End: %F %T'for pathname do if [ ! -e "$pathname" ]; then printf 'Deleted from home: %s\n' "${pathname##*/}" fidone This script takes the first command line argument, $1 , and assigns it to the variable snooze . It then gets the names of all files and directories in the home directory (excluding hidden names) and assigns them to the positional parameters ( $1 , $2 , etc.) Before sleeping the amount of time given by the user, it prints the current date and time. After waking up from the sleep, it prints the date and time again. The for loop loops over the original names found in the home directory and tests whether they are still there. If a name is no longer found, it is printed (with the directory path removed from its pathname). That is all. If you want to only detect deletion of regular files (or symbolic links to regular files), then you will have to make sure that the list of pathnames that we get for things in the home directory only contains pathnames of those files: #!/bin/shsnooze=$1set --for pathname in "$HOME"/*; do if [ -f "$pathname" ]; then set -- "$@" "$pathname" fidonedate +'Start: %F %T'sleep "$snooze"date +'End: %F %T'for pathname do if [ ! -e "$pathname" ]; then printf 'Deleted from home: %s\n' "${pathname##*/}" fidone Here, instead of just saving all visible names from the home directory, we loop over the names and only save the ones that the -f test is true for (regular files and symbolic links to regular files). Directories will be skipped. The rest of the script is as before. Detecting deletion of hidden files is easiest done by switching over to bash (note that the above scripts are executed with /bin/sh ): #!/bin/bashshopt -s dotglobsnooze=$1set --for pathname in "$HOME"/*; do if [ -f "$pathname" ]; then set -- "$@" "$pathname" fidonedate +'Start: %F %T'sleep "$snooze"date +'End: %F %T'for pathname do if [ ! -e "$pathname" ]; then printf 'Deleted from home: %s\n' "${pathname##*/}" fidone Note that the only difference is the #! -line, indicating that this is now supposed to be executed by the /bin/bash interpreter, and the shopt -s dotglob command which sets the dotglob shell option in the bash shell. This shell option makes filename globbing patterns, such as * , match hidden names as well as names not starting with a dot. Additionally detecting deletions also in subdirectories sounds a bit tricky, but it's not: #!/bin/bashshopt -s dotglob globstarsnooze=$1set --for pathname in "$HOME"/**; do if [ -f "$pathname" ]; then set -- "$@" "$pathname" fidonedate +'Start: %F %T'sleep "$snooze"date +'End: %F %T'for pathname do if [ ! -e "$pathname" ]; then printf 'Deleted from home: %s\n' "${pathname#$HOME/}" fidone The only differences here is that we also enable the globstar option. This shell option gives us access to the ** glob pattern, which matches just like * , but also reaches across / in pathnames. The $HOME/** pattern will therefore match everything under your home directory. 
I've also slightly modified the printing of the deleted pathnames to include a bit more than just the name of the file (since it may be have been located in a subdirectory, and it would be nice to see what subdirectory that was). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358010/"
]
} |
525,235 | I try to install ssmtp in Debian 10.0, but get the error Package ssmtp is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source But I get no result with neither apt search ssmpt nor: $ apt-file search ssmtpmonitoring-plugins-basic: /usr/lib/nagios/plugins/check_ssmtpsosreport: /usr/share/sosreport/sos/plugins/ssmtp.py How do I install ssmtp in Debian buster? | apt install msmtp The ssmtp package is currently unmaintained: it has been orphaned since 2019-03-19, and msmtp can be used as an alternative. References: Debian wiki: msmtp; msmtp documentation; manpage msmtp | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/525235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
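A minimal ~/.msmtprc for sending through an external SMTP relay might look like the sketch below; the host, port, addresses and password file are placeholders, not values from the answer. On Debian, installing msmtp-mta additionally provides a sendmail-compatible interface for programs that expect /usr/sbin/sendmail.
defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile ~/.msmtp.log
account default
host smtp.example.com
port 587
from user@example.com
user user@example.com
passwordeval "cat ~/.msmtp-password"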
525,243 | To my understanding, for manipulating files there is only the sys_write syscall in Linux, which overwrites the file content (or extends it, if at the end). Why are there no syscalls for inserting or deleting content in files in Linux? As all current file systems do not require the file to be stored in a continuous memory block, an efficient implementation should be possible.(The files would get fragmented.) With file system features as "copy on write" or "transparent file compression", the current way of inserting content seems to be very inefficient. | On recent Linux systems that is actually possible, but with block (4096 most of the time), not byte granularity, and only on some filesystems (ext4 and xfs). Quoting from the fallocate(2) manpage: int fallocate(int fd, int mode, off_t offset, off_t len); [...] Collapsing file space Specifying the FALLOC_FL_COLLAPSE_RANGE flag (available since Linux 3.15) in mode removes a byte range from a file, without leaving a hole. The byte range to be collapsed starts at offset and continues for len bytes. At the completion of the operation, the contents of the file starting at the location offset+len will be appended at the location offset , and the file will be len bytes smaller. [...] Increasing file space Specifying the FALLOC_FL_INSERT_RANGE flag (available since Linux 4.1) in mode increases the file space by inserting a hole within the file size without overwriting any existing data. The hole will start at offset and continue for len bytes. When inserting the hole inside file, the contents of the file starting at offset will be shifted upward (i.e., to a higher file offset) by len bytes. Inserting a hole inside a file increases the file size by len bytes. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/525243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358092/"
]
} |
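The same operations are exposed by the fallocate(1) utility from util-linux, so they can be tried from a shell on a scratch file; offsets and lengths must be multiples of the filesystem block size (4096 below):
$ fallocate --offset 4096 --length 4096 --collapse-range somefile
$ fallocate --offset 4096 --length 4096 --insert-range somefile
The first call removes one 4 KiB block starting at offset 4096 and shrinks the file; the second shifts everything from offset 4096 upward, inserting a 4 KiB hole and growing the file.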
525,272 | Since one week ago, my laptop keeps disconnecting randomly from home wifi (we have Virgin Media at home) and then randomly connecting on. But the connecting won't last long. However, when I use the Eduroam at university, there is no such problem. Below is the system information: ' *-network description: Wireless interface product: QCA9377 802.11ac Wireless Network Adapter vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:01:00.0 logical name: wlp1s0 version: 31 serial: b0:52:16:c3:87:f9 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath10k_pci driverversion=4.15.0-51-generic firmware=WLAN.TF.2.1-00021-QCARMSWP-1 ip=192.168.0.87 latency=0 link=yes multicast=yes wireless=IEEE 802.11 resources: irq:129 memory:d1000000-d11fffff '*-network description: Ethernet interface product: RTL810xE PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 logical name: enp2s0 version: 07 serial: 58:8a:5a:18:ef:10 size: 10Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl8106e-1_0.0.1 06/29/12 latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:17 ioport:e000(size=256) memory:d1204000-d1204fff memory:d1200000-d1203fff'*-network description: Ethernet interface physical id: 2 logical name: docker0 serial: 02:42:93:89:a7:11 capabilities: ethernet physical configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=172.17.0.1 link=no multicast=yes' I tried some suggestions, like below, it didn't solve the problem permanently. ' $ sudo ifconfig wlp1s0 down $ sudo iwconfig wlp1s0 power off $ sudo ifconfig wlp1s0 up $ sudo service network-manager restart' Or disabled the ipv6, add 8.8.8.8 on ipv4, but none works. I also followed suggestions from https://askubuntu.com/questions/1030653/wifi-randomly-disconnected-on-ubuntu-18-04-lts , but still didn't solve this issue. I don't know why this problem only occurs on my home wifi. Could experts kindly give me some suggestions, please? Thank you. | Finally solved this annoying problem! Thanks for Freddy's reminder and the linked webpage, this issue turned out to be a router setting problem, rather than related with the Ubuntu system. Just copy the solution here: go to the router setting, change the Channel option from Auto to channel 9. I also disabled the Wifi frequency 5Ghz (also might work by change bandwidth from auto to 20MHz). After did this, my wifi connection works well at home. I think the reason why I didn't find the right solution at the beginning was I focused my key words at "Ubuntu disconnecting from wifi" which linked me to totally different methods. But I forgot the fact that my computer works well when connecting from Eduroam. It might be useful for some Ubuntu users (I am not an export) to aware the difference between those wifi randomly disconnecting issues. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358111/"
]
} |
525,399 | How can I print a list of files/directories one-per-line using echo ? I can replace spaces with newlines, but this doesn't work if the filenames contain spaces: $ echo small*jpgsmall1.jpg small2.jpg small photo 1.jpg small photo 2.jpg$ echo small*jpg | tr ' ' '\n'small1.jpgsmall2.jpgsmallphoto1.jpgsmallphoto2.jpg I know I can do this with ls -d1 , but is it also possible using echo ? | echo can't be used to output arbitrary data anyway, use printf instead which is the POSIX replacement for the broken echo utility to output text. printf '%s\n' small*jpg You can also do: printf '%s\0' small*jpg to output the list in NUL delimited records (so it can be post-processed; for instance using GNU xargs -r0 ; remember that the newline character is as valid as space or any character in a filename). Before POSIX came up with printf , ksh already had a print utility to replace echo . zsh copied it and added a -l option to print the arguments one per line: print -rl -- small*jpg ksh93 added a -f option to print for printf like printing. Copied by zsh as well, but not other ksh implementations: print -f '%s\n' -- small*jpg Note that all of those still print an empty line if not given any argument. A better println can be written as a function as: println() { [ "$#" -eq 0 ] || printf '%s\n' "$@"} In zsh : println() print -rC1 -- "$@" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/525399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
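For the files from the question, the printf form keeps names containing spaces on their own lines:
$ printf '%s\n' small*jpg
small1.jpg
small2.jpg
small photo 1.jpg
small photo 2.jpg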
525,601 | CentOS Linux release 7.6.1810 (Core)Kernel 5.1.11-1.el7.elrepo.x86_64 I put a cert file in /etc/pki/ca-trust/source/anchors . The file looks like this: -----BEGIN CERTIFICATE-----MIIDojCCAoqgAwIBAgIQeqkpty5ghoxP8YfCRe+7qjANBgkqhkiG9w0BAQUFADBPsome stringsFnpKVwAq6UcYOu4AoXweaqOOMsLNSw==-----END CERTIFICATE----- And after update-ca-trust extract I expect to see my cert in the bundle-file /etc/pki/tls/certs/ca-bundle.crt but there was nothing new in it. And ls -al shows me the latest edit time, so it was changed 2 months ago, not now. Initially I tried this with a .crt file, but renaming .crt to .pem didn't solve my problem. I also tried update-ca-trust enable and update-ca-trust force-enable before extract, but it didn't help. /var/log/messages says nothing about that. What shall I do to fix it? | TL;DR update-ca-trust won't extract your certificate file to the ca-bundle.crt unless this succeeds: openssl x509 -noout -text -in <cert_file> | grep --after-context=2 "X509v3 Basic Constraints" | grep "CA:TRUE" I spent a few hours on this issue. Its root was in an X.509 extension called Basic Constraints which is used to mark whether a certificate belongs to a CA or not. My humble findings: The update-ca-trust tool is in fact a shell script, so it's easy to peek inside. The script calls the p11-kit utility multiple times, each time using a different filter and creating a different bundle file. The file ca-bundle.crt is in fact a link to the tls-ca-bundle.pem file, which is generated by p11-kit using the ca-anchors filter, so it ignores all certs besides "CA ones". Whether a certificate is a CA or not is decided by the Basic Constraints X.509 extension. This way it's possible to mark a certificate as a part of a CA. It's possible to list all X.509 extensions using openssl x509 -noout -text -in <cert_file> So any certificate file not labelled as a part of a CA will be filtered out by p11-kit and not exported to the desired ca-bundle.crt file. Feel free to correct this in comments. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/525601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/177068/"
]
} |
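A quick way to confirm whether an anchor was actually picked up is to count certificates in the extracted bundle before and after running update-ca-trust extract; if the count does not change, the anchor was filtered out (most likely for the CA:TRUE reason described above):
$ grep -c 'BEGIN CERTIFICATE' /etc/pki/tls/certs/ca-bundle.crt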
525,653 | When trying to call /dev/tcp/www.google.com/80 , by typing /dev/tcp/www.google.com/80 Bash says no such file or directory . When looking at other people's code online, they use syntax such as 3<>/dev/tcp/www.google.com/80 I noticed that this works as well: </dev/tcp/www.google.com/80 Why are these symbols required to call certain things in bash? | Because that's a feature of the shell (of ksh, copied by bash), and the shell only. /dev/tcp/... are not real files, the shell intercepts the attempts to redirect to a /dev/tcp/... file and then does a socket(...);connect(...) (makes a TCP connection) instead of a open("/dev/tcp/..."...) (opening that file) in that case. Note that it has to be spelled like that. cat < /dev/./tcp/... or ///dev/tcp/... won't work, and will attempt to open those files instead (which on most systems don't exist and you'll get an error). The direction of the redirection also doesn't matter. Whether you use 3< /dev/tcp/... or 3> /dev/tcp/... or 3<> /dev/tcp/... or even 3>> /dev/tcp/... won't make any difference, you'll be able to both read and write from/to that file descriptor to receive/send data over that TCP socket. When you do cat /dev/tcp/... , that doesn't work because cat doesn't implement that same special handling, it does a open("/dev/tcp/...") like for every file (except - ), only the shell (ksh, bash only) does, and only for the target of redirections. That cat - is another example of a file path handled specially, this time, by cat , not the shell. Instead of doing a open("-") and reading the input from the resulting file descriptor, cat reads directly from the file descriptor 0 (stdin). cat and many text utilities do that, the shell doesn't for its redirections. To read the content of the - file, you need cat ./- , or cat < - (or cat - < - ). On systems that don't a have /dev/stdin , bash will however do something similar for redirections from that (virtual) file. GNU awk does the same for /dev/stdin , /dev/stdout , /dev/stderr even on systems that do have such files which can cause some surprises on systems like Linux where those files behave differently. zsh also has TCP (and Unix domain stream) socket support, but that's done with a ztcp (and zsocket ) builtins, so it's less limited than the ksh/bash approach. In particular, it can also act as a server which ksh/bash can't do. It's still much more limited than what you can do in a real programming language though. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/525653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357569/"
]
} |
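As a usage example of the redirection form, a minimal HTTP/1.0 request over /dev/tcp in bash or ksh might look like this; a sketch only, with no error handling:
exec 3<>/dev/tcp/www.google.com/80
printf 'GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n' >&3
cat <&3
exec 3>&-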
525,654 | The ASCII character range is from 0 to 127, and within that range, awk's printf with the %c format specifier outputs one byte of data: $ awk 'BEGIN{printf "%c", 97}'a$ awk 'BEGIN{printf "%c", 127}' | xxd00000000: 7f$ awk 'BEGIN{printf "%c", 127}' | xxd -b00000000: 01111111 But for values greater than 127, it will print out multiple bytes: $ awk 'BEGIN{printf "%c", 128}' | xxd00000000: c280$ awk 'BEGIN{printf "%c", 128}' | xxd -b00000000: 11000010 10000000 What is the significance of 0xc280, and why does awk output that character instead of 0x80? | This is UTF-8 encoding. 11000010 starts a two-byte sequence (the first two bits set followed by a clear bit), and the significant bits are 00010000000 (the last five bits of the first byte, and the last six bits of the second byte), which is 128. AWK is outputting this because your locale is set to use UTF-8; you can switch to a non-UTF-8 locale to see the difference: $ LC_ALL=C awk 'BEGIN{printf "%c", 128}' | xxd -b00000000: 10000000 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525654",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335938/"
]
} |
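The same two-byte pattern appears for any code point between U+0080 and U+07FF; for instance 233 (é, U+00E9) encodes as 0xC3 0xA9 in a UTF-8 locale:
$ awk 'BEGIN{printf "%c", 233}' | xxd
00000000: c3a9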
525,791 | I have a black box UNIX program used in a Bash shell that reads columns of data from stdin, processes them (applying a smoothing effect) then outputs to stdout. I use it by UNIX pipes, like generate | smooth | plot For more smoothing, I can repeat the smooth, so it'd be invoked from the Bash command line as generate | smooth | smooth | plot or even generate | smooth | smooth | smooth | smooth | smooth | smooth | smooth | smooth | smooth | smooth | plot This is getting unweildy. I would like to make a Bash wrapper to be able to pipe into smooth and feed its output right back into a new instance of smooth an arbitrary number of times, something like generate | newsmooth 5 | plot instead of generate | smooth | smooth | smooth | smooth | smooth | plot My first attempt was a Bash script that generated temp files in the current directory and deleted them, but that turned ugly when I wasn't in a directory with write access, and also left garbage files when interrupted. There are no arguments to the smooth program. Is there a more elegant way to "wrap" such a program to parameterize the number of calls? | You could wrap it in a recursive function: smooth() { if [[ $1 -gt 1 ]]; then # add another call to function command smooth | smooth $(($1 - 1)) else command smooth # no further fi} You would use this as generate | smooth 5 | plot which would be equivalent to generate | smooth | smooth | smooth | smooth | smooth | plot | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/525791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317558/"
]
} |
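An iterative alternative, in case you would rather not rely on recursion, builds the pipeline as a string and evaluates it; a sketch that behaves like the recursive version for counts of 1 or more, used as generate | smoothn 5 | plot:
smoothn() {
  local n=$1 cmd="command smooth"
  for ((i = 1; i < n; i++)); do
    cmd+=" | command smooth"
  done
  eval "$cmd"
}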
525,843 | I am working with php(7.3) and mysql in linux machine. But I am getting PHP Fatal error: Uncaught Error: Class 'mysqli' not found in /var/www/html/ error in apache2 log file ( /var/log/error.log ). I checked few answers Fatal error: Class 'MySQLi' not found . But did't work. form-handler.php <?php $conn = new mysqli("localhost", "root", "root", "Users");// Check connectionif ($conn->connect_errno) { die("Connection failed: " . $conn->connect_error);}else echo "Connected successfully";$username = $_POST["username"]; $password = $_POST["password"]; $query = "SELECT * FROM Users WHERE username = " . $username . " AND password =" . $password; $re = mysqli_query($query); if (mysqli_num_rows($re) == 0) { echo 'Not Logged In'; } else { echo 'Logged In'; }?> Edit 1 Already done executing the following commands apt install php-mysql apt-get install php-mysqlnd Edit 2 I created new php file with the following code <?phpphpinfo(INFO_MODULES);?> and in additional modules it doesn't show mysqli but when I execute apt install php-mysql it shows php-mysql is already the newest version (2:7.3+69).. | You haven't told us which OS you are using. But since you've already mentioned apt-get then I'll assume that you're using something similar to Ubuntu or Debian. These have a relatively similar configuration setup. Check 1 The package php-mysql should be dependent on php7.2-mysql or similar so firstly check that this is installed eg: dpkg --list | grep 'php.*mysql'ii php-mysql 2:7.2+69ubuntu1 all MySQL module for PHP [default]ii php7.2-mysql 7.2.19-0ubuntu0.19.04.1 amd64 MySQL module for PHP Do take a note of which version you have installed, if not 7.2 like my setup you will need to change later checks to match your version. Check 2 I've just looked at my Ubuntu 19.04 setup and mysqli has it's own shared library currently installed to belonging to the php7.2-mysql package and installed to /usr/lib/php/20170718/mysqli.so . find /usr/lib/php -name mysqli.so Check 3 For PHP to use the module, it needs to be instructed to load it. Depending on the way you have setup PHP to run you may need to look in a different place. But for me, running PHP 7.2 under phpfpm, the instruction to load mysqli is located in: /etc/php/7.2/fpm/conf.d/20-mysqli.ini . Check to see if you have the module loaded with: grep mysqli.so /etc/php/7.2/*/conf.d/*/etc/php/7.2/cli/conf.d/20-mysqli.ini:extension=mysqli.so/etc/php/7.2/fpm/conf.d/20-mysqli.ini:extension=mysqli.so Make sure that whichever way you are using PHP` is configured to load the module. If it is not configured to do so then you should be able to add back the configuration: cd /etc/php/7.2/fpm/conf.dln -s ../../mods-available/mysqli.ini 20-mysqli.ini Check 3 is the most likly to be the cause given the evidence you've shown, but it's also the mos easy to make a mistake on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357499/"
]
} |
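On Debian and Ubuntu the conf.d symlinks are normally managed by phpenmod rather than created by hand, so an equivalent of the last step is usually the following (adjust the PHP version, and restart apache2 instead if you use mod_php rather than FPM):
$ sudo phpenmod mysqli
$ sudo systemctl restart php7.2-fpm
$ php -m | grep mysqli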