source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
531,334 | I have line number 55 in a text file which I want to move to just before line number 23 in the same text file. I know that you can move lines up by a certain number of lines, but is it possible to append or prepend a line to another line instead of the verbose way of moving lines up or down a text file? I am open to answers using vim, sed or any other command-line tool, but the more concise the better. Update: the sed command i inserts text before a line, which is similar to what I am looking for, except that I want to insert an existing line before another line. | A scriptable editor makes this straightforward! printf '%s\n' '55m22' 'wq' | ed -s input This sends two commands to ed (editing the file named input ): 55m22 -- move line 55 after line 22; wq -- save the file back to disk and quit. (An ex equivalent is sketched after this row.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/289865/"
]
} |
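A hedged sketch of the same move using ex (vim's line-editor mode) in case ed is not available; it assumes the file is named input, as in the answer above, and that the system's ex accepts -s (batch/silent) and -c (run a command), as POSIX ex and vim's ex do:

```sh
# move line 55 to just after line 22 (i.e. before the old line 23), then write and quit
ex -s -c '55m22' -c 'wq' input
```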
531,520 | I am trying to exclude certain entries of URLs (using IPs instead of domain names) in a list with a grep command, specifically to remove all entries starting with https:// and ending with :80 . However, entries with https:// and :8080 are also being included. My grep command is: egrep -v "https://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:80" How can I force grep to only exclude :80 and not :8080 ? An example of the input file contents is:
http://10.10.46.1:80
http://10.10.48.67:8080
http://10.10.48.67:443
https://10.10.46.1:80
https://10.10.48.67:8080
https://10.10.48.67:443 | Just be a tad more specific in your grep command. If the IPs are the only thing on the line, so that :80 will be the last three characters, use: grep -Ev "https://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:80$" Or, if there can be whitespace after the 80 , use: grep -Ev "https://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:80\s*$" If the :80 isn't the last thing on the line, just grep for cases where there is a space after it: grep -Ev "https://[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}:80 " If the last character isn't a space, adapt to suit your input. (A variant that handles :80 followed by arbitrary text is sketched after this row.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357569/"
]
} |
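If an excluded :80 entry can be followed by more text (a path or another field) rather than only whitespace, a hedged variant is to require that the port is not followed by another digit; urls.txt is a placeholder file name and GNU grep's -E behaviour is assumed:

```sh
# drop https://<ip>:80 entries whether :80 ends the line or is followed by a non-digit,
# while still keeping :8080 entries
grep -Ev 'https://([0-9]{1,3}\.){3}[0-9]{1,3}:80([^0-9]|$)' urls.txt
```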
531,554 | I always read that the Linux kernel isn't pageable. If I'm not mistaken, Windows instead divides system virtual memory into a paged part (paged pool) and a non-paged part (non-paged pool). The non-paged part is mapped directly to physical memory and stays there all the time, because it takes care of the most important tasks the kernel must accomplish, while less important portions may not. The Linux kernel, instead, is divided into loadable modules, but I have not managed to gather any information on how these modules are implemented. I don't understand whether they're paged and can thus be temporarily transferred to disk. What I usually read is that we can "free" memory by unloading them; what is meant by this is still obscure to me. When I wrote "paged" or "pageable" in this post, I implicitly meant that these pages can be swapped out to disk. I mention this because the Linux kernel is usually considered paged, but it can't be swapped out. | No part of the Linux kernel can be paged out, even parts that come from modules. A kernel module can be loaded and (if the module supports it) can be unloaded. This always happens in response to an explicit request from a userland process with the init_module and delete_module system calls (normally, via the insmod or modprobe utilities for loading, and via rmmod for unloading). Once a module is loaded, it's part of the kernel, like any other part of the kernel. In particular, there's no way to isolate the memory used by a specific module. The kernel keeps track of which part of the memory contains a specific module's code, but not where a module may have stored data. A module can modify any of the kernel's data structures, after all. A module can potentially add some code to any kernel subsystem. Most modules are hardware drivers, but some aren't (e.g. they can provide security functionality, filesystems, networking functionality, etc.). If data or code used by a module could be swapped out, then the rest of the kernel would have to load it when required, which would complicate the design of the system a lot. The kernel would also need to ensure that no part of the memory that's swapped out is ever necessary to swap it back in, which is difficult. What if the swap is in a swap file on a network filesystem, and the module provides firewall functionality that is involved in communicating with the server that stores the file? It's possible to completely unload a module because it's the module's job to provide code that ensures that the module is not needed for anything. The kernel runs the module's exit function, and only unloads the module if that function reports that the module can be safely unloaded. The exit function must free any remaining data memory that is “owned” by the module (i.e. data that the module needs, but no other part of the kernel needs), and must verify that no code in the module is registered to be called when something happens. There's no way to save a module's data to swap: a module can only be removed from RAM if it has no data left. (A short load/unload example follows this row.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/215663/"
]
} |
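A minimal sketch of the load/unload cycle described in the answer above, using the loop module purely as an example (any module that is built as a module and currently unused would do); unloading is the only way to hand the module's unswappable kernel memory back:

```sh
sudo modprobe loop        # load the module (and any modules it depends on)
lsmod | grep '^loop'      # the Size column is kernel memory that can never be swapped out
sudo modprobe -r loop     # unload it, freeing that memory (fails if the module is in use)
```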
531,566 | I have a fairly simple little script. Basically, it performs ping over a given domain. It is like this: ping -c2 $1 | head -n4 And it prints out for example: PING google.com (172.217.17.206): 56 data bytes64 bytes from 172.217.17.206: icmp_seq=0 ttl=55 time=2.474 ms64 bytes from 172.217.17.206: icmp_seq=1 ttl=55 time=2.668 ms which is okay for me. But for example like you know sometimes the ping command does not return any response from the ICMP request. Like for example: ping intel.comPING intel.com (13.91.95.74): 56 data bytesRequest timeout for icmp_seq 0Request timeout for icmp_seq 1--- intel.com ping statistics --- And when this happens the script stuck for several seconds and then it resumes on its way.I'm trying to think of a way when this happens to just skip it and just proceed down. I actually not sure if it is possible at all.I was thinking at first to pipe it to grep for 'Request timeout' or to put the result in a variable and then cat | grep the variable. Can someone think of a way for this and is it possible at all to just skip the execution when it hits Request timeout? | No part of the Linux kernel can be paged out, even parts that come from modules. A kernel module can be loaded and (if the module supports it) can be unloaded. This always happens from an explicit request from a userland process with the init_module and delete_module system calls (normally, via the insmod or modprobe utilities for loading, and via rmmod for unloading). Once a module is loaded, it's part of the kernel, like any other part of the kernel. In particular, there's no way to isolate the memory used by a specific module. The kernel keeps tracks of which part of the memory contains a specific module's code, but not where a module may have stored data. A module can modify any of the kernel's data structures, after all. A module can potentially add some code to any kernel subsystem. Most modules are hardware drivers, but some aren't (e.g. they can provide security functionality, filesystems, networking functionality, etc.). If data or code used by a module could be swapped out, then the rest of the kernel would have to load it when required, which would complicate the design of the system a lot. The kernel would also need to ensure that no part of the memory that's swapped out is ever necessary to swap it back in, which is difficult. What if the swap is in a swap file on a network filesystem, and the module provides firewall functionality that is involved in communicating with the server that stores the file? It's possible to completely unload a module because it's the module's job to provide code that ensures that the module is not needed for anything. The kernel runs the module's exit function, and only unloads the module if that function reports that the module can be safely unloaded. The exit function must free any remaining data memory that is “owned” by the module (i.e. data that the module needs, but no other part of the kernel needs), and must verify that no code in the module is registered to be called when something happens. There's no way to save a module's data to swap: a module can only be removed from RAM if it has no data left. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/355292/"
]
} |
531,576 | I am using Ubuntu for my terminal. I am unable to access my Desktop and when I type "ls" nothing shows up. How do I log in via a desktop environment in order to access my desktop files?
huque@RNB:~$ pwd
/home/huque
huque@RNB:~$ cd Desktop
-bash: cd: Desktop: No such file or directory
huque@RNB:~$ ls
huque@RNB:~$
---EDIT 1---- In response to @WinEunuuchs2Unix's comment, this is what happens when I type apt list ubuntu-desktop:
huque@RNB:~$ apt list ubuntu-desktop
Listing... Done
ubuntu-desktop/bionic-updates 1.417.1 amd64
N: There is 1 additional version. Please use the '-a' switch to see it
huque@RNB:~$ -a
-a: command not found
huque@RNB:~$
---EDIT 2---- In response to @l0b0's comment, here is a screenshot of my terminal and the desktop image behind it. | What you're actually looking for is /mnt/c/Users/huque/Desktop . You can ls /mnt/c/Users/huque/Desktop to see what's in it, or cd /mnt/c/Users/huque/Desktop to actually be there. /mnt/c maps in your C: drive from Windows and you can access all your files through there. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
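As a convenience on top of the answer above, a symlink can make the Windows desktop reachable from the WSL home directory; the path assumes the same Windows user name (huque) as in the question:

```sh
ln -s /mnt/c/Users/huque/Desktop ~/Desktop
cd ~/Desktop && ls        # now shows the files on the Windows desktop
```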
531,581 | Got some valuable help here earlier with grep so hopefully I can get this sorted out too. This is from Rclone log Transferred: 577.080M / 577.080 MBytes, 100%, 12.660 MBytes/s, ETA 0sErrors: 0Checks: 2 / 2, 100%Transferred: 2 / 2, 100%Elapsed time: 45.5s What I am trying to do is create a email notification with custom text. Something like "Transferred 577 MBytes, 2 files with 0 errors in 45.5 seconds @ 12,660 MBytes/s" So for this to work I need to print the values. I tried the same way I did before without any luck. Transferred it two times in the log, how to split them to get TRF= 577.080Mbytes and TRS= 12.660 MBytes/s TRF=$(grep -o 'Transferred:.*' $logfile| cut -d\ -f4)ERR=$(grep -o 'Errors:.*' $logfile | cut -d\ -f4)TIM=$(grep -o 'Elapsed time:.*' $logfile | cut -d\ -f3-)TRS=$(grep -o 'Transferred:.*' $logfile | cut -d\ -f4) | What you're actually looking for is /mnt/c/Users/huque/Desktop . You can ls /mnt/c/Users/huque/Desktop to see what's in it, or cd /mnt/c/Users/huque/Desktop to actually be there. /mnt/c maps in your C: drive from Windows and you can access all your files through there. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363483/"
]
} |
531,591 | I have a file with the following 3 lines: file.txt
aaa|
bbb|
ccc|
and another file with one line: regex.txt
^aaa\|
$ grep ^aaa\| file.txt yields:
aaa|
$ grep -f regex.txt file.txt yields:
aaa|
bbb|
ccc|
Why are the results different for grep -f and grep ?
$ grep -V
grep (GNU grep) 3.1
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic | You're using grep with basic regular expressions. On the command line, grep ^aaa\| file.txt is the same as typing grep "^aaa|" file.txt , because the shell removes the backslash, and an unescaped | is an ordinary character in a basic regular expression. When the pattern is read from the file, grep -f regex.txt file.txt is the same as grep "^aaa\|" file.txt : the escaped \| is GNU grep's alternation operator, so the pattern means "match ^aaa or the empty string", and the empty string matches every line. (A corrected pattern file is sketched after this row.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356961/"
]
} |
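A minimal sketch of the fix implied by the answer above: leave the | unescaped inside the pattern file, so that grep's basic-regular-expression engine treats it as a literal character there as well:

```sh
printf '%s\n' '^aaa|' > regex.txt   # note: no backslash before |
grep -f regex.txt file.txt          # now prints only: aaa|
```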
531,625 | E.g. command line: test.sh arg1 | grep "xyz" Is it possible to get the complete command line, including the following grep, in the bash script test.sh? | No. Bash (or your shell) will fork two distinct commands:
test.sh arg1
grep "xyz"
so test.sh cannot know about the following grep. You might, however, detect that you are "inside" a pipe by testing /proc/self/fd/1. test.sh:
#!/bin/bash
file /proc/self/fd/1
which runs as:
> ./test.sh
/proc/self/fd/1: symbolic link to /dev/pts/0
> ./test.sh | cat
/proc/self/fd/1: broken symbolic link to pipe:[25544239]
(Edit) See muru's comment about knowing if you are in a pipe: you don't need to know whether you're in a pipe for that, just check whether the output is a TTY with [ -t 1 ] ; see https://unix.stackexchange.com/a/401938/70524 (a self-contained example follows this row). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73019/"
]
} |
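A self-contained sketch of the [ -t 1 ] test mentioned at the end of the answer above; it reports whether standard output is a terminal or has been redirected to a pipe or file:

```sh
#!/bin/bash
if [ -t 1 ]; then
    echo "stdout is a terminal"
else
    echo "stdout is a pipe or a regular file"
fi
```

Run directly, the script prints the first message; run as ./test.sh | cat, it prints the second.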
531,795 | How may I shorten this shell script? CODE="A"if test "$CODE" = "A"then PN="com.tencent.ig"elif test "$CODE" = "a" then PN="com.tencent.ig"elif test "$CODE" = "B" then PN="com.vng.pubgmobile"elif test "$CODE" = "b" then PN="com.vng.pubgmobile"elif test "$CODE" = "C" then PN="com.pubg.krmobile"elif test "$CODE" = "c" then PN="com.pubg.krmobile"elif test "$CODE" = "D" then PN="com.rekoo.pubgm"elif test "$CODE" = "d" then PN="com.rekoo.pubgm"else echo -e "\a\t ERROR!" echo -e "\a\t CODE KOSONG" echo -e "\a\t MELAKUKAN EXIT OTOMATIS" exitfi | Use a case statement (portable, works in any sh -like shell): case "$CODE" in [aA] ) PN="com.tencent.ig" ;; [bB] ) PN="com.vng.pubgmobile" ;; [cC] ) PN="com.pubg.krmobile" ;; [dD] ) PN="com.rekoo.pubgm" ;; * ) printf '\a\t%s\n' 'ERROR!' 'CODE KOSONG' 'MELAKUKAN EXIT OTOMATIS' >&2 exit 1 ;;esac I'd also recommend changing your variable names from all capital letters (like CODE ) to something lower- or mixed-case (like code or Code ). There are many all-caps names that have special meanings, and re-using one of them by accident can cause trouble. Other notes: The standard convention is to send error messages to "standard error" rather than "standard output"; the >&2 redirect does this. Also, if a script (or program) fails, it's best to exit with a nonzero status ( exit 1 ), so any calling context can tell what went wrong. It's also possible to use different statuses to indicate different problems (see the "EXIT CODES" section of the curl man page for a good example). (Credit to Stéphane Chazelas and Monty Harder for suggestions here.) I recommend printf instead of echo -e (and echo -n ), because it's more portable between OSes, versions, settings, etc. I once had a bunch of my scripts break because an OS update included a version of bash compiled with different options, which changed how echo behaved. The double-quotes around $CODE aren't really needed here. The string in a case is one of the few contexts where it's safe to leave them off. However, I prefer to double-quote variable references unless there's a specific reason not to, because it's hard to keep track of where it's safe and where it isn't, so it's safer to just habitually double-quote them. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/531795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363485/"
]
} |
531,812 | I wanted to know if there is a way to differentiate physical and virtual network devices. ip a doesn't have an option. So I am trying /sys/class/net/<iface> .There are 2 attributes addr_assign_type and type, but type only tells Ethernet or loopback there is not way to tell if its virtual. I wanted to know does addr_assign_type tell us the different? As per my observation /sys/class/net/<iface>/{eth|loopback} gives 0 and /sys/class/net/<iface>/{virtualdevice} gives 1 or 3 . Is there something I can infer from this? | Use a case statement (portable, works in any sh -like shell): case "$CODE" in [aA] ) PN="com.tencent.ig" ;; [bB] ) PN="com.vng.pubgmobile" ;; [cC] ) PN="com.pubg.krmobile" ;; [dD] ) PN="com.rekoo.pubgm" ;; * ) printf '\a\t%s\n' 'ERROR!' 'CODE KOSONG' 'MELAKUKAN EXIT OTOMATIS' >&2 exit 1 ;;esac I'd also recommend changing your variable names from all capital letters (like CODE ) to something lower- or mixed-case (like code or Code ). There are many all-caps names that have special meanings, and re-using one of them by accident can cause trouble. Other notes: The standard convention is to send error messages to "standard error" rather than "standard output"; the >&2 redirect does this. Also, if a script (or program) fails, it's best to exit with a nonzero status ( exit 1 ), so any calling context can tell what went wrong. It's also possible to use different statuses to indicate different problems (see the "EXIT CODES" section of the curl man page for a good example). (Credit to Stéphane Chazelas and Monty Harder for suggestions here.) I recommend printf instead of echo -e (and echo -n ), because it's more portable between OSes, versions, settings, etc. I once had a bunch of my scripts break because an OS update included a version of bash compiled with different options, which changed how echo behaved. The double-quotes around $CODE aren't really needed here. The string in a case is one of the few contexts where it's safe to leave them off. However, I prefer to double-quote variable references unless there's a specific reason not to, because it's hard to keep track of where it's safe and where it isn't, so it's safer to just habitually double-quote them. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/531812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363767/"
]
} |
531,858 | I recorded my current session using the script command, and all information was saved in a typescript file, but when I opened it using Vim there were a lot of ^M s due to carriage returns. I tried to convert this file to the Unix format using the dos2unix command, but I was unable to do so. It was giving this error:
dos2unix: Binary symbol 0x1B found at line 2
dos2unix: Skipping binary file typescript.
I was just curious why it is happening. Why does script produce output in CR/LF form and not simply in LF form? | The typescript file saves everything that is sent to your terminal, which may include escape sequences for positioning, colors, brightness, etc. ( 0x1B is the ESC character.) The terminal output contains CR and LF even if the usual line ending in text files is different. The character 0x1B makes dos2unix assume your input might be a binary file. Because modifying a binary file might not be useful, dos2unix refuses to do this by default. Apart from this there is no problem with the escape character. You can try dos2unix -f to force conversion of the seemingly binary file. This way you tell it that you know that modifying the line endings in this file is safe. Or use vim to remove the CR characters: :%s/ CTRL + V CTRL + M ENTER In case there might be more than one CR per line: :%s/ CTRL + V CTRL + M //g ENTER (A sed alternative is sketched after this row.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312238/"
]
} |
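A short sketch of the forced conversion from the answer above, plus a sed equivalent that strips the carriage returns without dos2unix (GNU sed is assumed for the \r escape and in-place editing):

```sh
dos2unix -f typescript     # force conversion despite the 0x1B escape bytes
# or delete a trailing CR from every line in place:
sed -i 's/\r$//' typescript
```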
531,947 | I am new to bash scripting. Could someone help me with the following? I have a log file with output as shown below. I'm trying to grep for lines with logDurationMillis >= 950 ms.
logAlias:Overall,logDurationMillis:382,logTimeStart:2019-07-24_15:30:06.075,logTimeStop:2019-07-24_15:30:06.107
logAlias:Overall,logDurationMillis:388,logTimeStart:2019-07-24_15:30:06.406,logTimeStop:2019-07-24_15:30:06.444
logAlias:Overall,logDurationMillis:545,logTimeStart:2019-07-24_15:30:06.583,logTimeStop:2019-07-24_15:30:06.638
logAlias:Overall,logDurationMillis:961,logTimeStart:2019-07-24_15:30:06.599,logTimeStop:2019-07-24_15:30:06.660
logAlias:Overall,logDurationMillis:640,logTimeStart:2019-07-24_15:30:07.197,logTimeStop:2019-07-24_15:30:07.237
logAlias:Overall,logDurationMillis:934,logTimeStart:2019-07-24_15:30:07.474,logTimeStop:2019-07-24_15:30:07.508
logAlias:Overall,logDurationMillis:336,logTimeStart:2019-07-24_15:30:07.546,logTimeStop:2019-07-24_15:30:07.582
The values are always in the second comma-delimited column. | With awk, if you know "logDurationMillis" is the second item: awk -F'[:,]' -v limit=950 '$4 >= limit' file otherwise: awk -F'[:,]' -v limit=950 '{ for (i=1; i<NF; i+=2) if ($i == "logDurationMillis" && $(i+1) >= limit) print}' file (A grep-only alternative is sketched after this row.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246981/"
]
} |
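For comparison with the awk answer above, a grep-only variant is possible by spelling out the numeric range as a regular expression (950 to 999, or any value with four or more digits); it assumes the field always appears literally as logDurationMillis:<number>, and logfile is a placeholder name:

```sh
grep -E 'logDurationMillis:(9[5-9][0-9]|[0-9]{4,}),' logfile
```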
532,134 | Why isn't there a ; character after do in shell loops when written on a single line? Here's what I mean. When written on multiple lines, a for loop looks like:
$ for i in $(jot 2)
> do
> echo $i
> done
And on a single line: $ for i in $(jot 2); do echo $i; done All the collapsed lines get a ; after them except for the do line, and if you include the ; , it is an error. Someone probably a heck of a lot smarter than me decided that this was the right thing to do for a reason, but I can't figure out what the reason is. It seems inconsistent to me. The same goes for while loops:
$ while something
> do
> anotherthing
> done
$ while something; do anotherthing; done | That is the syntax of the command. See Compound Commands: for name [ [in [words …] ] ; ] do commands; done Note specifically: do commands Most people put the do and the commands on separate lines for easier readability, but it is not necessary; you could write:
for i in thing
do something
done
I know this question is specifically about shell and I have linked to the bash manual. It is not written that way in the shell manual, but it is written that way in an article written by Stephen Bourne for Byte magazine. Stephen says: A command list is a sequence of one or more simple commands separated or terminated by a newline or ; (semicolon). Furthermore, reserved words like do and done are normally preceded by a newline or ; ... In turn each time the command list following do is executed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/532134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/235518/"
]
} |
532,141 | I have a file which has two file names per row, like this: file1.fastq.gz file2.fastq.gz...file9fastq.ga file10fastq.gz How can I pass the two names as arguments for a script? | Using a while read loop: while read -r file1 file2 trash; do something with "$file1" and "$file2"done < /path/to/input_file This will read your input file line by line, setting file1 and file2 with the first and second columns respectively. trash is probably unnecessary but I like to include it to handle things you may encounter such as: file1.fastq.gz file2.fastq.gz foo If your file contained a line like the above and you did not include the trash variable (or one similar), your file2 variable would be set to: file2.fastq.gz foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/532141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/360874/"
]
} |
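To literally pass the two names as arguments to a script, as the question asks, the loop from the answer above can invoke the script once per input line; myscript.sh and pairs.txt are placeholder names:

```sh
while read -r file1 file2 _; do
    ./myscript.sh "$file1" "$file2"
done < pairs.txt
```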
532,206 | I want to copy a file from A to B, which may be on different filesystems. There are some additional requirements: The copy is all or nothing, no partial or corrupt file B left in place on crash; Do not overwrite an existing file B; Do not compete with a concurrent execution of the same command, at most one can succeed. I think this gets close: cp A B.part && \ln B B.part && \rm B.part But 3. is violated by the cp not failing if B.part exists (even with -n flag). Subsequently 1. could fail if the other process 'wins' the cp and the file linked into place is incomplete. B.part could also be an unrelated file, but I'm happy to fail without trying other hidden names in that case. I think bash noclobber helps, does this work fully? Is there a way to get without the bash version requirement? #!/usr/bin/env bashset -o noclobbercat A > B.part && \ln B.part B && \rm B.part Followup, I know some file systems will fail at this anyway (NFS). Is there a way to detect such filesystems? Some other related but not quite the same questions: Approximating atomic move across file systems? Is mv atomic on my fs? is there a way to atomically move file and directory from tempfs to ext4 partition on eMMC https://rcrowley.org/2010/01/06/things-unix-can-do-atomically.html | rsync does this job. A temporary file is O_EXCL created by default (only disabled if you use --inplace ) and then renamed over the target file. Use --ignore-existing to not overwrite B if it exists. In practice, I never experienced any problems with this on ext4, zfs or even NFS mounts. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/532206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285525/"
]
} |
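A minimal sketch of the rsync invocation described in the answer above; rsync writes to an exclusively created temporary file, renames it into place, and leaves an existing B untouched:

```sh
rsync --ignore-existing A B
```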
532,214 | I want to make an alias for the busybox command: BB=$(($(busybox))) | alias BB='busybox' The syntax for creating an alias (in .bashrc or .bash_profile ) is: alias <aliased command>='<command with options if any>' For example, to create an alias for long listing of files in a directory, put alias ll='ls -l' in your .bashrc or .bash_profile . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/532214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363485/"
]
} |
532,381 | What exactly does this do? I don't understand how you could access base memory with this...seems kinda weird. Is it safe? dd if=/dev/urandom of=/dev/mem | Don't try this at home! It can crash your system, and if you're really unlucky it could damage a peripheral or make your computer unbootable. Actually, on most platforms, it just fails with an error, but that depends on the hardware architecture. There is most definitely no guarantee that this is harmless unless you run the command as an unprivileged user. With an unprivileged user, the command is perfectly harmless because you can't open /dev/mem . When you run a command as root, you're supposed to know what you're doing. The kernel will sometimes prevent you from doing something dangerous, but not always. /dev/mem is one of those potentially dangerous things where you're really supposed to know what you're doing. I'm going to walk through how a write to /dev/mem works on Linux. The general principle would be the same on other Unices, but things like kernel options are completely different. What happens when a process reads or writes to a device file is up to the kernel. An access to a device file runs some code in the driver that handles this device file. For example, writing to /dev/mem invokes the function write_mem in drivers/char/mem.c . This function takes 4 arguments: a data structure that represents the open file, a pointer to the data to write, the number of bytes to write, and the current position in the file. Note that you only get that far if the caller had permission to open the file in the first place. Device files obey file permissions normally. The normal permissions of /dev/mem are crw-r----- owned by root:kmem , so if you try to open it for writing without being root, you'll just get “permission denied” (EACCESS). But if you're root (or if root has changed the permissions of this file), the opening goes through and then you can attempt a write. The code in the write_mem function makes some sanity checks, but these checks aren't enough to protect against everything bad. The first thing it does is convert the current file position *ppos to a physical address. If that fails (in practice, because you're on a platform with 32-bit physical addresses but 64-bit file offsets and the file offset is larger than 2^32), the write fails with EFBIG (file too large). The next check is whether the range of physical addresses to write is valid on this particular processor architecture, and there a failure results in EFAULT (bad address). Next, on Sparc and m68k, any part of the write in the very first physical page is silently skipped. We've now reached the main loop which iterates over the data in blocks that can fit within one MMU page. /dev/mem accesses physical memory, not virtual memory, but the processor instructions to load and store data in memory use virtual addresses, so the code needs to arrange to map the physical memory at some virtual address. On Linux, depending on the processor architecture and the kernel configuration, this mapping either exists permantently or has to be made on the fly; that's the job of xlate_dev_mem_ptr (and unxlate_dev_mem_ptr undoes whatever xlate_dev_mem_ptr does). Then the function copy_from_user reads from the buffer that was passed to the write system call and just writes to the virtual address where the physical memory is currently mapped. The code emits normal memory store instructions, and what this means is up to the hardware. 
Before I discuss that a write to a physical address does, I'll discuss a check that happens before this write. Inside the loop, the function page_is_allowed blocks accesses to certain addresses if the kernel configuration option CONFIG_STRICT_DEVMEM is enabled (which is the case by default): only addresses allowed by devmem_is_allowed can be reached through /dev/mem , for others the write fails with EPERM (operation not permitted). The description of this option states: If this option is switched on, and IO_STRICT_DEVMEM=n, the /dev/mem file only allows userspace access to PCI space and the BIOS code and data regions. This is sufficient for dosemu and X and all common users of /dev/mem. This is very x86-centric description. In fact, more generically, CONFIG_STRICT_DEVMEM blocks access to physical memory addresses that map to RAM, but allows access to addresses that don't map to RAM. The details of what ranges of physical address are allowed depend on the processor architecture, but all of them exclude the RAM where data of the kernel and of user land processes is stored.The additional option CONFIG_IO_STRICT_DEVMEM (disabled as of Ubuntu 18.04) blocks accesses to physical addresses that are claimed by a driver. Physical memory addresses that map to RAM . So there are physical memory addresses that don't map to RAM? Yes. That's the discussion I promised above about what it means to write to an address. A memory store instruction does not necessarily write to RAM. The processor decomposes the address and decides which peripheral to dispatch the store to. (When I say “the processor”, I encompass peripheral controllers which may not come from the same manufacturer.) RAM is only one of those peripherals. How the dispatch is done is very dependent on the processor architecture, but the fundamentals are more or less the same on all architectures. The processor basically decomposes the higher bits of the address and looks them up in some tables that are populated based on hard-coded information, information obtained by probing some buses, and information configured by the software. A lot of caching and buffering may be involved, but in a nutshell, after this decomposition, the processor writes something (encoding both the target address and the data that's being stored) on some bus and then it's up to the peripheral to deal with it. (Or the outcome of the table lookup might be that there is no peripheral at this address, in which case the processor enters a trap state where it executes some code in the kernel that normally results in a SIGBUS for the calling process.) A store to an address that maps to RAM doesn't “do” anything other than overwrite the value that was previously stored at this address, with the promise that a later load at the same address will give back the last stored value. But even RAM has a few addresses that don't behave this way: it has a few registers that can control things like refresh rate and voltage. In general, a read or write to a hardware register does whatever the hardware is programmed to do. Most accesses to hardware work this way: the software (normally kernel code) accesses a certain physical address, this reaches the bus that connects the processor to the peripheral, and the peripheral does its thing. Some processors (in particular x86) also have separate CPU instructions that cause reads/writes to peripherals which are distinct from memory load and store, but even on x86, many peripherals are reached through load/store. 
The command dd if=/dev/urandom of=/dev/mem writes random data to whatever peripheral is mapped at address 0 (and subsequent addresses, as long as the writes succeed). In practice, I expect that on many architectures, physical address 0 doesn't have any peripheral mapped to it, or has RAM, and therefore the very first write attempt fails. But if there is a peripheral mapped at address 0, or if you change the command to write to a different address, you'll trigger something unpredictable in the peripheral. With random data at increasing addresses, it's unlikely to do something interesting, but in principle it could turn off the computer (there's probably an address that does this in fact), overwrite some BIOS setting that makes it impossible to boot, or even hit some buggy peripheral in a way that damages it. alias Russian_roulette='dd if=/dev/urandom of=/dev/mem seek=$((4096*RANDOM+4096*32768*RANDOM))' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/532381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364072/"
]
} |
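To check whether the CONFIG_STRICT_DEVMEM protection discussed above is active on a given machine, the kernel build configuration can be inspected; the path assumes a Debian/Ubuntu-style /boot/config-<version> file:

```sh
grep STRICT_DEVMEM "/boot/config-$(uname -r)"
# CONFIG_STRICT_DEVMEM=y means RAM is not reachable through /dev/mem
```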
532,434 | Many times, especially when messing around with boot-loaders, I'll see numerical drive and partition numbers used. For instance, in my /boot/grub/grub.cfg I see set root='hd0,gpt2' , my UEFI boot entries often reference drive/partition numbers, and it seems to crop up in almost any context where bootloaders are concerned. Now that we have UUID and PARTUUID, addressing partitions in this manner seems incredibly unstable (afaik, drives are not guaranteed to be mounted in the same order always, a user may change the order in which drives are plugged into their motherboard, etc.). My questions therefore are twofold: Is this addressing scheme as unstable as I have outlined above? Am I missing something in the standard that means this scheme is far more reliable than I expect, or will this addressing scheme truly render your system unbootable (until you fix your boot entries, at least) as a result of your drives simply being recognized in a different order or plugged into different slots on your motherboard? If the answer to the question above is yes, then why does this addressing scheme continue to be used? Wouldn't using UUID or PARTUUID for everything be far more stable and consistent? | The plain numbering scheme is not actually used in recent systems (with "recent" being Ubuntu 9 and later; other distributions may have adapted in that era, too). You are correct in observing that the root partition is set with the plain numbering scheme. But this is only a default or fall-back setting which is usually overridden by the very next command, such as: search --no-floppy --fs-uuid --set=root 74686973-6973-616e-6578-616d706c650a This selects the root partition based on the file system's UUID. In practice, the plain numbering scheme is usually stable (as long as there are no hardware changes). The only instance where I observed non-predictable numbering was a system with many USB drives which were enumerated on a first-come-first-served basis and then emulated as IDE drives. None of these processes are inherently chaotic, so I assume a problem in that particular system's BIOS implementation. Note: "root partition" in this context means the partition to boot from; it may be different from the partition containing the root ( / ) file system. (Commands for looking up UUIDs are sketched after this row.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/532434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238378/"
]
} |
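To look up the UUIDs used by the search --fs-uuid command above (and the PARTUUIDs mentioned in the question), blkid and lsblk are the usual tools; the device name is only an example:

```sh
sudo blkid /dev/sda2                   # prints UUID= and PARTUUID= for one partition
lsblk -o NAME,FSTYPE,UUID,PARTUUID     # overview of all block devices
```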
532,548 | I just edited the .zshrc file to configure Z shell on FreeBSD, for example to update the PATH system variable. path+=/usr/local/openjdk12/bin How do I make the changes take effect? Must I log out and log in again? Is there a way to immediately run that file? | Restart zsh Zsh reads .zshrc when it starts. You don't need to log out and log back in. Just closing the terminal and opening a new one gives you your new .zshrc in this new terminal. But you can make this more direct. Just tell zsh to relaunch itself: exec zsh If you run this at a zsh prompt, this replaces the current instance of zsh by a new one, running in the same terminal. The new instance has the same environment variables as the previous one, but has fresh shell (non-exported) variables, and it starts a new history (so it'll mix in commands from other terminals in typical configurations). Any background jobs are disowned. Reread .zshrc You can also tell zsh to re-read .zshrc . This has the advantage of preserving the shell history, shell variables, and knowledge of background jobs. But depending on what you put in your .zshrc , this may or may not work. Re-reading .zshrc runs commands which may not work, or not work well, if you run them twice. . ~/.zshrc There are just too many things you can do to enumerate everything that's ok and not ok to put in .zshrc if you want to be able to run it twice. Here are just some common issues: If you append to a variable (e.g. fpath+=(~/.config/zsh) or chpwd_functions+=(my_chpwd) ), this appends the same elements again, which may or may not be a problem. If you define aliases, and also use the same name as a command, the command will now run the alias. For example, this works: function foo { … }alias foo='foo --common-option' But this doesn't, because the second time the file is sourced, foo () will expand the alias: foo () { … }alias foo='foo --common-option' If you patch an existing zsh function, you'll now be patching your own version, which will probably make a mess. If you do something like “swap the bindings of two keys”, that won't do what you want the second time. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/532548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56752/"
]
} |
532,549 | i'm trying to insert variables before specific line in my script. this is the code that i'm using: var1=$(echo "database1=")var2=$(echo "database2=")var3=$(echo "database3=")sed -i "/#variables/i \$var1\$var2\$var3" /data1/create_database i expect that create_database be like this after i run above command: database1=database2=database3=#variables but i get this result: database1= database2= database3=#variables tried few ways nothing worked. what should i do? | Restart zsh Zsh reads .zshrc when it starts. You don't need to log out and log back in. Just closing the terminal and opening a new one gives you your new .zshrc in this new terminal. But you can make this more direct. Just tell zsh to relaunch itself: exec zsh If you run this at a zsh prompt, this replaces the current instance of zsh by a new one, running in the same terminal. The new instance has the same environment variables as the previous one, but has fresh shell (non-exported) variables, and it starts a new history (so it'll mix in commands from other terminals in typical configurations). Any background jobs are disowned. Reread .zshrc You can also tell zsh to re-read .zshrc . This has the advantage of preserving the shell history, shell variables, and knowledge of background jobs. But depending on what you put in your .zshrc , this may or may not work. Re-reading .zshrc runs commands which may not work, or not work well, if you run them twice. . ~/.zshrc There are just too many things you can do to enumerate everything that's ok and not ok to put in .zshrc if you want to be able to run it twice. Here are just some common issues: If you append to a variable (e.g. fpath+=(~/.config/zsh) or chpwd_functions+=(my_chpwd) ), this appends the same elements again, which may or may not be a problem. If you define aliases, and also use the same name as a command, the command will now run the alias. For example, this works: function foo { … }alias foo='foo --common-option' But this doesn't, because the second time the file is sourced, foo () will expand the alias: foo () { … }alias foo='foo --common-option' If you patch an existing zsh function, you'll now be patching your own version, which will probably make a mess. If you do something like “swap the bindings of two keys”, that won't do what you want the second time. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/532549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314925/"
]
} |
532,578 | I've used the mkfifo <file> command to create named FIFOs, where one process writes to the file, and another process reads from the file. Now, I know the mknod command is able to create named pipes. Are these named pipes equivalent to the FIFOs created by mkfifo , or do they have different features? | Yes, it's equivalent, but obviously only if you tell mknod to actually create a FIFO, and not a block or character device (rarely done these days as devtmpfs/udev does it for you). mkfifo foobar# same differencemknod foobar p In strace it's identical for both commands: mknod("foobar", S_IFIFO|0666) = 0 So in terms of syscalls, mkfifo is actually shorthand for mknod . The biggest difference, then, is in semantics. With mkfifo you can create a bunch of FIFOs in one go: mkfifo a b c With mknod , since you have to specify the type, it only ever accepts one argument: # wrong:$ mknod a b c pmknod: invalid major device number ‘c’# right:mknod a pmknod b pmknod c p In general, mknod can be difficult to use correctly. So if you want to work with FIFO, stick to mkfifo . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/532578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128739/"
]
} |
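A tiny demonstration of using a FIFO created either way; the writer blocks until a reader opens the other end, which is the defining behaviour of a named pipe:

```sh
mkfifo /tmp/demo.fifo            # or equivalently: mknod /tmp/demo.fifo p
echo hello > /tmp/demo.fifo &    # writer blocks in the background until a reader appears
cat /tmp/demo.fifo               # prints "hello"
rm /tmp/demo.fifo
```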
532,599 | if I have a csv file in the following format: column1,column2,column3,column4,column5,column6,column7,column8 and I want awk to only print columns 2 till 7 I would use: awk -F',' '{print $2 "," $3 "," $4 "," $5 "," $6 "," $7}' file.csv and get: column2,column3,column4,column5,column6,column7 is there a way to concatenate the columns 2-7 to simplify the command. As I'm thinking of a file with quite a bit more columns, my awk command would get horribly long. | The utility cut has a compact notation: cut -d, -f2-7 <input-file> producing: column2,column3,column4,column5,column6,column7 Answering the comment by @PlasmaBinturong: my intent was address the issue of a short calling sequence: "... my awk command would get horribly long ...". However, one can also find codes that arrange the fields as one might desire. As much as I like awk, perl, python, I have often found it useful to build a specific utility to extend the capabilities of standard *nix. So here is an excerpt from a test script, s2, showing utilities recut and arrange, both allow re-arrangement and duplication, with arrange also allowing decreasing field ranges: FILE=${1-data1}# Utility functions: print-as-echo, print-line-with-visual-space.pe() { for _i;do printf "%s" "$_i";done; printf "\n"; }pl() { pe;pe "-----" ;pe "$*"; }pl " Input data file $FILE:"head $FILEpl " Results, cut:"cut -d, -f2-7 $FILEpl " Results, recut (modified as my-recut):"my-recut -d "," 7,6,2-5 < $FILEpl " Results, arrange:"arrange -s "," -f 5,3-1,7,5,3-4,5 $FILE producing results from these versions: OS, ker|rel, machine: Linux, 3.16.0-10-amd64, x86_64Distribution : Debian 8.11 (jessie) bash GNU bash 4.3.30cut (GNU coreutils) 8.23recut - ( local: RepRev 1.1, ~/bin/recut, 2010-06-10 )arrange (local) 1.15----- Input data file data1:column1,column2,column3,column4,column5,column6,column7,column8----- Results, cut:column2,column3,column4,column5,column6,column7----- Results, recut (modified as my-recut):column7,column6,column2,column3,column4,column5----- Results, arrange:column5,column3,column2,column1,column7,column5,column3,column4,column5 The my-recut is a slight modification the textutils code recut, and arrange is our version of an extended cut. More information: recut Process fields like cut, allow repetitions and re-ordering. (what)Path : ~/bin/recutVersion : - ( local: RepRev 1.1, ~/bin/recut, 2010-06-10 )Length : 56 linesType : Perl script, ASCII text executableShebang : #!/usr/bin/perlHome : http://www1.cuni.cz/~obo/textutils/ (doc)Modules : (for perl codes) Getopt::Long 2.42arrange Arrange fields, like cut, but in user-specified order. (what)Path : ~/bin/arrangeVersion : 1.15Length : 355 linesType : Perl script, ASCII text executableShebang : #!/usr/bin/perlModules : (for perl codes) warnings 1.23 strict 1.08 Carp 1.3301 Getopt::Euclid 0.4.5 Best wishes ... cheers, drl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/532599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
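Since the question asked specifically about shortening the awk command, a hedged awk equivalent of cut -d, -f2-7 is a small loop over the field range, which stays short no matter how many columns are kept:

```sh
awk -F, -v OFS=, '{ out = $2; for (i = 3; i <= 7; i++) out = out OFS $i; print out }' file.csv
```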
532,603 | I'm trying to boot some new Linux virtual machines (Ubuntu) via QEMU/KVM. They boot up fine up until the point where I move my mouse on the screen (VNC connection). When I do, the display freezes and I can't use the desktop environment until I reboot. I've tried 2 different versions of Ubuntu, version 18.04, 19.04 with the same results. Windows virtual machines do not exhibit the problem. When switching the Graphics model from QXL to VGA, moving the mouse doesn't freeze the VM, but the resolution is low and can't be changed. I can still ssh into the VM even when the display is frozen, and have tried killing/restarting X and lightdm but the VNC session is still stuck. I also attempted to tail all logs while I reproduce the issue but I don't see anything that sticks out. Is there any way to know why QXL is causing the display to freeze? Qemu package versions - Host ii ipxe-qemu 1.0.0+git-20180124.fbe8c52d-0ubuntu2.2 all PXE boot firmware - ROM images for qemuii ipxe-qemu-256k-compat-efi-roms 1.0.0+git-20150424.a25a16d-0ubuntu2 all PXE boot firmware - Compat EFI ROM images for qemuii qemu-block-extra:amd64 1:2.11+dfsg-1ubuntu7.15 amd64 extra block backend modules for qemu-system and qemu-utilsii qemu-kvm 1:2.11+dfsg-1ubuntu7.15 amd64 QEMU Full virtualization on x86 hardwareii qemu-system-common 1:2.11+dfsg-1ubuntu7.15 amd64 QEMU full system emulation binaries (common files)ii qemu-system-x86 1:2.11+dfsg-1ubuntu7.15 amd64 QEMU full system emulation binaries (x86)ii qemu-utils 1:2.11+dfsg-1ubuntu7.15 amd64 QEMU utilities VM information - Guest CPU: 2RAM: 8gbHDD: 100GBDisplay VNCVideo: QXL Xorg.0.log on Guest [ 8.411] Module class: X.Org Video Driver[ 8.411] ABI class: X.Org Video Driver, version 20.0[ 8.411] (II) LoadModule: "fbdev"[ 8.411] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so[ 8.411] (II) Module fbdev: vendor="X.Org Foundation"[ 8.411] compiled for 1.18.1, module version = 0.4.4[ 8.411] Module class: X.Org Video Driver[ 8.411] ABI class: X.Org Video Driver, version 20.0[ 8.411] (II) LoadModule: "vesa"[ 8.411] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so[ 8.412] (II) Module vesa: vendor="X.Org Foundation"[ 8.412] compiled for 1.18.1, module version = 2.3.4[ 8.412] Module class: X.Org Video Driver[ 8.412] ABI class: X.Org Video Driver, version 20.0[ 8.412] (II) qxl: Driver for QXL virtual graphics: QXL 1[ 8.412] (II) modesetting: Driver for Modesetting Kernel Drivers: kms[ 8.412] (II) FBDEV: driver for framebuffer: fbdev[ 8.412] (II) VESA: driver for VESA chipsets: vesa[ 8.412] (II) [KMS] Kernel modesetting enabled.[ 8.412] (WW) Falling back to old probe method for modesetting[ 8.412] (WW) Falling back to old probe method for fbdev[ 8.412] (II) Loading sub module "fbdevhw"[ 8.412] (II) LoadModule: "fbdevhw"[ 8.412] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so[ 8.412] (II) Module fbdevhw: vendor="X.Org Foundation"[ 8.412] compiled for 1.18.4, module version = 0.0.2[ 8.412] ABI class: X.Org Video Driver, version 20.0[ 8.412] (WW) Falling back to old probe method for vesa[ 8.412] (II) qxl(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32[ 8.412] (==) qxl(0): Depth 24, (--) framebuffer bpp 32[ 8.412] (==) qxl(0): RGB weight 888[ 8.412] (==) qxl(0): Default visual is TrueColor[ 8.412] (==) qxl(0): Using gamma correction (1.0, 1.0, 1.0)[ 8.412] (II) qxl(0): Deferred Frames: Disabled[ 8.412] (II) qxl(0): Offscreen Surfaces: Enabled[ 8.412] (II) qxl(0): Image Cache: Enabled[ 8.412] (II) qxl(0): Fallback 
Cache: Enabled[ 8.412] (==) qxl(0): DPI set to (96, 96)[ 8.412] (II) Loading sub module "fb"[ 8.412] (II) LoadModule: "fb"[ 8.413] (II) Loading /usr/lib/xorg/modules/libfb.so[ 8.413] (II) Module fb: vendor="X.Org Foundation"[ 8.413] compiled for 1.18.4, module version = 1.0.0[ 8.413] ABI class: X.Org ANSI C Emulation, version 0.4[ 8.413] (II) Loading sub module "ramdac"[ 8.413] (II) LoadModule: "ramdac"[ 8.413] (II) Module "ramdac" already built-in[ 8.413] (II) qxl(0): Output Virtual-0 has no monitor section[ 8.413] (II) qxl(0): Output Virtual-1 has no monitor section[ 8.413] (II) qxl(0): Output Virtual-2 has no monitor section[ 8.413] (II) qxl(0): Output Virtual-3 has no monitor section[ 8.413] (II) qxl(0): EDID for output Virtual-0[ 8.413] (II) qxl(0): Printing probed modes for output Virtual-0[ 8.413] (II) qxl(0): Modeline "1024x768"x59.9 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync (47.8 kHz P)[ 8.413] (II) qxl(0): Modeline "1920x1200"x59.9 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync (74.6 kHz)[ 8.413] (II) qxl(0): Modeline "1920x1080"x60.0 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync (67.2 kHz)[ 8.413] (II) qxl(0): Modeline "1600x1200"x59.9 161.00 1600 1712 1880 2160 1200 1203 1207 1245 -hsync +vsync (74.5 kHz)[ 8.413] (II) qxl(0): Modeline "1680x1050"x60.0 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync (65.3 kHz)[ 8.413] (II) qxl(0): Modeline "1400x1050"x60.0 121.75 1400 1488 1632 1864 1050 1053 1057 1089 -hsync +vsync (65.3 kHz)[ 8.413] (II) qxl(0): Modeline "1280x1024"x59.9 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync (63.7 kHz)[ 8.413] (II) qxl(0): Modeline "1440x900"x59.9 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync (55.9 kHz)[ 8.413] (II) qxl(0): Modeline "1280x960"x59.9 101.25 1280 1360 1488 1696 960 963 967 996 -hsync +vsync (59.7 kHz)[ 8.413] (II) qxl(0): Modeline "1280x854"x59.9 89.25 1280 1352 1480 1680 854 857 867 887 -hsync +vsync (53.1 kHz)[ 8.413] (II) qxl(0): Modeline "1280x800"x59.8 83.50 1280 1352 1480 1680 800 803 809 831 -hsync +vsync (49.7 kHz)[ 8.413] (II) qxl(0): Modeline "1280x720"x59.9 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync (44.8 kHz)[ 8.413] (II) qxl(0): Modeline "1152x768"x59.8 71.75 1152 1216 1328 1504 768 771 781 798 -hsync +vsync (47.7 kHz)[ 8.413] (II) qxl(0): Modeline "800x600"x59.9 38.25 800 832 912 1024 600 603 607 624 -hsync +vsync (37.4 kHz)[ 8.413] (II) qxl(0): Modeline "848x480"x59.7 31.50 848 872 952 1056 480 483 493 500 -hsync +vsync (29.8 kHz)[ 8.413] (II) qxl(0): Modeline "720x480"x59.7 26.75 720 744 808 896 480 483 493 500 -hsync +vsync (29.9 kHz)[ 8.413] (II) qxl(0): Modeline "640x480"x59.4 23.75 640 664 720 800 480 483 487 500 -hsync +vsync (29.7 kHz)[ 8.413] (II) qxl(0): EDID for output Virtual-1[ 8.413] (II) qxl(0): EDID for output Virtual-2[ 8.413] (II) qxl(0): EDID for output Virtual-3[ 8.413] (II) qxl(0): Output Virtual-0 connected[ 8.413] (II) qxl(0): Output Virtual-1 disconnected[ 8.413] (II) qxl(0): Output Virtual-2 disconnected[ 8.413] (II) qxl(0): Output Virtual-3 disconnected[ 8.413] (II) qxl(0): Using exact sizes for initial modes[ 8.413] (II) qxl(0): Output Virtual-0 using initial mode 1024x768 +0+0[ 8.413] (II) qxl(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated.[ 8.413] (II) qxl(0): PreInit complete[ 8.414] (II) UnloadModule: "modesetting"[ 8.414] (II) Unloading modesetting[ 8.414] (II) UnloadModule: "fbdev"[ 8.414] (II) Unloading fbdev[ 8.414] (II) UnloadSubModule: "fbdevhw"[ 8.414] (II) Unloading 
fbdevhw[ 8.414] (II) UnloadModule: "vesa"[ 8.414] (II) Unloading vesa[ 8.414] (--) Depth 24 pixmap format is 32 bpp[ 8.414] (II) UXA(0): Driver registered support for the following operations:[ 8.414] (II) solid[ 8.414] (II) copy[ 8.414] (II) composite (RENDER acceleration)[ 8.414] (II) put_image[ 8.415] (II) qxl(0): RandR 1.2 enabled, ignore the following RandR disabled message.[ 8.415] resizing primary to 1024x768[ 8.415] primary is 0x5599314bee30[ 8.415] (--) RandR disabled[ 8.418] (II) SELinux: Disabled on system[ 8.420] (II) AIGLX: Screen 0 is not DRI2 capable[ 8.420] (EE) AIGLX: reverting to software rendering[ 8.516] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer[ 8.516] (II) AIGLX: Loaded and initialized swrast[ 8.516] (II) GLX: Initialized DRISWRAST GL provider for screen 0[ 8.516] (II) qxl(0): Setting screen physical size to 270 x 203[ 8.624] (II) config/udev: Adding input device Power Button (/dev/input/event0)[ 8.624] (**) Power Button: Applying InputClass "evdev keyboard catchall"[ 8.624] (II) LoadModule: "evdev"[ 8.625] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so[ 8.627] (II) Module evdev: vendor="X.Org Foundation"[ 8.627] compiled for 1.18.1, module version = 2.10.1[ 8.627] Module class: X.Org XInput Driver[ 8.627] ABI class: X.Org XInput driver, version 22.1[ 8.627] (II) Using input driver 'evdev' for 'Power Button'[ 8.627] (**) Power Button: always reports core events[ 8.627] (**) evdev: Power Button: Device: "/dev/input/event0"[ 8.627] (--) evdev: Power Button: Vendor 0 Product 0x1[ 8.627] (--) evdev: Power Button: Found keys[ 8.627] (II) evdev: Power Button: Configuring as keyboard[ 8.627] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0/event0"[ 8.627] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 6)[ 8.627] (**) Option "xkb_rules" "evdev"[ 8.627] (**) Option "xkb_model" "pc105"[ 8.627] (**) Option "xkb_layout" "us"[ 8.628] (II) config/udev: Adding input device AT Translated Set 2 keyboard (/dev/input/event1)[ 8.628] (**) AT Translated Set 2 keyboard: Applying InputClass "evdev keyboard catchall"[ 8.628] (II) Using input driver 'evdev' for 'AT Translated Set 2 keyboard'[ 8.628] (**) AT Translated Set 2 keyboard: always reports core events[ 8.628] (**) evdev: AT Translated Set 2 keyboard: Device: "/dev/input/event1"[ 8.628] (--) evdev: AT Translated Set 2 keyboard: Vendor 0x1 Product 0x1[ 8.628] (--) evdev: AT Translated Set 2 keyboard: Found keys[ 8.628] (II) evdev: AT Translated Set 2 keyboard: Configuring as keyboard[ 8.628] (**) Option "config_info" "udev:/sys/devices/platform/i8042/serio0/input/input1/event1"[ 8.628] (II) XINPUT: Adding extended input device "AT Translated Set 2 keyboard" (type: KEYBOARD, id 7)[ 8.628] (**) Option "xkb_rules" "evdev"[ 8.628] (**) Option "xkb_model" "pc105"[ 8.628] (**) Option "xkb_layout" "us"[ 8.629] (II) config/udev: Adding input device ImExPS/2 Generic Explorer Mouse (/dev/input/event2)[ 8.629] (**) ImExPS/2 Generic Explorer Mouse: Applying InputClass "evdev pointer catchall"[ 8.629] (II) Using input driver 'evdev' for 'ImExPS/2 Generic Explorer Mouse'[ 8.629] (**) ImExPS/2 Generic Explorer Mouse: always reports core events[ 8.629] (**) evdev: ImExPS/2 Generic Explorer Mouse: Device: "/dev/input/event2"[ 8.629] (--) evdev: ImExPS/2 Generic Explorer Mouse: Vendor 0x2 Product 0x6[ 8.629] (--) evdev: ImExPS/2 Generic Explorer Mouse: Found 9 mouse buttons[ 8.629] (--) evdev: ImExPS/2 Generic Explorer Mouse: Found scroll wheel(s)[ 8.629] (--) evdev: ImExPS/2 
Generic Explorer Mouse: Found relative axes[ 8.629] (--) evdev: ImExPS/2 Generic Explorer Mouse: Found x and y relative axes[ 8.629] (II) evdev: ImExPS/2 Generic Explorer Mouse: Configuring as mouse[ 8.629] (II) evdev: ImExPS/2 Generic Explorer Mouse: Adding scrollwheel support[ 8.629] (**) evdev: ImExPS/2 Generic Explorer Mouse: YAxisMapping: buttons 4 and 5[ 8.629] (**) evdev: ImExPS/2 Generic Explorer Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200[ 8.629] (**) Option "config_info" "udev:/sys/devices/platform/i8042/serio1/input/input3/event2"[ 8.629] (II) XINPUT: Adding extended input device "ImExPS/2 Generic Explorer Mouse" (type: MOUSE, id 8)[ 8.629] (II) evdev: ImExPS/2 Generic Explorer Mouse: initialized for relative axes.[ 8.629] (**) ImExPS/2 Generic Explorer Mouse: (accel) keeping acceleration scheme 1[ 8.629] (**) ImExPS/2 Generic Explorer Mouse: (accel) acceleration profile 0[ 8.629] (**) ImExPS/2 Generic Explorer Mouse: (accel) acceleration factor: 2.000[ 8.629] (**) ImExPS/2 Generic Explorer Mouse: (accel) acceleration threshold: 4[ 8.630] (II) config/udev: Adding input device ImExPS/2 Generic Explorer Mouse (/dev/input/mouse0)[ 8.630] (II) No input driver specified, ignoring this device.[ 8.630] (II) This device may have been added with another device file. /var/log/gpu-manager.log - Guest Is nvidia egl available? noIs fglrx available? noIs fglrx-core available? noIs mesa available? yesIs mesa egl available? yesIs pxpress available? noIs prime available? noIs prime egl available? noSingle card detectedNo change - nothing to do /var/log/kern.log - Guest Jul 28 15:19:48 localhost NetworkManager[803]: <info> [1564327188.3306] device (docker0): state change: ip-config -> ip-check (reason 'none') [70 80 0]Jul 28 15:19:48 localhost NetworkManager[803]: <info> [1564327188.3315] device (docker0): state change: ip-check -> secondaries (reason 'none') [80 90 0]Jul 28 15:19:48 localhost NetworkManager[803]: <info> [1564327188.3319] device (docker0): state change: secondaries -> activated (reason 'none') [90 100 0]Jul 28 15:19:48 localhost NetworkManager[803]: <info> [1564327188.3433] device (docker0): Activation: successful, device activated.Jul 28 15:19:48 localhost kernel: [ 9.768070] aufs au_opts_verify:1597:dockerd[1395]: dirperm1 breaks the protection by the permission bits on the lower branchJul 28 15:19:50 localhost gnome-session-binary[1999]: Entering running stateJul 28 15:19:50 localhost kernel: [ 11.805526] ISO 9660 Extensions: Microsoft Joliet Level 3Jul 28 15:19:50 localhost kernel: [ 11.814958] ISO 9660 Extensions: RRIP_1991AJul 28 15:19:51 localhost NetworkManager[803]: <info> [1564327191.6905] manager: WiFi hardware radio set enabledJul 28 15:19:51 localhost NetworkManager[803]: <info> [1564327191.6905] manager: WWAN hardware radio set enabled /var/log/syslog - Guest Jul 28 15:20:11 localhost systemd-timesyncd[432]: Synchronized to time server 91.189.89.199:123.Jul 28 15:20:11 localhost systemd[1]: Time has been changedJul 28 15:20:11 localhost systemd[1502]: Time has been changedJul 28 15:20:14 localhost systemd[1]: Started Session 1 of user sansforensics.Jul 28 15:20:14 localhost pulseaudio[2145]: [pulseaudio] bluez5-util.c: GetManagedObjects() failed: org.freedesktop.DBus.Error.TimedOut: Failed to activate service 'org.bluez': timed out .xsession-errors - Guest openConnection: connect: No such file or directorycannot connect to brltty at :0 | The utility cut has a compact notation: cut -d, -f2-7 <input-file> producing: 
column2,column3,column4,column5,column6,column7 Answering the comment by @PlasmaBinturong: my intent was address the issue of a short calling sequence: "... my awk command would get horribly long ...". However, one can also find codes that arrange the fields as one might desire. As much as I like awk, perl, python, I have often found it useful to build a specific utility to extend the capabilities of standard *nix. So here is an excerpt from a test script, s2, showing utilities recut and arrange, both allow re-arrangement and duplication, with arrange also allowing decreasing field ranges: FILE=${1-data1}# Utility functions: print-as-echo, print-line-with-visual-space.pe() { for _i;do printf "%s" "$_i";done; printf "\n"; }pl() { pe;pe "-----" ;pe "$*"; }pl " Input data file $FILE:"head $FILEpl " Results, cut:"cut -d, -f2-7 $FILEpl " Results, recut (modified as my-recut):"my-recut -d "," 7,6,2-5 < $FILEpl " Results, arrange:"arrange -s "," -f 5,3-1,7,5,3-4,5 $FILE producing results from these versions: OS, ker|rel, machine: Linux, 3.16.0-10-amd64, x86_64Distribution : Debian 8.11 (jessie) bash GNU bash 4.3.30cut (GNU coreutils) 8.23recut - ( local: RepRev 1.1, ~/bin/recut, 2010-06-10 )arrange (local) 1.15----- Input data file data1:column1,column2,column3,column4,column5,column6,column7,column8----- Results, cut:column2,column3,column4,column5,column6,column7----- Results, recut (modified as my-recut):column7,column6,column2,column3,column4,column5----- Results, arrange:column5,column3,column2,column1,column7,column5,column3,column4,column5 The my-recut is a slight modification the textutils code recut, and arrange is our version of an extended cut. More information: recut Process fields like cut, allow repetitions and re-ordering. (what)Path : ~/bin/recutVersion : - ( local: RepRev 1.1, ~/bin/recut, 2010-06-10 )Length : 56 linesType : Perl script, ASCII text executableShebang : #!/usr/bin/perlHome : http://www1.cuni.cz/~obo/textutils/ (doc)Modules : (for perl codes) Getopt::Long 2.42arrange Arrange fields, like cut, but in user-specified order. (what)Path : ~/bin/arrangeVersion : 1.15Length : 355 linesType : Perl script, ASCII text executableShebang : #!/usr/bin/perlModules : (for perl codes) warnings 1.23 strict 1.08 Carp 1.3301 Getopt::Euclid 0.4.5 Best wishes ... cheers, drl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/532603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364401/"
]
} |
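Following the cut/recut answer above: because cut always keeps fields in their original order, the re-ordering case can also be handled with a single awk call, without installing recut or arrange. A minimal sketch, reusing the data1 file and the field order from that answer:

awk -F, -v OFS=, '{print $7, $6, $2, $3, $4, $5}' data1

For the sample row this prints column7,column6,column2,column3,column4,column5, matching the recut output shown above.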
532,811 | I find that when writing text as input to another program, any command substitutions in double quotes within the intended text are interpreted and expanded by the shell The links in the answer here states that single quotes can be used to prevent parameter expansion or command substitution. However I'm finding that enclosing a command substitution in single-quotes also fails to stop the shell from expanding the command substitution How do you prevent the shell from interpreting command substitutions that are intended as text rather than a command to be executed? A demonstration $ echo "`wc -l *`" attempts to count lines in all files in the current directory $ echo "'`wc -l *`'" Same result, i.e. counts lines in all files in the current directory update From this demonstration I've spotted that the problem seems to be that I am quoting the single quotes. I think enclosing single quotes and ` (backtick) in double quotes preserves the literal meaning of (i.e. suppresses) the single quotes but does not preserve the literal meaning of the backquote (i.e. backtick) that introduces the command substitution. In my use case the input for another command needs to be quoted. With this document saying that: A single-quote cannot occur within single quotes How do you prevent a single-quoted command substitution from being expanded when the single-quoted command substitution is within a (double) quoted string? There should be a way to do it other than using backslash escapes Actual situation In a program I'm using the only way to split a description of a task into separate lines is to enclose the description in double-quotes: $ task add "first line doesn\'t say muchSecond line says a lot but part of this line does not appear in the resulting description 'truncate -s0 !(temp_file | temp_dir)' truncates all files to 0 bytes as shown by: '`wc -l *`'" The resulting description: first line doesn\ -s0 !(temp_file | temp_dir)' truncates all files to 0 bytes as shown by: 0 file1 10 file2 0 directory1 0 directory2 502 file3 123 file4 162 file5 0 directory3 As you can see 't say muchSecond line says a lot but part of this line does not appear in the resulting description 'truncate is missing from the description and the shell has interpreted 'wc -l *' as a command substitution, thereby including the line counts of all files in the current directory as part of the description What's causing the shell to remove the part of the argument to task between \ (backslash) and -s , and how do you prevent the shell from interpreting the above single-quoted command substitution (i.e. '`wc -l *`' )? | Use single-quote strong quoting: printf '%s\n' '`wc -l *`' And if you want to also include single quotes in that argument passed to printf , you'd need to use different quotes for ' itself like: printf '%s\n' '`wc -l *` and a '"'"' character' Or: printf '%s\n' '`wc -l *` and a '\'' character' Other alternatives include escaping the ` with backslash inside double quotes: printf '%s\n' "\`wc -l *\` and a ' character" Or have ` be the result of some expansion: backtick='`'printf '%s\n' "${backtick}wc -l *${backtick} and a ' character" Also note: cat << 'EOF'`wc -l *` and a ' character and a " characterEOF to output arbitrary text without having to worry about quoting (note the quotes around the first EOF ). 
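As a quick contrast (any POSIX shell), the quotes around that first delimiter are exactly what keeps the backticks literal; with an unquoted delimiter the substitution runs:

# quoted delimiter: the line is printed literally
cat << 'EOF'
`wc -l *`
EOF

# unquoted delimiter: the command substitution is expanded
cat << EOF
`wc -l *`
EOF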
You can also do: var=$(cat << 'EOF'echo '`wc -l *`'EOF) Which with ksh93 or mksh you can optimise to: var=$(<<'EOF'echo '`wc -l *`'EOF) (also works in zsh , but still runs cat in a subshell there) for $var to contain literally echo '`wc -l *`' . In the fish shell, you can embed ' within '...' with \' : printf '%s\n' '`wc -l *` and a \' character' but anyway ` is not special there, so: printf '%s\n' "`wc -l *` and a ' character" would work as well. In rc, es or zsh -o rcquotes , you can insert a ' within '...' with '' : printf '%s\n' '`wc -l *` and a '' character' See How to use a special character as a normal one? for more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/532811",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/289865/"
]
} |
532,848 | I have a two-column file; the file is sorted the way I want it on column 1 already. I would like to sort on column 2, within each column 1 category. However, sort does not understand the sort order of column 1. The normal way (from similar questions here on stack) would be this: sort --stable -k1,1 -k2,2n But I cannot specify the sort on k1, because it is arbitrary. Example input: C 2C 1A 2A 1B 2 B 1 and output: C 1C 2A 1A 2B 1 B 2 | You could use awk to start a new sort for each block: % awk -v cmd="sort -k2,2" '$1 != prev {close(cmd); prev=$1} {print | cmd}' fooC 1C 2A 1A 2B 1B 2 $1 != prev {close(cmd); prev=$1} - when the saved value is different, we have a new block, so we close any previously started sort {print | "sort -k2,2"}' pipes the output to sort , starting it if it isn't already running (awk can keep track of commands it starts) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/532848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285525/"
]
} |
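For the per-block sort question above: if starting one sort process per block is undesirable, a decorate-sort-undecorate sketch gives the same output with a single sort. This assumes GNU tools and the whitespace-separated sample file foo used in the answer:

awk '$1 != prev {n++; prev = $1} {print n "\t" $0}' foo | sort -k1,1n -k3,3n | cut -f2-

The awk pass numbers each block of identical first fields, sort orders by that block number and then numerically by the original second column, and cut drops the helper field again.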
532,985 | When I have a private key loaded for a user, I can run ssh-copy-id user@remotehostname and will be prompted for a password. When I enter the correct password, I can now log in with the key. I have previously always run this command with -i <path to public key> as an argument, but now realize I don't need the path to the public key. When I log in as user@hostname, I cat the .ssh/authorized_keys file, and see the public key that my private key matches, which is what confuses me - I never provided a public key. How does ssh-copy-id know which public key matches the private key I have loaded locally when I run the command? How does it know what to add to the authorized keys file when I don't provide it? I hope I am being clear - I've always run ssh-copy-id -i <public key> and it makes sense to me how this works - it logs in and copies the public key to the authorized keys file. But if I DON'T provide the public key (i.e. I run ssh-add <private key> before running ssh-copy-id), it still works as long as I have the private key loaded, and I don't understand how it gets the public key. edit: To clarify, I am not keeping the default id*.pub naming convention. So the logic that I'm seeing about searching for an id*.pub in the man page doesn't seem to apply. In fact, I can create a keypair called randompair, load randompair, rename rendompair.pub as newname.pub, run ssh-copy-id and it still loads the correct public key. Looking at the bash script itself leaves me a little confused as to how it accomplishes this. | This is pretty well documented in the manual page on recent systems. Note that there are several different versions of the script; Arch Linux and RHEL/CentOS seem to have the same version as Debian/Ubuntu , but FreeBSD has slightly different options. By default, ssh-copy-id calls ssh-add -L to list the keys that you have registered in the SSH agent. ssh-add -L outputs a list of public keys for which you have the private key in the agent. You might wonder how the agent can do this since you don't pass it a public key either. The answer is that it's always possible to reconstruct the public key from the private key (this is true of all the cryptosystems that SSH supports and most that it doesn't). This is only true of the “mathematical” part of the key, however. The public key file can also contain a comment (which you can set with ssh-keygen -C ), and the agent does not load this comment, so if you use ssh-copy-id and it takes a key via the agent, the remote host won't have this comment in authorized_keys . If there is no running agent or it doesn't have any key, recent Linux ssh-copy-id look for ( straight from the man page ) the most recent file that matches: ~/.ssh/id*.pub , (excluding those that match ~/.ssh/*-cert.pub ) so if you create a key that is not the one you want ssh-copy-id to use, just use touch(1) on your preferred key's .pub file to reinstate it as the most recent. Older versions of the script and non-Linux versions don't have this most-recent-file behavior. As far as I remember, even older versions didn't probe the agent and just read the default path ~/.ssh/id_rsa.pub by default. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/532985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364729/"
]
} |
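For the ssh-copy-id question above, the reconstruction of a public key from a private key can be seen directly with ssh-keygen; the key path below is only an example:

ssh-keygen -y -f ~/.ssh/randompair

prints the matching public key, and ssh-add -L shows the same derived public keys for every key currently loaded in the agent.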
533,156 | If I try to start python in a bash script, the script will stop running and no commands will execute after "Python" is called. In this simple example, "TESTPRINT" will not be printed. It seems like the script just stops. #!/bin/bashpythonprint("TESTPRINT")Echo How do I make the script continue running after going into Python? I believe I had the same problem a few years ago after writing a script that first needed to shell into an Android Phone. I can't remember how I fixed it that time. | To run a set of Python commands from a bash script, you must give the Python interpreter the commands to run, either from a file (Python script) that you create in the script, as in #!/bin/bash -e# Create script as "script.py"cat >script.py <<'END_SCRIPT'print("TESTPRINT")END_SCRIPT# Run script.pypython script.pyrm script.py (this creates a new file called script.py or overwrites that file if it already exists, and then instructs Python to run it; it is then deleted) ... or directly via some form of redirection, for example a here-document: #!/bin/bashpython - <<'END_SCRIPT'print("TESTPRINT")END_SCRIPT What this does is running python - which instructs the Python interpreter to read the script from standard input. The shell then sends the text of the Python script (delimited by END_SCRIPT in the shell script) to the Python process' standard input stream. Note that the two bits of code above are subtly different in that the second script's Python process has its standard input connected to the script that it's reading, while the first script's Python process is free to read data other than the script from standard input. This matters if your Python code reads from standard input. Python can also take a set of commands from the command line directly with its -c option: #!/bin/bashpython -c 'print("TESTPRINT")' What you can't do is to "switch to Python" in the middle of a bash script. The commands in a script is executed by bash one after the other, and while a command is executing, the script itself waits for it to terminate (if it's not a background job). This means that your original script would start Python in interactive mode, temporarily suspending the execution of the bash script until the Python process terminates. The script would then try to execute print("TESTPRINT") as a shell command. It's a similar issue with using ssh like this in a script: ssh user@servercd /tmpls (which may possibly be similar to what you say you tried a few years ago). This would not connect to the remote system and run the cd and ls commands there. It would start an interactive shell on the remote system, and once that shell has terminated (giving control back to the script), cd and ls would be run locally. Instead, to execute the commands on a remote machine, use ssh user@server "cd /tmp; ls" (This is a lame example, but you may get the point). The below example shows how you may actually do what you propose. It comes with several warning label and caveats though, and you should never ever write code like this (because it's obfuscated and therefore unmaintainable and, dare I say it, downright bad). python -print("TESTPRINT") Running it: $ sh -s <script.shTESTPRINT What happens here is that the script is being run by sh -s . The -s option to sh (and to bash ) tells the shell to execute the shell script arriving over the standard input stream. The script then starts python - , which tells Python to run whatever comes in over the standard input stream. 
The next thing on that stream, since it's inherited from sh -s by Python (and therefore connected to our script text file), is the Python command print("TESTPRINT") . The Python interpreter would then continue reading and executing commands from the script file until it runs out or executes the Python command exit() . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/533156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364877/"
]
} |
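A related pattern for the bash-plus-Python question above is passing shell values into the embedded script as arguments, so the delimiter can stay quoted and no shell expansion happens inside the Python code. A small sketch assuming python3:

#!/bin/bash
name="world"
python3 - "$name" <<'END_SCRIPT'
import sys
print("TESTPRINT", sys.argv[1])
END_SCRIPT

Here "$name" arrives as sys.argv[1] inside the Python process.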
533,161 | This is more about finding an elegant solution to a problem, I think I have a working solution. I have the following input file format, tab-separated, on an Ubuntu machine: AC003665.1 17 47813266 AGCAGGCGCA 83RIOK3 18 23453502 GCAAGGCCCC 52UBE2Z 17 48910880 CTAAGGATCC 48CSNK1D 17 82251379 AATTTAGCCA 68CSNK1D 17 82251379 AATTTCTTGT 38SMURF1 7 99143726 GACAGATTGG 74SMURF1 7 99143726 GACAGATTGG 61RIOK3 18 23453502 GCAAGACTTT 69 I want to get only one line per occurence of field 3, the one that has the highest value in field 5. Output should therefore be : AC003665.1 17 47813266 AGCAGGCGCA 83CSNK1D 17 82251379 AATTTAGCCA 68UBE2Z 17 48910880 CTAAGGATCC 48SMURF1 7 99143726 GACAGATTGG 74RIOK3 18 23453502 GCAAGACTTT 69 Order is irrelevant for my purposes. I have found a solution that involves sorting first on field 5, and then on field 3, that I think works: sort -k 5,5nr input | sort -u -k 3,3n > output It works with all my test files and I think should work in any case, as this should ensure that for every value of field 3 the sorting will see first (and therefore keep) the line with the highest value for field 5. I however feel that there should be a more elegant (and maybe more foolproof) solution to that problem ? Any help is appreciated. | To run a set of Python commands from a bash script, you must give the Python interpreter the commands to run, either from a file (Python script) that you create in the script, as in #!/bin/bash -e# Create script as "script.py"cat >script.py <<'END_SCRIPT'print("TESTPRINT")END_SCRIPT# Run script.pypython script.pyrm script.py (this creates a new file called script.py or overwrites that file if it already exists, and then instructs Python to run it; it is then deleted) ... or directly via some form of redirection, for example a here-document: #!/bin/bashpython - <<'END_SCRIPT'print("TESTPRINT")END_SCRIPT What this does is running python - which instructs the Python interpreter to read the script from standard input. The shell then sends the text of the Python script (delimited by END_SCRIPT in the shell script) to the Python process' standard input stream. Note that the two bits of code above are subtly different in that the second script's Python process has its standard input connected to the script that it's reading, while the first script's Python process is free to read data other than the script from standard input. This matters if your Python code reads from standard input. Python can also take a set of commands from the command line directly with its -c option: #!/bin/bashpython -c 'print("TESTPRINT")' What you can't do is to "switch to Python" in the middle of a bash script. The commands in a script is executed by bash one after the other, and while a command is executing, the script itself waits for it to terminate (if it's not a background job). This means that your original script would start Python in interactive mode, temporarily suspending the execution of the bash script until the Python process terminates. The script would then try to execute print("TESTPRINT") as a shell command. It's a similar issue with using ssh like this in a script: ssh user@servercd /tmpls (which may possibly be similar to what you say you tried a few years ago). This would not connect to the remote system and run the cd and ls commands there. It would start an interactive shell on the remote system, and once that shell has terminated (giving control back to the script), cd and ls would be run locally. 
Instead, to execute the commands on a remote machine, use ssh user@server "cd /tmp; ls" (This is a lame example, but you may get the point). The below example shows how you may actually do what you propose. It comes with several warning label and caveats though, and you should never ever write code like this (because it's obfuscated and therefore unmaintainable and, dare I say it, downright bad). python -print("TESTPRINT") Running it: $ sh -s <script.shTESTPRINT What happens here is that the script is being run by sh -s . The -s option to sh (and to bash ) tells the shell to execute the shell script arriving over the standard input stream. The script then starts python - , which tells Python to run whatever comes in over the standard input stream. The next thing on that stream, since it's inherited from sh -s by Python (and therefore connected to our script text file), is the Python command print("TESTPRINT") . The Python interpreter would then continue reading and executing commands from the script file until it runs out or executes the Python command exit() . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/533161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364876/"
]
} |
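For the question above about keeping, for every value of field 3, the line with the largest field 5, a common compact idiom is to sort by field 5 in descending numeric order and let awk keep only the first line seen for each field 3. A sketch, using the tab-separated input file from the question:

sort -k5,5nr input | awk '!seen[$3]++'

This avoids the double sort and does not depend on sort -u keeping any particular line among duplicates.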
533,183 | I have created this bash function to recursively find a file. ff() { find . -type f -name '$1'} However, it returns no results. When I execute the command directly on the commandline, I do get results. I am not sure why it behaves differently, as shown below. mattr@kiva-mattr:~/src/protocol$ ff *.somattr@kiva-mattr:~/src/protocol$ find . -type f -name '*.so' ./identity_wallet_service/resources/libindystrgpostgres.so Why does my function not work as expected? I am using MacOS and bash shell. ThnxMatt | To run a set of Python commands from a bash script, you must give the Python interpreter the commands to run, either from a file (Python script) that you create in the script, as in #!/bin/bash -e# Create script as "script.py"cat >script.py <<'END_SCRIPT'print("TESTPRINT")END_SCRIPT# Run script.pypython script.pyrm script.py (this creates a new file called script.py or overwrites that file if it already exists, and then instructs Python to run it; it is then deleted) ... or directly via some form of redirection, for example a here-document: #!/bin/bashpython - <<'END_SCRIPT'print("TESTPRINT")END_SCRIPT What this does is running python - which instructs the Python interpreter to read the script from standard input. The shell then sends the text of the Python script (delimited by END_SCRIPT in the shell script) to the Python process' standard input stream. Note that the two bits of code above are subtly different in that the second script's Python process has its standard input connected to the script that it's reading, while the first script's Python process is free to read data other than the script from standard input. This matters if your Python code reads from standard input. Python can also take a set of commands from the command line directly with its -c option: #!/bin/bashpython -c 'print("TESTPRINT")' What you can't do is to "switch to Python" in the middle of a bash script. The commands in a script is executed by bash one after the other, and while a command is executing, the script itself waits for it to terminate (if it's not a background job). This means that your original script would start Python in interactive mode, temporarily suspending the execution of the bash script until the Python process terminates. The script would then try to execute print("TESTPRINT") as a shell command. It's a similar issue with using ssh like this in a script: ssh user@servercd /tmpls (which may possibly be similar to what you say you tried a few years ago). This would not connect to the remote system and run the cd and ls commands there. It would start an interactive shell on the remote system, and once that shell has terminated (giving control back to the script), cd and ls would be run locally. Instead, to execute the commands on a remote machine, use ssh user@server "cd /tmp; ls" (This is a lame example, but you may get the point). The below example shows how you may actually do what you propose. It comes with several warning label and caveats though, and you should never ever write code like this (because it's obfuscated and therefore unmaintainable and, dare I say it, downright bad). python -print("TESTPRINT") Running it: $ sh -s <script.shTESTPRINT What happens here is that the script is being run by sh -s . The -s option to sh (and to bash ) tells the shell to execute the shell script arriving over the standard input stream. The script then starts python - , which tells Python to run whatever comes in over the standard input stream. 
The next thing on that stream, since it's inherited from sh -s by Python (and therefore connected to our script text file), is the Python command print("TESTPRINT") . The Python interpreter would then continue reading and executing commands from the script file until it runs out or executes the Python command exit() . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/533183",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364914/"
]
} |
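For the ff() question above, two quoting changes make the function behave like the direct command: double-quote the parameter inside the function so it expands, and single-quote the pattern at the call site so the interactive shell does not expand *.so before the function runs. A minimal sketch:

ff() {
  find . -type f -name "$1"
}
ff '*.so'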
533,194 | I'd like to know what is the best way to extract serial number from a SSL certificate formatted in PEM format. After that I'd like to format the certificate in following format hexhex:hexhex:...:hexhex so for example if my serial number of the SSL certificate in hexadecimal is 0123456709AB the output should be 01:23:45:67:09:AB For preference I'd like to acomplish this using openssl with the x509 option using one single line UNIX command | Try: openssl x509 -noout -serial -in cert.pem | cut -d'=' -f2 | sed 's/../&:/g;s/:$//' openssl x509 -noout -serial -in cert.pem will output the serial number of the certificate, but in the format serial=0123456709AB . It is therefore piped to cut -d'=' -f2 which splits the output on the equal sign and outputs the second part - 0123456709AB . That is sent to sed . The first part of the sed command s/../&:/g splits the string every two characters ( .. ) and inserts a colon ( : ). This results in 01:23:45:67:89:AB: (note the colon on the end). The second part of the sed command ( s/:$// ) searches for a colon at the end of the output and replaces it with an empty string, resulting in the desired output. Or for a openssl and sed only answer: openssl x509 -noout -serial -in test2.crt | sed 's/.*=//g;s/../&:/g;s/:$//' The addition of s/.*=//g at the start of the sed command replaces the cut in the first version. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357716/"
]
} |
533,201 | I am using Nemo on an Mint Cinnamon Distribution. I use Nemo to search for files with the shortcut ctrl + f . Typing the file name gives me the search result. But how do I switch the focus to the result panel, so I can cycle through the results and select the appropriate? Even when there is only one result ctrl + o doesn't open the file. F6 also doesn't work. Thanks! | Try: openssl x509 -noout -serial -in cert.pem | cut -d'=' -f2 | sed 's/../&:/g;s/:$//' openssl x509 -noout -serial -in cert.pem will output the serial number of the certificate, but in the format serial=0123456709AB . It is therefore piped to cut -d'=' -f2 which splits the output on the equal sign and outputs the second part - 0123456709AB . That is sent to sed . The first part of the sed command s/../&:/g splits the string every two characters ( .. ) and inserts a colon ( : ). This results in 01:23:45:67:89:AB: (note the colon on the end). The second part of the sed command ( s/:$// ) searches for a colon at the end of the output and replaces it with an empty string, resulting in the desired output. Or for a openssl and sed only answer: openssl x509 -noout -serial -in test2.crt | sed 's/.*=//g;s/../&:/g;s/:$//' The addition of s/.*=//g at the start of the sed command replaces the cut in the first version. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220392/"
]
} |
533,271 | I would like to use git to track changes in crontab . I have initialized a new git repository in /var/spool/cron/crontabs/ Now the problem is, when crontab is saved, the second line of the header changes because it contains timestamp. # DO NOT EDIT THIS FILE - edit the master and reinstall.# (/tmp/crontab.ubNueW/crontab installed on Thu Aug 1 06:29:24 2019) What would be the easiest way to ignore these irrelevant changes ? The possible duplicate question does not address the main point of my question: How to ignore the first 2 irrelevant lines from crontab. Instead, it addresses some other questions which I have not asked, such as some hooks. | You could use a filter: git config filter.dropSecondLine.clean "sed '2d'" Edit/create .git/info/attributes and add: * filter=dropSecondLine If you don't want the filter acting on all the files in the repo, modify the * to match an appropriate pattern or filename. The effect will be the working directory will remain the same, but the repo blobs will not have the second line in the files. So if you pull it down elsewhere the second line would not appear (the result of the sed 'd2'). And if you change the second line of your log file you will be able to add it, but not commit it, as the change to the blob happens on add, at which point it will be the same file as the one in the repo. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/533271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
533,331 | My goal is to get the disks greater than 100G from lsblk. I have it working, but it's awkward. I'm pretty sure it can be shortened. Either by using something totally different than lsblk, or maybe I can filter human readable numbers directly with awk. Here's what I put together: lsblk | grep disk | awk '{print$1,$4}' | grep G | sed 's/.$//' | awk '{if($2>100)print$1}' It outputs only the sdx and nvmexxx part of the disks larger than 100G. Exactly what I need. I am happy with it, but am eager to learn more from you Gurus | You can specify the form of output you want from lsblk : % lsblk -nblo NAME,SIZEmmcblk0 15931539456mmcblk0p1 268435456mmcblk0p2 15662038528 Options used : -b, --bytes Print the SIZE column in bytes rather than in human-readable format.-l, --list Use the list output format.-n, --noheadings Do not print a header line.-o, --output list Specify which output columns to print. Use --help to get a list of all supported columns. Then the filtering is easier: % lsblk -nblo NAME,SIZE | awk '$2 > 4*2^30 {print $1}' # greater than 4 GiBmmcblk0mmcblk0p2 In your case, that'd be 100*2^30 for 100GiB or 100e9 / 1e11 for 100GB. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/533331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358708/"
]
} |
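For the lsblk question above: since the original pipeline also filtered on the disk type, the TYPE column and the --nodeps flag keep partitions out of the result. A sketch assuming GiB units as in the answer above:

lsblk -dnblo NAME,SIZE,TYPE | awk '$3 == "disk" && $2 > 100*2^30 {print $1}'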
533,463 | I am trying to tar the current directory and stream to stdout (ultimately to Amazon S3)...I have this command: tar -cf - . but I get this error: tar: Refusing to write archive contents to terminal (missing -f option?) tar: Error is not recoverable: exiting now from what I can tell -f - means the file is to stdout, although -f /dev/stdout is probably more explicit. does anyone know how to form the command correct? | Like many programs, tar checks to see whether its output is going to a terminal device (tty) and modifies its behavior accordingly. In GNU tar , we can find the relevant code in buffer.c : static voidcheck_tty (enum access_mode mode){ /* Refuse to read archive from and write it to a tty. */ if (strcmp (archive_name_array[0], "-") == 0 && isatty (mode == ACCESS_READ ? STDIN_FILENO : STDOUT_FILENO)) { FATAL_ERROR ((0, 0, mode == ACCESS_READ ? _("Refusing to read archive contents from terminal " "(missing -f option?)") : _("Refusing to write archive contents to terminal " "(missing -f option?)"))); }} You will find that once you connect stdout to something, it will happily write to it: $ tar -cf- .tar: Refusing to write archive contents to terminal (missing -f option?)tar: Error is not recoverable: exiting now whereas $ tar -cf - . | tar -tf -././001.gif./02.gif./1234.gif./34.gif | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/533463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
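For the tar question above, once stdout is a pipe rather than a terminal the archive can be streamed straight to its destination. A hedged sketch of the S3 case, assuming a reasonably recent AWS CLI (which accepts - as a streaming source for aws s3 cp) and an illustrative bucket name:

tar -cf - . | aws s3 cp - s3://my-bucket/archive.tar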
533,512 | TARGET How to delete all lines in a text file before a matching one, including this one? Input file example: applepearbananaHI_THERElemoncoconutorange Desired output: lemoncoconutorange The aim is to do it in sed to use the "-i" option (direct editing). CLEAN SOLUTION ? Most answers for similar problems propose something like: sed -n '/HI_THERE/,$p' input_file But the matched line is not deleted: HI_THERElemoncoconutorange Then, knowing this will delete all from matched line (including it) to end of file: sed '/HI_THERE/,$d' input_file I tried something like this: sed '^,/HI_THERE/d' input_file But then sed complains: sed: -e expression #1, char 1: unknown command: `^' DIRTY SOLUTION The last (dirty) solution is using pipeline: sed -n '/HI_THERE/,$p' input_file | tail -n +2 but then, direct edit of the file doesn't work: sed -n '/HI_THERE/,$p' input_file | tail -n +2 > input_filecat input_file # returns nothing and one must use a temporary file like that... sed -n '/HI_THERE/,$p' input_file | tail -n +2 > tmp_filemv tmp_file input_file | Similar to your "clean solution": sed -e '1,/HI_THERE/d' input_file The first line in the file is line 1 - there's no special ^ address because you always know that, while $ is needed for the end because you don't (necessarily) know which line that is. This does fall over if the matching line is the first line of the file. With GNU sed you can use 0 instead of 1 to deal with that. For POSIX sed and for portability (which seem to be different in this case) it's more complex (see comments below and this follow-up question ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/533512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236725/"
]
} |
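For the delete-up-to-match question above, an awk equivalent sidesteps the first-line corner case entirely, because it only starts printing after the matching line has been seen:

awk 'f; /HI_THERE/ {f=1}' input_file

With GNU awk 4.1 or later, adding -i inplace edits the file directly, similar to sed -i.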
533,563 | On Ubuntu 18.04 I create a RAID 1 array like this: mdadm --create /dev/md/myarray --level=1 --run --raid-devices=2 /dev/sdc /dev/sdd I then add the output of mdadm --detail --scan /dev/md/myarray to /etc/mdadm/mdadm.conf. It looks like this: ARRAY /dev/md/myarray metadata=1.2 name=MYHOSTNAME:myarray UUID=... The device name has been prefix with "MYHOSTNAME:". At this point the symlink /dev/md/myarray still exists, but after the first time I reboot it becomes /dev/md/MYHOSTNAME:myarray , breaking things. To make it worse, this happens only on some machines - on others the symlink remains /dev/md/myarray . All are running Ubuntu 18.04, so I have no idea why. How do I get a consistent device path for my MD device, ideally the exact one I specified ("/dev/md/myarray")? I tried editing mdadm.conf to remove the hostname, but even if the line says ARRAY /dev/md/myarray metadata=1.2 name=myarray UUID=... the symlink still changes on reboot - on machines that "want" the hostname. I also tried going the other way and adding the hostname in both place: ARRAY /dev/md/HOSTNAME:myarray metadata=1.2 name=HOSTNAME:myarray UUID=... but again on machines that "don't want" the hostname the symlink becomes /dev/md/myarray after a reboot! I can't use the numeric device (/dev/md127) either because when there are multiple MD devices created like this they tend to alternate between md126 and md127 as well! This is crazy! | How do I get a consistent device path for my MD device, ideally the exact one I specified ("/dev/md/myarray")? After mdadm --create /dev/md/foobar ... , both hostname and name are stored in the mdadm metadata, as you should verify with mdadm --examine or mdadm --detail : # mdadm --detail /dev/md/foobar Name : ALU:foobar (local to host ALU) ALU happens to be the hostname of my ArchLinux machine: # hostnameALU You can specify the host that should be stored at create time: # mdadm --create /dev/md/foobar --homehost=barfoo# mdadm --detail /dev/md/foobar Name : barfoo:foobar ...but usually nobody remembers to do that. And that's already where the problems start... you might have created your RAID array from some LiveCD or other, and the hostname in that environment didn't match your main install at all. And then the metadata stores some completely unrelated hostname. Similarly if you set everything up correctly, but then encounter problems with your RAID and boot a rescue system to check things out, yet again there's a mismatch with the hostnames. Or the other way around, the hostname may match even if it's the wrong machine - if you used the same hostname for two independent systems and then migrate drives. Then the alien arrays take over the names of the original ones... Now, the metadata can also be changed later using mdadm --assemble --update=homehost or --update=name , that is one way to deal with problem. It should be set correctly but it's difficult to change as (for some reason) short of hexediting metadata directly, it can only be done at assembly time. Another way is to ignore the systems hostname and instead specify --homehost on assembly or set HOMEHOST in mdadm.conf . This is described in some detail in the mdadm.conf manpage. HOMEHOST The homehost line gives a default value for the --homehost= option to mdadm. There should normally be only one other word on the line. It should either be a host name, or one of the special words <system> , <none> and <ignore> . If <system> is given, then the gethostname(2) systemcall is used to get the host name. This is the default. [...] 
When arrays are created, this host name will be stored in the metadata. When arrays are assembled using auto-assembly, arrays which do not record the correct homehost name in their metadata will be assembled using a "foreign" name. A "foreign" name alway ends with a digit string preceded by an underscore to differentiate it from any possible local name. e.g. /dev/md/1_1 or /dev/md/home_0. So you can try to set HOMEHOST ALU (in my case), or the more generic HOMEHOST <ignore> (or HOMEHOST <none> ) in the mdadm.conf . But it will only work when that mdadm.conf is present. And again if you set ignore and then hook up an array from another machine, you might run into name conflicts. So it'd be best to set the hostname correctly in metadata and mdadm.conf and not ignore it, and better yet set the actual hostname in initramfs before assembly but it can be hard to put into practice. My personal preference is to just stick to the classic numeric style. Identify by UUID and nothing else: ARRAY /dev/md1 UUID=8fe790ca:f3fa3388:4ae125b6:2c3a5d44ARRAY /dev/md2 UUID=f14bef5b:a5356e51:25fde128:09983091ARRAY /dev/md3 UUID=0639c68d:4c844bb1:5c02b33e:00ab4a93 This is also consistent (but also depends on it to have been created this way and/or set accordingly in the metadata, otherwise you also might have to --update it). And alien arrays that don't match the given UUIDs should end up as /dev/md127+ . At the end of the day no matter what you do, you should not blindly rely on /dev/mdX or /dev/md/name s the same way you don't blindly rely on /dev/sdX letters. Always use filesystem UUIDs to identify whatever is on those arrays. There's too many corner cases where names might unexpectedly change, so at best, this can be an orientation help or hint to the sysadmin, it's not the answer to all problems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/533563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107961/"
]
} |
533,580 | What is the pacman option to search for a package that owns a file? Like dpkg -S in Debian-based distros. | It is pacman -Qo <filename> . Example % pacman -Qo x86_64-pc-linux-gnu-pkg-config/usr/bin/x86_64-pc-linux-gnu-pkg-config is owned by pkgconf 1.6.3-1 From pacman(8) : Query Options (apply to -Q ) -o, --owns <file> Search for packages that own the specified file(s). The path can be relative or absolute, and one or more files can be specified. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178265/"
]
} |
533,585 | I'm wondering if anybody could suggest a way to duplicate the contents of a file, in such a way that the duplicates have been altered? For example, file pre duplication: Nempar|EIJ87098.1 Ecanceri|ORD93056.1 File post duplication: Nempar|EIJ87098.1 Ecanceri1|ORD93056.1 Nempardup|EIJ87098.1 Ecanceridup|ORD93056.1 I don't need the alteration to be in any particular location or any particular character. Just that it marks the duplicates. Currently, I'm simply using: cat file.txt file.txt > file.dup.txt Is there any way I can just add on top of this, or is cat too simple? | It is pacman -Qo <filename> . Example % pacman -Qo x86_64-pc-linux-gnu-pkg-config/usr/bin/x86_64-pc-linux-gnu-pkg-config is owned by pkgconf 1.6.3-1 From pacman(8) : Query Options (apply to -Q ) -o, --owns <file> Search for packages that own the specified file(s). The path can be relative or absolute, and one or more files can be specified. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365277/"
]
} |
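For the duplication question above (copy every record and mark the copies), one sketch keeps the plain cat for the original block and appends a sed-modified copy that tags the first field before the | separator:

{ cat file.txt; sed 's/|/dup|/' file.txt; } > file.dup.txt

With the sample input this turns Nempar|EIJ87098.1 into Nempardup|EIJ87098.1 in the appended half, while the first half stays untouched.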
533,739 | Suppose a program asks for some memory, but there is not enough free memory left. There are several different ways Linux could respond. One response is to select some other used memory, which has not been accessed recently, and move this inactive memory to swap. However, I see many articles and comments that go beyond this. They say even when there is a large amount of free memory, Linux will sometimes decide to write inactive memory to swap. Writing to swap in advance means that when we eventually want to use this memory, we do not have to wait for a disk write. They say this is a deliberate strategy to optimize performance. Are they right? Or is it a myth? Cite your source(s). Please understand this question using the following definitions: swap free memory - the "free" memory displayed by the free command. This is the MemFree value from /proc/meminfo . /proc/meminfo is a virtual text file provided by the kernel. See proc(5) , or RHEL docs . even when there is a large amount of free memory - for the purpose of argument, imagine there is more than 10% free memory. References Here are some search terms: linux "opportunistic swapping" OR (swap "when the system has nothing better to do" OR "when it has nothing better to do" OR "when the system is idle" OR "during idle time") In the second-highest result on Google, a StackExchange user asks "Why use swap when there is more than enough free space in RAM?", and copies the results of the free command showing about 20% free memory. In response to this specific question, I see this answer is highly voted: Linux starts swapping before the RAM is filled up. This is done to improve performance and responsiveness: Performance is increased because sometimes RAM is better used for disk cache than to store program memory. So it's better to swap out a program that's been inactive for a while, and instead keep often-used files in cache. Responsiveness is improved by swapping pages out when the system is idle, rather than when the memory is full and some program is running and requesting more RAM to complete a task. Swapping does slow the system down, of course — but the alternative to swapping isn't not swapping, it's having more RAM or using less RAM. The first result on Google has been marked as a duplicate of the question above :-). In this case, the asker copied details showing 7GB MemFree , out of 16GB. The question has an accepted and upvoted answer of its own: Swapping only when there is no free memory is only the case if you set swappiness to 0. Otherwise, during idle time, the kernel will swap memory. In doing this the data is not removed from memory, but rather a copy is made in the swap partition. This means that, should the situation arise that memory is depleted, it does not have to write to disk then and there. In this case the kernel can just overwrite the memory pages which have already been swapped, for which it knows that it has a copy of the data. The swappiness parameter basically just controls how much it does this. The other quote does not explicitly claim the swapped data is retained in memory as well. But it seems like you would prefer that approach, if you are swapping even at times when you have 20% free memory, and the reason you are doing so is to improve performance. As far as I know, Linux does support keeping a copy of the same data in both main memory and swap space. I also noticed the common claim that "opportunistic swapping" happens "during idle time". 
I understand it's supposed to help reassure me that this feature is generally good for performance. I don't include this in my definition above, because I think it already has enough details to make a nice clear question. I don't want to make this more complicated than it needs to be. Original motivation atop shows `swout` (swapping) when I have gigabytes of free memory. Why? There are a couple of reports like this, of Linux writing to swap when there is plenty of free memory. "Opportunistic swapping" might explain these reports. At the same time, at least one alternative cause was suggested. As a first step in looking at possible causes: Does Linux ever perform "opportunistic swapping" as defined above? In the example I reported, the question has now been answered. The cause was not opportunistic swapping. | Linux does not do "opportunistic swapping" as defined in this question. The following primary references do not mention the concept at all: Understanding the Linux Virtual Memory Manager . An online book by Mel Gorman. Written in 2003, just before the release of Linux 2.6.0. Documentation/admin-guide/sysctl/vm.rst . This is the primary documentation of the tunable settings of Linux virtual memory management. More specifically: 10.6 Pageout Daemon (kswapd) Historically kswapd used to wake up every 10 seconds but now it is only woken by the physical page allocator when the pages_low number of free pages in a zone is reached. [...] Under extreme memory pressure, processes will do the work of kswapd synchronously. [...] kswapd keeps freeing pages until the pages_high watermark is reached. Based on the above, we would not expect any swapping when the number of free pages is higher than the "high watermark". Secondly, this tells us the purpose of kswapd is to make more free pages. When kswapd writes a memory page to swap, it immediately frees the memory page. kswapd does not keep a copy of the swapped page in memory . Linux 2.6 uses the " rmap " to free the page. In Linux 2.4, the story was more complex. When a page was shared by multiple processes, kswapd was not able to free it immediately. This is ancient history. All of the linked posts are about Linux 2.6 or above. swappiness This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. This quote describes a special case: if you configure the swappiness value to be 0 . In this case, we should additionally not expect any swapping until the number of cache pages has fallen to the high watermark. In other words, the kernel will try to discard almost all file cache before it starts swapping. (This might cause massive slowdowns. You need to have some file cache! The file cache is used to hold the code of all your running programs :-) What are the watermarks? The above quotes raise the question: How large are the "watermark" memory reservations on my system? Answer: on a "small" system, the default zone watermarks might be as high as 3% of memory. This is due to the calculation of the "min" watermark. On larger systems the watermarks will be a smaller proportion, approaching 0.3% of memory. So if the question is about a system with more than 10% free memory, the exact details of this watermark logic are not significant. 
The watermarks for each individual "zone" are shown in /proc/zoneinfo , as documented in proc(5) . An extract from my zoneinfo: Node 0, zone DMA32 pages free 304988 min 7250 low 9062 high 10874 spanned 1044480 present 888973 managed 872457 protection: (0, 0, 4424, 4424, 4424)...Node 0, zone Normal pages free 11977 min 9611 low 12013 high 14415 spanned 1173504 present 1173504 managed 1134236 protection: (0, 0, 0, 0, 0) The current "watermarks" are min , low , and high . If a program ever asks for enough memory to reduce free below min , the program enters "direct reclaim". The program is made to wait while the kernel frees up memory. We want to avoid direct reclaim if possible. So if free would dip below the low watermark, the kernel wakes kswapd . kswapd frees memory by swapping and/or dropping caches, until free is above high again. Additional qualification: kswapd will also run to protect the full lowmem_reserve amount, for kernel lowmem and DMA usage. The default lowmem_reserve is about 1/256 of the first 4GiB of RAM (DMA32 zone), so it is usually around 16MiB. Linux code commits mm: scale kswapd watermarks in proportion to memory [...] watermark_scale_factor: This factor controls the aggressiveness of kswapd. It defines the amount of memory left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep. The unit is in fractions of 10,000. The default value of 10 means the distances between watermarks are 0.1% of the available memory in the node/system. The maximum value is 1000, or 10% of memory. A high rate of threads entering direct reclaim (allocstall) or kswapd going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate that the number of free pages kswapd maintains for latency reasons is too small for the allocation bursts occurring in the system. This knob can then be used to tune kswapd aggressiveness accordingly. proc: meminfo: estimate available memory more conservatively The MemAvailable item in /proc/meminfo is to give users a hint of how much memory is allocatable without causing swapping, so it excludes the zones' low watermarks as unavailable to userspace. However, for a userspace allocation, kswapd will actually reclaim until the free pages hit a combination of the high watermark and the page allocator's lowmem protection that keeps a certain amount of DMA and DMA32 memory from userspace as well. Subtract the full amount we know to be unavailable to userspace from the number of free pages when calculating MemAvailable. Linux code It is sometimes claimed that changing swappiness to 0 will effectively disable "opportunistic swapping". This provides an interesting avenue of investigation. If there is something called "opportunistic swapping", and it can be tuned by swappiness, then we could chase it down by finding all the call-chains that read vm_swappiness . Note we can reduce our search space by assuming CONFIG_MEMCG is not set (i.e. "memory cgroups" are disabled). The call chain goes: vm_swappiness mem_cgroup_swappiness get_scan_count shrink_node_memcg shrink_node shrink_node_memcg is commented "This is a basic per-node page freer. Used by both kswapd and direct reclaim". I.e. this function increases the number of free pages. It is not trying to duplicate pages to swap so they can be freed at a much later time. But even if we discount that: The above chain is called from three different functions, shown below. As expected, we can divide the call-sites into direct reclaim v.s. kswapd. 
It would not make sense to perform "opportunistic swapping" in direct reclaim. /* * This is the direct reclaim path, for page-allocating processes. We only * try to reclaim pages from zones which will satisfy the caller's allocation * request. * * If a zone is deemed to be full of pinned pages then just give it a light * scan then give up on it. */static void shrink_zones * kswapd shrinks a node of pages that are at or below the highest usable * zone that is currently unbalanced. * * Returns true if kswapd scanned at least the requested number of pages to * reclaim or if the lack of progress was due to pages under writeback. * This is used to determine if the scanning priority needs to be raised. */static bool kswapd_shrink_node * For kswapd, balance_pgdat() will reclaim pages across a node from zones * that are eligible for use by the caller until at least one zone is * balanced. * * Returns the order kswapd finished reclaiming at. * * kswapd scans the zones in the highmem->normal->dma direction. It skips * zones which have free_pages > high_wmark_pages(zone), but once a zone is * found to have free_pages <= high_wmark_pages(zone), any page in that zone * or lower is eligible for reclaim until at least one usable zone is * balanced. */static int balance_pgdat So, presumably the claim is that kswapd is woken up somehow, even when all memory allocations are being satisfied immediately from free memory. I looked through the uses of wake_up_interruptible(&pgdat->kswapd_wait) , and I am not seeing any wakeups like this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29483/"
]
} |
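The tunables and watermarks discussed in the answer above can be inspected from userspace; a read-only sketch (watermark_scale_factor exists on kernels 4.6 and later):

sysctl vm.swappiness vm.watermark_scale_factor
awk '/^Node/ || $1 ~ /^(min|low|high)$/' /proc/zoneinfo

The awk line prints each zone header together with its min, low and high watermark counts (in pages).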
533,740 | I am running Linux mint and since some time, my cinnamon settings apps no longer work [e.g. Display, Backgrounds, etc]. Running it in a terminal I found the following issue. $ cinnamon-settingsTraceback (most recent call last): File "/usr/share/cinnamon/cinnamon-settings/cinnamon-settings.py", line 724, in <module>window = MainWindow() File "/usr/share/cinnamon/cinnamon-settings/cinnamon-settings.py", line 305, in __init__for module in modules: File "/usr/share/cinnamon/cinnamon-settings/modules/cs_applets.py", line 4, in <module> from ExtensionCore import ManageSpicesPage, DownloadSpicesPage File "/usr/share/cinnamon/cinnamon-settings/bin/ExtensionCore.py", line 19, in <module> from Spices import Spice_Harvester, ThreadedTaskManager File "/usr/share/cinnamon/cinnamon-settings/bin/Spices.py", line 23, in <module>from http.client import HTTPSConnectionImportError: cannot import name 'HTTPSConnection' I have tried reinstalling python, but that did not change anything. I read cinnamon now uses Python 3, and it seems the code could be run by the default python2 version. However, not clear how to fix this? | Linux does not do "opportunistic swapping" as defined in this question. The following primary references do not mention the concept at all: Understanding the Linux Virtual Memory Manager . An online book by Mel Gorman. Written in 2003, just before the release of Linux 2.6.0. Documentation/admin-guide/sysctl/vm.rst . This is the primary documentation of the tunable settings of Linux virtual memory management. More specifically: 10.6 Pageout Daemon (kswapd) Historically kswapd used to wake up every 10 seconds but now it is only woken by the physical page allocator when the pages_low number of free pages in a zone is reached. [...] Under extreme memory pressure, processes will do the work of kswapd synchronously. [...] kswapd keeps freeing pages until the pages_high watermark is reached. Based on the above, we would not expect any swapping when the number of free pages is higher than the "high watermark". Secondly, this tells us the purpose of kswapd is to make more free pages. When kswapd writes a memory page to swap, it immediately frees the memory page. kswapd does not keep a copy of the swapped page in memory . Linux 2.6 uses the " rmap " to free the page. In Linux 2.4, the story was more complex. When a page was shared by multiple processes, kswapd was not able to free it immediately. This is ancient history. All of the linked posts are about Linux 2.6 or above. swappiness This control is used to define how aggressive the kernel will swap memory pages. Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. This quote describes a special case: if you configure the swappiness value to be 0 . In this case, we should additionally not expect any swapping until the number of cache pages has fallen to the high watermark. In other words, the kernel will try to discard almost all file cache before it starts swapping. (This might cause massive slowdowns. You need to have some file cache! The file cache is used to hold the code of all your running programs :-) What are the watermarks? The above quotes raise the question: How large are the "watermark" memory reservations on my system? Answer: on a "small" system, the default zone watermarks might be as high as 3% of memory. 
This is due to the calculation of the "min" watermark. On larger systems the watermarks will be a smaller proportion, approaching 0.3% of memory. So if the question is about a system with more than 10% free memory, the exact details of this watermark logic are not significant. The watermarks for each individual "zone" are shown in /proc/zoneinfo , as documented in proc(5) . An extract from my zoneinfo: Node 0, zone DMA32 pages free 304988 min 7250 low 9062 high 10874 spanned 1044480 present 888973 managed 872457 protection: (0, 0, 4424, 4424, 4424)...Node 0, zone Normal pages free 11977 min 9611 low 12013 high 14415 spanned 1173504 present 1173504 managed 1134236 protection: (0, 0, 0, 0, 0) The current "watermarks" are min , low , and high . If a program ever asks for enough memory to reduce free below min , the program enters "direct reclaim". The program is made to wait while the kernel frees up memory. We want to avoid direct reclaim if possible. So if free would dip below the low watermark, the kernel wakes kswapd . kswapd frees memory by swapping and/or dropping caches, until free is above high again. Additional qualification: kswapd will also run to protect the full lowmem_reserve amount, for kernel lowmem and DMA usage. The default lowmem_reserve is about 1/256 of the first 4GiB of RAM (DMA32 zone), so it is usually around 16MiB. Linux code commits mm: scale kswapd watermarks in proportion to memory [...] watermark_scale_factor: This factor controls the aggressiveness of kswapd. It defines the amount of memory left in a node/system before kswapd is woken up and how much memory needs to be free before kswapd goes back to sleep. The unit is in fractions of 10,000. The default value of 10 means the distances between watermarks are 0.1% of the available memory in the node/system. The maximum value is 1000, or 10% of memory. A high rate of threads entering direct reclaim (allocstall) or kswapd going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate that the number of free pages kswapd maintains for latency reasons is too small for the allocation bursts occurring in the system. This knob can then be used to tune kswapd aggressiveness accordingly. proc: meminfo: estimate available memory more conservatively The MemAvailable item in /proc/meminfo is to give users a hint of how much memory is allocatable without causing swapping, so it excludes the zones' low watermarks as unavailable to userspace. However, for a userspace allocation, kswapd will actually reclaim until the free pages hit a combination of the high watermark and the page allocator's lowmem protection that keeps a certain amount of DMA and DMA32 memory from userspace as well. Subtract the full amount we know to be unavailable to userspace from the number of free pages when calculating MemAvailable. Linux code It is sometimes claimed that changing swappiness to 0 will effectively disable "opportunistic swapping". This provides an interesting avenue of investigation. If there is something called "opportunistic swapping", and it can be tuned by swappiness, then we could chase it down by finding all the call-chains that read vm_swappiness . Note we can reduce our search space by assuming CONFIG_MEMCG is not set (i.e. "memory cgroups" are disabled). The call chain goes: vm_swappiness mem_cgroup_swappiness get_scan_count shrink_node_memcg shrink_node shrink_node_memcg is commented "This is a basic per-node page freer. Used by both kswapd and direct reclaim". I.e. this function increases the number of free pages. 
It is not trying to duplicate pages to swap so they can be freed at a much later time. But even if we discount that: The above chain is called from three different functions, shown below. As expected, we can divide the call-sites into direct reclaim v.s. kswapd. It would not make sense to perform "opportunistic swapping" in direct reclaim. /* * This is the direct reclaim path, for page-allocating processes. We only * try to reclaim pages from zones which will satisfy the caller's allocation * request. * * If a zone is deemed to be full of pinned pages then just give it a light * scan then give up on it. */static void shrink_zones * kswapd shrinks a node of pages that are at or below the highest usable * zone that is currently unbalanced. * * Returns true if kswapd scanned at least the requested number of pages to * reclaim or if the lack of progress was due to pages under writeback. * This is used to determine if the scanning priority needs to be raised. */static bool kswapd_shrink_node * For kswapd, balance_pgdat() will reclaim pages across a node from zones * that are eligible for use by the caller until at least one zone is * balanced. * * Returns the order kswapd finished reclaiming at. * * kswapd scans the zones in the highmem->normal->dma direction. It skips * zones which have free_pages > high_wmark_pages(zone), but once a zone is * found to have free_pages <= high_wmark_pages(zone), any page in that zone * or lower is eligible for reclaim until at least one usable zone is * balanced. */static int balance_pgdat So, presumably the claim is that kswapd is woken up somehow, even when all memory allocations are being satisfied immediately from free memory. I looked through the uses of wake_up_interruptible(&pgdat->kswapd_wait) , and I am not seeing any wakeups like this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365408/"
]
} |
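As a rough editorial illustration of the watermark figures discussed in the answer above: the per-zone values in /proc/zoneinfo can be totalled with a short awk one-liner. This is only a sketch — it assumes the common 4 KiB page size and simply sums the zone "high" watermark lines:

    awk '$1 == "high" { pages += $2 }
         END { printf "total high watermark: %.1f MiB\n", pages * 4096 / 1048576 }' /proc/zoneinfo

Comparing that total with MemFree in /proc/meminfo gives a feel for how close the system is to waking kswapd.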
533,784 | I have the following strings in a very large document: 1.test.html#2.test.md#3.http://test.html#4.https://test.md#5.http://test.md#6.test2.md# Now I want to replace every .md# with .html# but ONLY if there is no http in the string. So only 2 and 6 should have a replacement. How can I do this in a shell script? | With GNU sed. If current line (pattern space) contains http jump to end of script ( b ). Otherwise do search and replace. sed '/http/b; s/\.md#/.html#/' file Output: 1.test.html#2.test.html#3.http://test.html#4.https://test.md#5.http://test.md#6.test2.html# If you want to edit your file "in place" use sed's option -i . See: man sed | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/533784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365442/"
]
} |
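For comparison only (not part of the accepted answer), the same skip-lines-containing-http logic can be written in awk; redirect to a new file, or use GNU awk's -i inplace where available:

    awk '!/http/ { gsub(/\.md#/, ".html#") } { print }' file > file.new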
533,839 | I'm trying to compile a bunch of PDFs into a single document. ls shows the files are in the directory that I'm in and they're readable ( -rw-r--r-- ). But when I try to run pdftk 2017.pdf cat output test.pdf I get an error: Error: Unexpected Exception in open_reader()java.io.FileNotFoundException: 2017.pdf (Permission denied) at gnu.java.nio.channels.FileChannelImpl.open(libgcj.so.16) at gnu.java.nio.channels.FileChannelImpl.<init>(libgcj.so.16) at gnu.java.nio.channels.FileChannelImpl.create(libgcj.so.16) at java.io.RandomAccessFile.<init>(libgcj.so.16) at pdftk.com.lowagie.text.pdf.RandomAccessFileOrArray.<init>(pdftk) at pdftk.com.lowagie.text.pdf.PRTokeniser.<init>(pdftk) at pdftk.com.lowagie.text.pdf.PdfReader.<init>(pdftk) at pdftk.com.lowagie.text.pdf.PdfReader.<init>(pdftk)Error: Failed to open PDF file: 2017.pdfErrors encountered. No output created.Done. Input errors, so no output created. If I add more files to that operation I just get the error for each of them. I can rename the PDFs from the command line mv 2017.pdf foo.pdf and I get the same error. Error: Unexpected Exception in open_reader()java.io.FileNotFoundException: foo.pdf (Permission denied) If I try to call a non-existent file, eg. pdftk 123.pdf cat output test.pdf I get a different error: Error: Unable to find file.Error: Failed to open PDF file: 123.pdfErrors encountered. No output created.Done. Input errors, so no output created. Even tail 2017.pdf shows the last few lines of 2017.pdf: <</Info 63 0 R/ID [<cc59759cedaf07420bbe3250ba5d8971><f259ad128310d106c7aa80b673c4bd70>]/Root 62 0 R/Size 64>>startxref42883%%EOF If I can see the file and read it with tail , why would pdftk not be able to read it? | TL;DR Snaps access right management appears to be the source of the issue . To solve this, you can either: Do your work from your $HOME folder. Note that symlinks will not work. Install pdftk from another source than the one of your distribution. For instance, ppa:malteworld/ppa has version 3.0.0 of pdftk-java . Original reply I am having the same issue. I was doing it from a folder on a USB drive. And indeed, doing this from a subfolder of my home directory works. That puzzled me as I tried to do it from a subfolder under /tmp and it did not work neither (with a different error, less verbose, "Failed to open PDF file").Same if I try from a subfolder on a secondary disk mounted under /mnt. I suspect it could be related to limitation with snaps (I am on an up-to-date Ubuntu 18.04.3). But I have very little experience dealing with snaps, so I cannot explore further. If so, that would be quite broken as that prevent Ubuntu users from using pdftk from anywhere else than their home folder. Eg. a USB drive, a extra disk, a shared network drive. (sorry I could not reply as comment, not enough reputation...) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141494/"
]
} |
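To confirm whether the pdftk on a given machine is the confined snap build suspected above, something like the following can help (snap list only exists where snapd is installed; the /snap/bin path is typical but not guaranteed):

    command -v pdftk                      # often /snap/bin/pdftk for the snap build
    snap list 2>/dev/null | grep -i pdftk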
533,846 | I'm running Debian 10, after very recently switching from Ubuntu. I'm trying to install Meson, but coming up with some errors. What am I doing wrong? After cloning Meson, python3 setup.py install gives me this error: Traceback (most recent call last): File "setup.py", line 24, in <module> from setuptools import setupModuleNotFoundError: No module named 'setuptools' Installing setuptools: sudo apt-get install python3-setuptoolsReading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: python3-setuptools : Depends: python3-pkg-resources (= 33.1.1-1) but 40.8.0-1 is to be installedE: Unable to correct problems, you have held broken packages. Tried fixing missing packages sudo apt-get update --fix-missingIgn:1 http://ftp.debian.org/debian stretch InReleaseHit:2 http://ftp.debian.org/debian stretch Release Tried installing python3-pkg-resources $ sudo apt install python3-pkg-resourcesReading package lists... DoneBuilding dependency tree Reading state information... Donepython3-pkg-resources is already the newest version (40.8.0-1).python3-pkg-resources set to manually installed.0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. At this point I'm completely clueless. It's clear I'm not understanding something important. Any help would be greatly appreciated. --Edit:After trying pip3 install meson I received a traceback error. $ pip3 install mesonTraceback (most recent call last): File "/usr/bin/pip3", line 9, in <module> from pip import main File "/usr/lib/python3/dist-packages/pip/__init__.py", line 26, in <module> from pip.utils import get_installed_distributions, get_prog File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 23, in <module> from pip.locations import ( File "/usr/lib/python3/dist-packages/pip/locations.py", line 9, in <module> from distutils import sysconfigImportError: cannot import name 'sysconfig' from 'distutils' (/usr/lib/python3.7/distutils/__init__.py) | TL;DR Snaps access right management appears to be the source of the issue . To solve this, you can either: Do your work from your $HOME folder. Note that symlinks will not work. Install pdftk from another source than the one of your distribution. For instance, ppa:malteworld/ppa has version 3.0.0 of pdftk-java . Original reply I am having the same issue. I was doing it from a folder on a USB drive. And indeed, doing this from a subfolder of my home directory works. That puzzled me as I tried to do it from a subfolder under /tmp and it did not work neither (with a different error, less verbose, "Failed to open PDF file").Same if I try from a subfolder on a secondary disk mounted under /mnt. I suspect it could be related to limitation with snaps (I am on an up-to-date Ubuntu 18.04.3). But I have very little experience dealing with snaps, so I cannot explore further. If so, that would be quite broken as that prevent Ubuntu users from using pdftk from anywhere else than their home folder. Eg. a USB drive, a extra disk, a shared network drive. (sorry I could not reply as comment, not enough reputation...) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/339819/"
]
} |
533,886 | I would like to install a command line tool within a Docker image in order to quickly convert *html files into *pdf files. I am surprised there is not a Unix tool to do something like this. | pandoc is a great command-line tool for file format conversion. The disadvantage is for PDF output, you’ll need LaTeX.The usage is pandoc test.html -t latex -o test.pdf If you don't have LaTeX installed, then I recommend htmldoc . Cited from Creating a PDF By default, pandoc will use LaTeX to create the PDF, which requires that a LaTeX engine be installed. Alternatively, pandoc can use ConTeXt, pdfroff, or any of the following HTML/CSS-to-PDF-engines, to create a PDF: wkhtmltopdf, weasyprint or prince. To do this, specify an output file with a .pdf extension, as before, but add the --pdf-engine option or -t context, -t html, or -t ms to the command line (-t html defaults to --pdf-engine=wkhtmltopdf). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/268073/"
]
} |
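If LaTeX is not installed, one of the alternative engines from the quoted documentation can be selected explicitly. Two hedged examples, each assuming the respective tool is installed:

    pandoc test.html -o test.pdf --pdf-engine=wkhtmltopdf
    htmldoc --webpage -f test.pdf test.html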
533,889 | Within a Qt5 application I have a bash script which runs to check version numbers from certain files on other remote machines (that I ssh into). I have over 100 machines that I can run this script on. If the machine I have sshed into has the file I am looking for the script output is nicely displayed, but if the file on the remote machine does not exist then my 2 lines join together. e.g Should look like this: Operating System: 1.5.64 sw_install: 1.16Kate 1.1 but if remote files don't exist I get Operating System: sw_installKate: 1.1 Any ideas to get the lines to be separate if the remote files don't exist (if does happen). I don't just want to put an 'echo' line in between the 2 ssh commands (or remove the -n) as the output is not the desired look when the files do exist. Hoping there is a really simple answer out there please. Thank you very much for your help!! echo -n "Operating System: "ssh -t -o LogLevel=QUIET -o '''StrictHostKeyChecking no''' $NODENAME "cat /home/user/Version.txt"echo -n "sw_intall: "ssh -t -o LogLevel=QUIET -o '''StrictHostKeyChecking no''' $NODENAME "grep VERSION= /home/user/sw_install | cut -d'=' -f2 | tr -d '\"' | head -1"ssh -t -o LogLevel=QUIET -o '''StrictHostKeyChecking no''' $NODENAME "rpm -qv kate --qf \" Kate: %{VERSION}.%{RELEASE}\"" Centos 7.2 | pandoc is a great command-line tool for file format conversion. The disadvantage is for PDF output, you’ll need LaTeX.The usage is pandoc test.html -t latex -o test.pdf If you don't have LaTeX installed, then I recommend htmldoc . Cited from Creating a PDF By default, pandoc will use LaTeX to create the PDF, which requires that a LaTeX engine be installed. Alternatively, pandoc can use ConTeXt, pdfroff, or any of the following HTML/CSS-to-PDF-engines, to create a PDF: wkhtmltopdf, weasyprint or prince. To do this, specify an output file with a .pdf extension, as before, but add the --pdf-engine option or -t context, -t html, or -t ms to the command line (-t html defaults to --pdf-engine=wkhtmltopdf). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365526/"
]
} |
533,933 | (Dist: Debian 10) I have a reoccurring error message that mainly pops up when using systemctl (also when installing a package, and occasionally in a few other places that escape me), Unit -.mount is masked. Sometimes (depending on what command called the error message) it is more verbose, such as Error: GDBus.Error:org.freedesktop.systemd1.UnitMasked: Unit -.mount is masked. This error doesn't impede installing packages or any systemd services which are enabled already (and as such are loaded at boot), but using systemctl or service to restart, start or stop a service fails. This means I have to reboot the whole server to restart a service, which can be a little annoying. Trying to unmask the root mount with systemctl unmask -- -.mount appears to work (nothing is returned), but systemctl status -- -.mount still outputs the following after: ● -.mount - Root Mount Loaded: masked (Reason: Unit -.mount is masked.) Active: active (mounted) since Mon 2019-08-05 15:03:38 AEST; 4h 8min ago Where: / What: /dev/sde1 Tasks: 0 (limit: 4915) Memory: 0B CGroup: /system.slice/-.mount Any ideas? I'm don't want to start from a fresh install for this server, so either I find a fix or just deal with having to restart if I need to reload a service. | I was getting the same while performing step 6 in this answer: https://askubuntu.com/a/1028709/1003629 . By trial and error I found this was no longer an issue if I closed GParted. Edit after I got three upvotes: it would appear gparted locks something, perhaps access to the partition table or a file that holds it, it would be great if someone can edit my answer to clarify this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365564/"
]
} |
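If systemctl unmask appears to succeed but the unit still reports masked, it may be worth looking for a stray mask symlink; a masked unit is normally a symlink to /dev/null under /etc/systemd/system or, for runtime masks, /run/systemd/system. A troubleshooting sketch only:

    ls -l /etc/systemd/system/-.mount /run/systemd/system/-.mount 2>/dev/null
    systemctl is-enabled -- -.mount      # should report masked or masked-runtime if a mask remains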
533,955 | I'm using a MacOS. I tried to enter this environment variable inside my .bash_profile : export CLOUD_PASSWORD=Pass4Aa$ditya But when I do source .bash_profile and try echo $CLOUD_PASSWORD , I get this as output: Pass4Aa Anything after that $ sign is getting ignored. Even when I tried adding quotes like: export CLOUD_PASSWORD="Pass4Aa$ditya" and did source later, it is still showing the same as before. How do I create environment variables with Special Characters like $ and @ present in the value? | $ export CLOUD_PASSWORD='Pass4Aa$ditya'$ printf '%s\n' "$CLOUD_PASSWORD"Pass4Aa$ditya$ export CLOUD_PASSWORD="Pass4Aa\$ditya"$ printf '%s\n' "$CLOUD_PASSWORD"Pass4Aa$ditya single quotes or escaping with backslashes :) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/533955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365589/"
]
} |
534,000 | When I use sudo to do some activities with files, these files change ownership.How can I use commands with sudo without changing owner of the files? Example file archivos35.sh is from apache but I use sed (with usr admin sudo) $ ls -l-rwxr-xrw-. 1 apache apache 181 Aug 5 11:56 archivos35.sh User admin with sudo --- sudo sed -i s/old/new/g archivos35.sh But doing that command with sudo changes the owner of the file $ ls -l-rwxr-xrw-. 1 admin apache 181 Aug 5 11:56 archivos35.sh How can I avoid using the command with sudo to change the owner of the file?I just want to make changes to the file without modifying its owner. | If you need to use sudo to modify the file, then use it to switch to the right user. You don't need to switch to root, that's just the default. So, in your case, you'd want to do: sudo -iu apache sed -i 's/old/new/g' archivos35.sh That will run the sed command as the user apache . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/534000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365631/"
]
} |
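A small verification sketch (GNU stat syntax; the path is a placeholder). Note that sudo -i starts a login shell and changes to the target user's home directory, so with a relative filename it may be safer to drop -i or pass an absolute path:

    stat -c '%U:%G %n' /path/to/archivos35.sh             # owner before
    sudo -u apache sed -i 's/old/new/g' /path/to/archivos35.sh
    stat -c '%U:%G %n' /path/to/archivos35.sh             # should still be apache:apache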
534,011 | I'm trying to use ffmpeg 's signature function to perform a duplicate analysis on several thousand video files that are listed in the text file vids.list . I need to have it so that every file is compared with every other file then that line of the list is removed. The following is what I have so far: #!/bin/bashhome="/home/user/"declare -i lineno=0while IFS="" read -r i; do /usr/bin/ffmpeg -hide_banner -nostats -i "${i}" -i "$(while IFS="" read -r f; do echo "${f}" done < ${home}/vids.list)" \ -filter_complex signature=detectmode=full:nb_inputs=2 -f null - < /dev/null let ++lineno sed -i "1 d" ${home}/vids.listdone < vids.list 2> ${home}/out.log ffmpeg is outputting a "too many arguments" because the inside while loop is dumping all the filenames into the second -i . I'm not sure if I need a wait somewhere (or a formatting option) to hold to loop open while the top while loop finishes. Just to clarify, I would need the loop to start at line 1 of the text file with paths, compare that file with the file from line 2, 3, 4...2000 (or whatever), remove line 1, and continue. | If you need to use sudo to modify the file, then use it to switch to the right user. You don't need to switch to root, that's just the default. So, in your case, you'd want to do: sudo -iu apache sed -i 's/old/new/g' archivos35.sh That will run the sed command as the user apache . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/534011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296889/"
]
} |
534,137 | I have this setup file: function latest { count=${1:-5} ; echo "Just changed" ls -lrtd * | tail -$count ; } I call it: . setup Then I ask bash if the latest function is defined: >type latestlatest is a functionlatest () { count=${1:-5}; echo "Just changed"; ls --color=auto -lrtd * | tail -$count} Just changed is an arbitrary string that I used to make sure I was not looking at a definition of latest from another file. And the question is: why is Bash adding the --color=auto to the ls command (where it is of no use since the output is piped anyway). And yes, on my shell ls is aliased to ls --color=auto , and if I remove the alias this doesn't happen. But I thought aliases where not used in functions and in any case this substitution happened at function definition time? | You've observed documented behavior; in the Alias section of the bash manual : Aliases are expanded when a function definition is read, not when the function is executed, because a function definition is itself a command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534137",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171560/"
]
} |
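One way to keep an alias from being baked in at definition time is to bypass alias expansion explicitly, for example with command (a leading backslash, \ls, works too). A hedged variant of the function from the question:

    function latest {
        count=${1:-5}
        echo "Just changed"
        command ls -lrtd * | tail -"$count"
    }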
534,164 | I am finding a lot of posts on this forum that have to do with finding various values in a text file and outputting text surrounding it. However, I don't seem to find any "stream oriented". I want to find a particular string in a file, and output only the text that follows it until the end of the file is reached. In other words, I want something that acts like a filter that ignores the text in a file until a specific string value is reached, and then from that point on outputs text to stdout until the end of the file. I want to use stdout so I can pipe output to a file if I so chose. Is there a Linux text utility that will help me do this and if so, how? Or if I need to write a bash shell script to accomplish that, what are the general steps and command line utilities I would use to do this? For example, given a sample file below: onetwothreefourfive Suppose I wanted to output all the text after the "three" so the result would be: fourfive NOTE: I did find this seemingly related post but as you can see it's a bit of a mess: how to find a text and copy the text after? | Use awk : awk 's;/^three$/{s=1}' file or awk 's;$0=="three"{s=1}' file s; will print the line if variable s is true, which is the case first time after the pattern/word has been found ... /^three$/{s=1} will set variable s to true (1) if pattern/word is found. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40546/"
]
} |
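With GNU sed, the same "everything after the match" effect can be had by deleting from the first line through the first match; note the 0,/regex/ address form is GNU-specific:

    sed '0,/^three$/d' file      # prints: four, five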
534,190 | I want to find all subfolders, that contains a markdown file with the same name (and extension .md ). For example: I want to Find following subfolders: Apple/Banana/Orange #Apple/Banana/Orange/Orange.md existsApple/Banana #Apple/Banana/Banana.md existsApple/Banana/Papaya #Apple/Banana/Papaya/Papaya.md exists Note: There can be other files or subdirectory in the directory. Any suggestions? The solutions to the problem can be tested using the following code: #!/usr/bin/env bash# - goal: "Test"# - author: Nikhil Agarwal# - date: Wednesday, August 07, 2019# - status: P T' (P: Prototyping, T: Tested)# - usage: ./Test.sh# - include:# 1.# - refer:# 1. [directory - Find only those folders that contain a File with the same name as the Folder - Unix & Linux Stack Exchange](https://unix.stackexchange.com/questions/534190/find-only-those-folders-that-contain-a-file-with-the-same-name-as-the-folder)# - formatting:# shellcheck disable=#clearmain() { TestData ExpectedOutput TestFunction "${1:?"Please enter a test number, as the first argument, to be executed!"}"}TestFunction() { echo "Test Function" echo "=============" "Test${1}" echo ""}Test1() { echo "Description: Thor" find . -type f -regextype egrep -regex '.*/([^/]+)/\1\.md$' | sort echo "Observation: ${Green:=}Pass, but shows filepath instead of directory path${Normal:=}"}Test2() { echo "Description: Kusalananda1" find . -type d -exec sh -c ' dirpath=$1 set -- "$dirpath"/*.md [ -f "$dirpath/${dirpath##*/}.md" ] && [ "$#" -eq 1 ]' sh {} \; -print | sort echo "Observation: ${Red:=}Fails as it ignores B.md${Normal:=}"}Test3() { echo "Description: Kusalananda2" find . -type d -exec sh -c ' for dirpath do set -- "$dirpath"/*.md if [ -f "$dirpath/${dirpath##*/}.md" ] && [ "$#" -eq 1 ] then printf "%s\n" "$dirpath" fi done' sh {} + | sort echo "Observation: ${Red:=}Fails as it ignores B.md${Normal:=}"}Test4() { echo "Description: steeldriver1" find . -type d -exec sh -c '[ -f "$1/${1##*/}.md" ]' find-sh {} \; -print | sort echo "Observation: ${Green:=}Pass${Normal:=}"}Test5() { echo "Description: steeldriver2" find . -type d -exec sh -c ' for d do [ -f "$d/${d##*/}.md" ] && printf "%s\n" "$d" done' find-sh {} + | sort echo "Observation: ${Green:=}Pass${Normal:=}"}Test6() { echo "Description: Stéphane Chazelas" find . -name '*.md' -print0 \ | gawk -v RS='\0' -F/ -v OFS=/ ' {filename = $NF; NF-- if ($(NF)".md" == filename) include[$0] else exclude[$0] } END {for (i in include) if (!(i in exclude)) print i}' echo "Observation: ${Red:=}Fails as it ignores B.md${Normal:=}"}Test7() { echo "Description: Zach" #shellcheck disable=2044 for fd in $(find . -type d); do dir=${fd##*/} if [ -f "${fd}/${dir}.md" ]; then ls "${fd}/${dir}.md" fi done echo "Observation: ${Green:=}Pass but shows filepath instead of directory${Normal:=}"}ExpectedOutput() { echo "Expected Output" echo "===============" cat << EOT./GeneratedTest/A./GeneratedTest/A/AA./GeneratedTest/B./GeneratedTest/C/CC1./GeneratedTest/C/CC2EOT}TestData() { rm -rf GeneratedTest mkdir -p GeneratedTest/A/AA touch GeneratedTest/index.md touch GeneratedTest/A/A.md touch GeneratedTest/A/AA/AA.md mkdir -p GeneratedTest/B touch GeneratedTest/B/B.md touch GeneratedTest/B/index.md mkdir -p GeneratedTest/C/CC1 touch GeneratedTest/C/index.md touch GeneratedTest/C/CC1/CC1.md mkdir -p GeneratedTest/C/CC2 touch GeneratedTest/C/CC2/CC2.md mkdir -p GeneratedTest/C/CC3 touch GeneratedTest/C/CC3/CC.md mkdir -p GeneratedTest/C/CC4}main "$@" | Assuming your files are sensibly named, i.e. no need for -print0 etc. 
You can do this with GNU find like this: find . -type f -regextype egrep -regex '.*/([^/]+)/\1\.md$' Output: ./Apple/Banana/Orange/Orange.md./Apple/Banana/Papaya/Papaya.md./Apple/Banana/Banana.md If you only want the directory name, add a -printf argument: find . -type f -regextype egrep -regex '.*/([^/]+)/\1\.md$' -printf '%h\n' Output when run on your updated test data: GeneratedTest/A/AAGeneratedTest/AGeneratedTest/C/CC2GeneratedTest/C/CC1GeneratedTest/B | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/534190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
534,197 | There seem to be plenty of similar questions, for example: https://stackoverflow.com/questions/5374255/how-to-write-data-to-existing-processs-stdin-from-external-process However the answer given is to prepare the program beforehand by making its stdin a pipe. There's also: https://serverfault.com/questions/178457/can-i-send-some-text-to-the-stdin-of-an-active-process-running-in-a-screen-sessi Which is answered by commanding a terminal multiplexer under which the program in question is running. That means it also needs prior preparation. This question is about how to do this without having done any preparation beforehand. At first I thought it might be as simple as: echo foo > /proc/$p/fd/0 but that just writes to the terminal. So then I tried: echo foo > /proc/$terminal_emulator/fd/$ptmx_fd but that also fails because it just opens up a new terminal device slave for echo. I already have an answer using gdb ( sigh ) which I'll be posting below, but I wonder if anyone knows of a simpler and better alternative. | Assuming your files are sensibly named, i.e. no need for -print0 etc. You can do this with GNU find like this: find . -type f -regextype egrep -regex '.*/([^/]+)/\1\.md$' Output: ./Apple/Banana/Orange/Orange.md./Apple/Banana/Papaya/Papaya.md./Apple/Banana/Banana.md If you only want the directory name, add a -printf argument: find . -type f -regextype egrep -regex '.*/([^/]+)/\1\.md$' -printf '%h\n' Output when run on your updated test data: GeneratedTest/A/AAGeneratedTest/AGeneratedTest/C/CC2GeneratedTest/C/CC1GeneratedTest/B | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/534197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189744/"
]
} |
534,210 | I'm trying to abbreviate a regex in a grep. I need to match exactly six spaces followed by an alpha character. This works: grep "^\s\s\s\s\s\s[[:alpha:]]" <filename> This does not: grep "^[[:space:]]{6}[[:alpha:]]" <filename> What am I doing wrong? | {6} is an extended regular expression "bound" that won't work in basic regular expressions (it would match {6} literally). The grep utility is using basic regular expressions by default. Two solutions: Use \{6\} instead, which is how you'd write it in a basic regular expression. Use grep -E , which enables the use of extended regular expressions in grep . Also, if you want to match spaces (and no other characters; [[:space:]] , as well as \s in GNU grep , matches space, vertical/horizontal tab, form feed, newline, and carriage return), use a literal space. For example, grep -E '^ {6}[[:alpha:]]' Related: Why does my regular expression work in X but not in Y? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365815/"
]
} |
534,220 | I tried rebooting my CentOS 7 server but it gives ridiculous error messages. As root (of course): # systemctl rebootAuthorization not available. Check if polkit service is running or see debug message for more information.Failed to start reboot.target: Connection timed outSee system logs and 'systemctl status reboot.target' for details.Exit 1 Does polkit need to check whether root has the right to reboot the machine??? If so, why? # systemctl status reboot.target● reboot.target - Reboot Loaded: loaded (/usr/lib/systemd/system/reboot.target; disabled; vendor preset: disabled) Active: inactive (dead) Docs: man:systemd.special(7)Exit 3 Do I need to enable the reboot target? Why would this be disabled by default? Perhaps this will work? # systemctl start reboot.targetAuthorization not available. Check if polkit service is running or see debug message for more information.Failed to start reboot.target: Connection timed outSee system logs and 'systemctl status reboot.target' for details.Exit 1 OK, force it, then: # systemctl --force rebootAuthorization not available. Check if polkit service is running or see debug message for more information.Failed to execute operation: Connection timed outExit 1 And the server is still up. | As weird as it may seem, trying running sudo systemctl --force reboot It has popped up in a couple of searches I made. It may be related to issues with a DBus service restarting. Can't reboot. Slow and timing out. Failed to start reboot.target: Connection timed out | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115138/"
]
} |
534,230 | I have a large tab-delimited input file listed as below SF_0000000555_RDNAU_58_10293 10873 11041 + ID=match41;Target=SF_0000000005 99 267 168SF_0000000555_RDNAU_58_10293 188079 188215 + ID=match2617;Target=SF_0000000020 3 138 135SF_0000000555_RDNAU_58_10293 137594 137704 - ID=match4142;Target=SF_0000000048 16 126 110SF_0000000555_RDNAU_58_10293 70582 71504 - ID=match45147;Target=SF_0000000350 8970 9886 916SF_0000000555_RDNAU_58_10293 100212 101204 - ID=match45148;Target=SF_0000000350 9584 10597 1013SF_0000000555_RDNAU_58_10293 101165 101747 - ID=match45149;Target=SF_0000000350 9005 9581 576SF_0000000555_RDNAU_58_10293 82434 82891 - ID=match45150;Target=SF_0000000350 9273 9730 457 I would like the output as given below SF_0000000555 10873 11041 + SF_0000000005 99 267 168SF_0000000555 188079 188215 + SF_0000000020 3 138 135SF_0000000555 137594 137704 - SF_0000000048 16 126 110SF_0000000555 70582 71504 - SF_0000000350 8970 9886 916SF_0000000555 100212 101204 - SF_0000000350 9584 10597 1013SF_0000000555 101165 101747 - SF_0000000350 9005 9581 576SF_0000000555 82434 82891 - SF_0000000350 9273 9730 457 Can you please let me know how to edit the file in-place using awk or perl. I have tried using cut command to edit each individual columns and try merging them together using the following command. awk '{print $1}' |cut -d "_" -f 1-2awk '{print $5}' |cut -d ";" -f 2- | cut -d "=" -f 2 Thanks in advance. | As weird as it may seem, trying running sudo systemctl --force reboot It has popped up in a couple of searches I made. It may be related to issues with a DBus service restarting. Can't reboot. Slow and timing out. Failed to start reboot.target: Connection timed out | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365828/"
]
} |
534,355 | I'm using Linux 4.15, and this happens to me many times when the RAM usage reaches its top - The whole OS becomes unresponsive, frozen and useless. The only thing I see it to be working is the disk (main system partition), which is massively in use. I don't know whether this issue is OS-specific, hardware-specific, or configuration-specific. Any ideas? | What can make Linux so unresponsive? Overcommitting available RAM, which causes a large amount of swapping, can definitely do this. Remember that random access I/O on your mechanical HDD requires moving a read/write head, which can only do around 100 seeks per second. It's usual for Linux to go totally out to lunch, if you overcommit RAM "too much". I also have a spinny disk and 8GB RAM. I have had problems with a couple of pieces of software with memory leaks. I.e. their memory usage keeps growing over time and never shrinks, so the only way to control it would have been to stop the software and then restart it. Based on the experiences I had during this, I am not very surprised to hear delays over ten minutes, if you are generating 3GB+ of swap. You won't necessarily see this in all cases where you have more than 3GB of swap. Theory says the key concept is thrashing . On the other hand, if you are trying to switch between two different working sets, and it requires swapping 3GB in and out, at 100MB/s it will take at least 60 seconds even if the I/O pattern can be perfectly optimized. In practice, the I/O pattern will be far from optimal. After the difficulty I had with this, I reformatted my swap space to 2GB (several times smaller than before), so the system would not be able to swap as deeply. You can do this even without messing around resizing the partition, because mkswap takes an optional size parameter. The rough balance is between running out of memory and having processes get killed, and having the system hang for so long that you give up and reboot anyway. I don't know if a 4GB swap partition is too large; it might depend what you're doing. The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly. Checking memory usage of multi-process applications is difficult. To see memory usage per-process without double-counting shared memory, you can use sudo atop -R , press M and m , and look in the PSIZE column. You can also use smem . smem -t -P firefox will show PSS of all your firefox processes, followed by a line with total PSS. This is the correct approach to measure total memory usage of Firefox or Chrome based browsers. (Though there are also browser-specific features for showing memory usage, which will show individual tabs). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/534355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228308/"
]
} |
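Spelling out two of the commands mentioned above (the device name is a placeholder; mkswap's optional size argument is in 1024-byte blocks, so 2097152 is roughly 2 GiB, and mkswap assigns a new UUID unless -U is given, which may require updating /etc/fstab):

    smem -t -P firefox                # PSS totals for all firefox processes
    sudo swapoff /dev/sdXN
    sudo mkswap /dev/sdXN 2097152     # recreate the swap area at ~2 GiB
    sudo swapon /dev/sdXN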
534,360 | Sorry for this very basic question. I work in a very small company as a developer and we are trying to help the infra team to setup a new environment. We are migrating from RHEL 5 32bits to RHEL 7 64 bists. We could install and parametrize properly the installation of SSH. Everything works except what I will call 'output tag' when we stop, start or restart the service. See photo below for a better understanding. I mean the [OK], [FAILED] that appears on the screen after using service sshd restart for example. The photo is just an example showing the tags. On RHEL 5 it works flawless. On RHEL 7 it works but I do NOT have the same output ([OK], [FAILED], etc) I think I am missing something. Did searches on google but could not find anything related to that. | What can make Linux so unresponsive? Overcommitting available RAM, which causes a large amount of swapping, can definitely do this. Remember that random access I/O on your mechanical HDD requires moving a read/write head, which can only do around 100 seeks per second. It's usual for Linux to go totally out to lunch, if you overcommit RAM "too much". I also have a spinny disk and 8GB RAM. I have had problems with a couple of pieces of software with memory leaks. I.e. their memory usage keeps growing over time and never shrinks, so the only way to control it would have been to stop the software and then restart it. Based on the experiences I had during this, I am not very surprised to hear delays over ten minutes, if you are generating 3GB+ of swap. You won't necessarily see this in all cases where you have more than 3GB of swap. Theory says the key concept is thrashing . On the other hand, if you are trying to switch between two different working sets, and it requires swapping 3GB in and out, at 100MB/s it will take at least 60 seconds even if the I/O pattern can be perfectly optimized. In practice, the I/O pattern will be far from optimal. After the difficulty I had with this, I reformatted my swap space to 2GB (several times smaller than before), so the system would not be able to swap as deeply. You can do this even without messing around resizing the partition, because mkswap takes an optional size parameter. The rough balance is between running out of memory and having processes get killed, and having the system hang for so long that you give up and reboot anyway. I don't know if a 4GB swap partition is too large; it might depend what you're doing. The important thing is to watch out for when the disk starts churning, check your memory usage, and respond accordingly. Checking memory usage of multi-process applications is difficult. To see memory usage per-process without double-counting shared memory, you can use sudo atop -R , press M and m , and look in the PSIZE column. You can also use smem . smem -t -P firefox will show PSS of all your firefox processes, followed by a line with total PSS. This is the correct approach to measure total memory usage of Firefox or Chrome based browsers. (Though there are also browser-specific features for showing memory usage, which will show individual tabs). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/534360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365938/"
]
} |
534,745 | I want to delete all the words before a pattern, for example: I want to delete all the words before STAC . Input: asdasddasdddSTACasdas Output: STACasdas I have this code sed -ni "s/^.*STAC//d" myfile | sed works linewise, which is why your attempt will not work. So how can it be done with sed ? Define an address range, starting from the STAC line ( /^STAC$/ ) to the end of the file ( $ ). Those lines should be printed, so everything else ( ! ) should get deleted ( d ): sed -i '/^STAC$/,$!d' myfile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366266/"
]
} |
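An equivalent awk sketch, printing from the first STAC line through the end of the file:

    awk '/^STAC$/ { found = 1 } found' myfile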
534,755 | I want to append a variable in a text file after a word "begin". echo $variable gives rn=45fg=12cd=6 My text file looks like this mixture Molefraction beginEND_OF_FILE My desired output mixture Molefraction begin rn=45 fg=12 cd=6END_OF_FILE I am using this sed command sed -i "/begin/a $variable" file.txt This is giving an error message like this sed: -e expression #1, char 56: extra characters after command | Using sed and a shell that knows about "here-strings": $ sed '/begin/r/dev/stdin' file <<<"$variable"mixture Molefraction beginrn=45fg=12cd=6END_OF_FILE This looks for the string begin in file and when this is matched, whatever is on standard input is inserted. We pass the value of $variable on standard input via a here-string. For other shells, the here-string is trivially replaced by printf over a pipe: $ printf '%s\n' "$variable" | sed '/begin/r/dev/stdin' filemixture Molefraction beginrn=45fg=12cd=6END_OF_FILE To save this to a new file, use a redirection at the end. If your sed supports in-place editing with sed -i , this could be used to modify the original file (testing this on a copy of the file would be advised; and running it several times would add the data to the file several times). To get the correct indentation in e.g. bash : $ ( set -f; IFS=$'\n'; printf ' %s\n' $variable ) | sed '/begin/r/dev/stdin' filemixture Molefraction begin rn=45 fg=12 cd=6END_OF_FILE Here, we rely on the shell's word splitting to format the value of the variable. We use set -f to make sure that no filename globbing occurs, then we set $IFS to a newline, and let the shell split the value of the variable into newline-delimited words. The printf is slightly modified to insert two spaces in front of each word. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352260/"
]
} |
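An alternative sketch using awk and the environment (ENVIRON sidesteps the backslash processing that awk -v performs); unlike the indented variant above, this prints the variable's lines without leading spaces:

    VAR="$variable" awk '{ print } /begin/ { print ENVIRON["VAR"] }' file.txt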
534,760 | I'm using zsh, and I have this function: function timer-raw() { # Just think `loop fsayd` is `echo hi` for the purposes of this question. eval "sleep $((($1)*60))" && eval ${(q+@)@[2,-1]:-loop fsayd}} timer-raw 's first argument tells it how many minutes to wait; The rest of the arguments, if present, would be eval ed as if typed on the command line, after the wait: timer 3 echo sth# After 3 minutes, echoes `sth`. However, if no further arguments are supplied, I want it to run the default command loop fsayd : timer 4# runs `loop fsayd` after 4 minutes The problem is that I want loop fsayd to be substituted as two words, but I don't know how. I tried these variations, too, but they didn't work either: "${(q+@)${@[2,-1]:-loop fsayd}}""${(q@)${(q@)=@[2,-1]:-loop fsayd}}""${(qq@)=@[2,-1]:-loop fsayd}" By "didn't work", I mean either a simple timer 0 would fail by a command not found , or that timer 0 ff hi 'man hi jk k' failed to return the correct number of input arguments. ( ff() echo "$#" ) Note: I do not want to use test -z "${@[2,-1]}" . | Using sed and a shell that knows about "here-strings": $ sed '/begin/r/dev/stdin' file <<<"$variable"mixture Molefraction beginrn=45fg=12cd=6END_OF_FILE This looks for the string begin in file and when this is matched, whatever is on standard input is inserted. We pass the value of $variable on standard input via a here-string. For other shells, the here-string is trivially replaced by printf over a pipe: $ printf '%s\n' "$variable" | sed '/begin/r/dev/stdin' filemixture Molefraction beginrn=45fg=12cd=6END_OF_FILE To save this to a new file, use a redirection at the end. If your sed supports in-place editing with sed -i , this could be used to modify the original file (testing this on a copy of the file would be advised; and running it several times would add the data to the file several times). To get the correct indentation in e.g. bash : $ ( set -f; IFS=$'\n'; printf ' %s\n' $variable ) | sed '/begin/r/dev/stdin' filemixture Molefraction begin rn=45 fg=12 cd=6END_OF_FILE Here, we rely on the shell's word splitting to format the value of the variable. We use set -f to make sure that no filename globbing occurs, then we set $IFS to a newline, and let the shell split the value of the variable into newline-delimited words. The printf is slightly modified to insert two spaces in front of each word. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282382/"
]
} |
534,788 | I'm on an ubuntu server and want to check a summary of disk usage of directories inside /var/lib/docker I'm confused why I'm not able to check the disk usage of all the directories inside /var/lib/docker using a glob * . The directory /var/lib/docker clearly exists and has directories inside it: $ sudo du -s /var/lib/* | grep docker865644 /var/lib/docker8 /var/lib/docker-engine$ sudo du -s /var/lib/docker/*du: cannot access '/var/lib/docker/*': No such file or directory$ sudo file /var/lib/docker/var/lib/docker: directory$ sudo ls /var/lib/docker | head -n 1builder$ sudo du -s /var/lib/docker/builder20 /var/lib/docker/builder Why am I getting an error from du ? du: cannot access '/var/lib/docker/*': No such file or directory My error seems related to being a non-root user because if I switch to the root user then I'm able to issue the du command: # du -s /var/lib/docker/* | sort -n4 /var/lib/docker/runtimes4 /var/lib/docker/swarm4 /var/lib/docker/tmp4 /var/lib/docker/trust20 /var/lib/docker/builder20 /var/lib/docker/plugins36 /var/lib/docker/volumes60 /var/lib/docker/network72 /var/lib/docker/buildkit208 /var/lib/docker/containers1880 /var/lib/docker/image863328 /var/lib/docker/overlay2 | You get the error because your (non-root) shell tried to expand the glob /var/lib/docker/* and was unable (because /var/lib/docker isn't readable by your user). Your shell then left the glob intact, leaving a literal asterisk for sudo , which is what du is complaining about: du: cannot access '/var/lib/docker/*': No such file or directory ... because there is no file or directory named * under /var/lib/docker/. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
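The usual workaround is to let a root shell expand the glob instead of the unprivileged one, for example:

    sudo sh -c 'du -s /var/lib/docker/*' | sort -n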
534,843 | Given podman is installed on a linux system and a systemd unit named baz.service: # /etc/systemd/system/baz.service[Service]ExecStart=/usr/bin/podman run --rm --tty --name baz alpine sh -c 'while true; do date; sleep 1; done'ExecStop=/usr/bin/podman stop baz And the baz.service started: # systemctl daemon-reload# systemctl start baz.service Then when I check the status of the unit I don't see the sh or sleep process in the /system.slice/baz.service cgroup # systemctl status baz● baz.service Loaded: loaded (/etc/systemd/system/baz.service; static; vendor preset: enabl Active: active (running) since Sat 2019-08-10 05:50:18 UTC; 14s ago Main PID: 16910 (podman) Tasks: 9 Memory: 7.3M CPU: 68ms CGroup: /system.slice/baz.service └─16910 /usr/bin/podman run --rm --tty --name baz alpine sh -c while# ... I was expecting to see the sh and sleep children in my baz.service status because I've heard people from redhat say podman uses a traditional fork-exec model. If podman did fork and exec, then wouldn't my sh and sleep process be children of podman and be in the same cgroup as the original podman process? I was expecting to be able to use systemd and podman to be able to manage my containers without the children going off to a different parent and escape from my baz.service ssystemd unit. Looking at the output of ps I can see that sh and sleep are actually children of a different process called conmon . I'm not sure where conmon came from, or how it was started but systemd didn't capture it. # ps -Heo user,pid,ppid,comm# ...root 17254 1 podmanroot 17331 1 conmonroot 17345 17331 shroot 17380 17345 sleep From the output it's clear that my baz.service unit is not managing the conmon -> sh -> sleep chain. How is podman different from the docker client server model? How is podman's conmon different from docker's containerd? Maybe they are both container runtimes and the the dockerd daemon is what people people want to get rid of. So maybe docker is like: dockerd daemon docker cli containerd container runtime And podman is like: podman cli conmon container runtime So maybe podman uses a traditional fork exec model but it's not the podman cli that's forking and exec, it's the conmon process. I feel confused. | The whole idea behind podman is to go away from the centralized architecture with the super-powerful overseer (e.g. dockerd ), where the centralized daemon is a single point of failure. There even is a hashtag about this - " #nobigfatdaemons ". How to avoid the centralized container management? You remove the single main daemon (again, dockerd ) and start the containers independently (at the end of the day, containers are just processes, so you don't need the daemon to spawn them). However, you still need the way to collect container's logs - someone has to hold stdout and stderr of the container; collect container's exit code - someone has to wait(2) on container's PID 1; For this purpose, each podman container is still supervised by a small daemon, called conmon (from "container monitor"). The difference with the Docker daemon is that this daemon is as small as possible (check the size of the source code ), and it is spawned per-container. If conmon for one container crashes, the rest of the system stays unaffected. Next, how the container gets spawned? 
Considering that the user may want to run the container in the background, like with Docker, the podman run process forks twice and only then executes conmon : $ strace -fe trace=fork,vfork,clone,execve -qq podman run alpineexecve("/usr/bin/podman", ["podman", "run", "alpine"], 0x7ffeceb01518 /* 30 vars */) = 0...[pid 8480] clone(child_stack=0x7fac6bffeef0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tid=[8484], tls=0x7fac6bfff700, child_tidptr=0x7fac6bfff9d0) = 8484...[pid 8484] clone(child_stack=NULL, flags=CLONE_VM|CLONE_VFORK|SIGCHLD <unfinished ...>[pid 8491] execve("/usr/bin/conmon", ... <unfinished ...>[pid 8484] <... clone resumed>) = 8491 The middle process between podman run and conmon (i.e. the direct parent of conmon - in the example above it is PID 8484) will exit and conmon will be reparented by init , thus becoming self-managed daemon. After this, conmon also forks off the runtime (e.g. runc ) and, finally, the runtime executes the container's entrypoint (e.g. /bin/sh ). When the container is running, podman run is no longer required and may exit, but in your case it stays online, because you did not ask it to detach from the container. Next, podman makes use of cgroups to limit the containers. This means that it creates new cgroups for new containers and moves the processes there . By the rules of cgroups, the process may be the member of only one cgroup at a time, and adding the process to some cgroup removes it from other cgroup (where it was previously) within the same hierarchy. So, when the container is started, the final layout of cgroups looks like the following: podman run remains in cgroups of the baz.service , created by systemd , the conmon process is placed in its own cgroups, and containerized processes are placed in their own cgroups: $ ps axf<...> 1660 ? Ssl 0:01 /usr/bin/podman run --rm --tty --name baz alpine sh -c while true; do date; sleep 1; done 1741 ? Ssl 0:00 /usr/bin/conmon -s -c 2f56e37a0c5ca6f4282cc4c0f4c8e5c899e697303f15c5dc38b2f31d56967ed6 <...> 1753 pts/0 Ss+ 0:02 \_ sh -c while true; do date; sleep 1; done13043 pts/0 S+ 0:00 \_ sleep 1<...>$ cd /sys/fs/cgroup/memory/machine.slice$ ls -d1 libpod*libpod-2f56e37a0c5ca6f4282cc4c0f4c8e5c899e697303f15c5dc38b2f31d56967ed6.scopelibpod-conmon-2f56e37a0c5ca6f4282cc4c0f4c8e5c899e697303f15c5dc38b2f31d56967ed6.scope$ cat libpod-2f56e37a0c5ca6f4282cc4c0f4c8e5c899e697303f15c5dc38b2f31d56967ed6.scope/cgroup.procs 175313075$ cat libpod-conmon-2f56e37a0c5ca6f4282cc4c0f4c8e5c899e697303f15c5dc38b2f31d56967ed6.scope/cgroup.procs 1741 Note: PID 13075 above is actually a sleep 1 process, spawned after the death of PID 13043. Hope this helps. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/534843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
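To see the process and cgroup layout described above on a running system, something along these lines can be used (the libpod paths follow the cgroup v1 layout shown in the answer and will differ on a pure cgroup v2 host):

    ps -eo pid,ppid,cmd --forest | grep -E 'podman|conmon'
    ls -d /sys/fs/cgroup/memory/machine.slice/libpod-* 2>/dev/null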
534,929 | I have a string like: schedule="0.25" and I want to replace the 0.25, let's say with 0.50. I could achieve this with: sed 's/\"....\"/\"0\.50\"/g' The problem is I don't know the value between the double quotes, and therefore I do not know the length. It could be any value, but it will always be preceded by schedule= . | You can use [^"]* to match a sequence of zero or more non- " characters. So $ echo 'schedule="0.25"' | sed 's/"[^"]*"/"0.5"/'schedule="0.5" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/534929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
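Since the value is always preceded by schedule=, the substitution can also be anchored to that prefix so that other quoted strings on the line are left untouched; a small variation on the answer above:

    sed 's/schedule="[^"]*"/schedule="0.50"/' file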
535,111 | I have a directory which contains a lot of CSV files. The CSV files have many columns the first of which is a timestamp (as number of seconds since the UNIX Epoch). I want to categorize files in the directory based on the value of that timestamp column in the first line of each file. (There is no header row in the files). I want a bash script that run on the directory every two minutes and categorize files in sub-directories in the following layout: YYYY/ └── MM/ └── DD/ Is it possible? How can I do that? Content of CSV file is like below: timestamp,A,B,C,D,E,F,G,H,I for example: 1565592149,A,B,C,D,E,F,G,H,I | Maybe something like: #! /bin/bash -for f in *.csv; do IFS=, read -r timestamp rest < "$f" && printf -v dir '%(%Y/%m/%d)T' "$timestamp" && mkdir -p -- "$dir" && mv -- "$f" "$dir/"done Example: $ head -- *.csv==> test2.csv <==1328012580,A,B,C,D,E,F,G,H,I==> test.csv <==1565592149,A,B,C,D,E,F,G,H,I$ that-script$ tree.├── 2012│ └── 01│ └── 31│ └── test2.csv└── 2019 └── 08 └── 12 └── test.csv6 directories, 2 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366546/"
]
} |
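To run such a script every two minutes, a crontab entry like the one below works (the script path is a placeholder); flock is an optional guard against overlapping runs if one pass takes longer than two minutes:

    */2 * * * * flock -n /tmp/sort-csv.lock /path/to/sort-csv.sh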
535,113 | files will generate every 5th minute in /kit directory.Want to tar all the files and move that files to /kit/bkp directory every 2 hr | Maybe something like: #! /bin/bash -for f in *.csv; do IFS=, read -r timestamp rest < "$f" && printf -v dir '%(%Y/%m/%d)T' "$timestamp" && mkdir -p -- "$dir" && mv -- "$f" "$dir/"done Example: $ head -- *.csv==> test2.csv <==1328012580,A,B,C,D,E,F,G,H,I==> test.csv <==1565592149,A,B,C,D,E,F,G,H,I$ that-script$ tree.├── 2012│ └── 01│ └── 31│ └── test2.csv└── 2019 └── 08 └── 12 └── test.csv6 directories, 2 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366549/"
]
} |
535,118 | I have this function, rpargs () { local i args=() for i in "$@" do test -e "$i" && args+="$(realpath --canonicalize-existing -- "$i")" || args+="$i" done} And I want to return args . The only ways I can think of are either to printf '%s\0' and then split it via expansion flags (0@) , or to use a global like the code above. | zsh 's return builtin can only return a 32bit signed integer like the _exit() system call. While that's better than most other Bourne-like shells, that still can't return arbitrary strings or list of strings like the rc / es shells. The return status is more about returning a success/failure indication. Here, alternatively, you can have the function take the name of the array to fill in as argument, like: myfunc() { local arrayname=$1; shift # ... eval $arrayname'=("$elements[@]")' # the returned $? will be 0 here for success unless that eval command # fails.}myfunc myarray other args Your printf '%s\0' approach wouldn't work for array elements that contain NULs. Instead you could use the qq parameter expansion flag to quote elements on output, and the z (to parse quotes) and Q (to remove quoting) on input like: myfunc() { # ... print -r -- ${(qq)elements}}myarray=("${(@Q)${(z)$(myfunc)}}") But in addition to being less legible, it's also less efficient as it means forking a process and transfering the output of myfunc through a pipe in addition to the quoting/unquoting. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282382/"
]
} |
535,214 | I am using bash and most interested in an answer for bash , but you can answer about how to do it in other shells if you want. I have an environment variable, KEY , and I want to store the value of that environment variable in a file. Currently I am doing echo "${KEY}" > ./key.pem In practice this seems to be OK, but in theory this breaks when KEY=-n . $ export KEY=-n$ echo BEGIN "${KEY}" ENDBEGIN -n END$ echo "${KEY}" ENDEND$ So, is there a better way to store the value of a single environment variable in a file? (Note that export and declare will include the name of the variable in their outputs, which is no good for me.) | If it's storing for the sake of reading it in later (in a bash script), just use declare -p KEY and then source the file to read it in again. If you just want to store the value, use printf '%s\n' "$KEY" as you would do when you output any variable data. So, printf '%s\n' "$KEY" >key.pem or printf 'BEGIN %s END\n' "$KEY" >key.pem or whatever you need to output. Your issue occurs since -n is a valid option to echo in bash . The strings -e and -E (and combinations like -neEne ) would also cause issues in bash , for the same reason. Depending on how bash is built or the environment or options, backslash characters in arguments may also be a problem. These issues and more are outlined in the following Q/A: Why is printf better than echo? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/535214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17945/"
]
} |
535,434 | Upon installing Nvidia drivers I was promoted to set up a MOK password or third party drivers may not work properly, so I created one. After reboot I was presented with a blue MOK management screen with a few options in it, the first one being continue boot. So I chose this and when boot was finished, my second monitor wasn't being recognized. Remembering reading something about secure boot when initially prompted about MOK, I booted into the BIOS and turned secure boot off. Now I have my second screen back. Several questions come to mind. First, what is MOK? Do I need it, and if not, how do I get rid of it? Was losing recognition of my second screen due to installing Nvidia drivers, or setting up MOK? Can I just keep secure boot off? Thanks for any help! | ad 1) MOK (Machine Owner Key) is about securing the boot process by only allowing approved OS components and drivers to run. MOK must be implemented by the "BIOS" - or some startup code inside the computer, anyway. The main idea is that only code which is signed is allowed to run while loading the operating system (OS). Once that is booted, the OS can take over responsibility from the BIOS for securing the system. The MOK system uses public key cryptography, which means that you can create a key pair, then sign, with your private/secret key, all components that are allowed to run. This includes the GRUB boot loader itself. The BIOS then uses your public key (you need to install it) to check signatures before running the code. Here are some docs on Secure Boot and MOK The beauty of MOK, in my personal opinion, is that you can create the keys yourself and sign those components that you trust. In the past, the EFI BIOS had only Microsoft's public key installed and they were hesitant to sign Linux boot loaders :-) That's why you needed SHIM in the past (a go-between between EFI BIOS and GRUB). All Secure Boot methods hope to secure the system from hackers and viruses by guaranteeing a cleanly booted system which is not tampered by malware. If startup code or drivers have been tampered with, it is detected so that you can act accordingly. There are not many options to defend your machine if the attacker has physical access to your computer ("evil maid attack") - even if for example your disk with all the important data is encrypted an attacker can modify the boot code to read your password while you enter it, then transmit or store it for them to read later. Secure Boot works against such a modification. Kyle Rankin has done a lot of work on securing the boot process for the Librem range of Laptops, and here is a good article on his work . I believe it is well worth reading even if it is not directly applicable to your system - the idea is just the same. ad 2) and 4) Do you need MOK and Secure Boot? Not if you will never be successfully attacked by a hacker, especially one who might have physical access to your laptop or gains root access from the Internet through browser/office/Linux bugs. As for disabling, you have done the right thing - disable Secure Boot in your BIOS. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/331585/"
]
} |
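A few commands that may help when exploring this; a hedged sketch, assuming the mokutil tool that ships with the shim/MOK tooling on most Ubuntu/Debian-style installs:
mokutil --sb-state              # is Secure Boot currently enabled?
mokutil --list-enrolled         # which Machine Owner Keys are enrolled
sudo mokutil --import MOK.der   # enrol a key; enrolment is completed from the blue MOK screen on the next boot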
535,629 | On the Raspbian Stretch, usbmount can be made to work by changing the MountFlags option in /lib/systemd/system/systemd-udevd.service from slave to shared . On Raspbian Buster (Kernel 4.19.58-v7l+), the MountFlags option has been removed from the service file, despite adding it back in as shared , usbmount no longer works. I have also set PrivateMounts=no without success. I have also tried using udev-media-automount without success. I am using the 'lite' version of Raspbian, so the regular graphical auto-mounting is not available. What is the best solution to automatically mount and unmount USB drives? | Looks like PrivateMounts now defaults to yes . This fixed it for me: sudo systemctl edit systemd-udevd Add the following to the service: [Service]PrivateMounts=no Then restart udevd : sudo systemctl restart systemd-udevd Now usbmount works again for me (drives are mounted to /media/usb* as expected). Answer credit: https://raspberrypi.stackexchange.com/a/100375/45183 Further reading: https://github.com/systemd/systemd/issues/9873 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243832/"
]
} |
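For reference, a sketch of what the override ends up looking like and how to confirm it took effect (paths per systemd defaults):
# /etc/systemd/system/systemd-udevd.service.d/override.conf  (created by "systemctl edit")
[Service]
PrivateMounts=no
systemctl show systemd-udevd -p PrivateMounts   # should now print PrivateMounts=no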
535,662 | Given a bash variable with the value 2019-08-15 , is there some utility that can convert that date to the format August 15, 2019 ? | On Linux, or any system that uses GNU date : $ thedate=2019-08-15$ date -d "$thedate" +'%B %e, %Y'August 15, 2019 On macOS, OpenBSD and FreeBSD, where GNU date is not available by default: $ thedate=2019-08-15$ date -j -f '%Y-%m-%d' "$thedate" +'%B %e, %Y'August 15, 2019 The -j option disables setting the system clock, and the format string used with -f describes the input date format (should be a strptime(3) format string describing the format used by your variable's value). Then follows the value of your variable and the format that you want your output to be in (should be a strftime(3) format string). NetBSD users may use something similar to the above but without the -f input_fmt option, as their date implementation uses parsedate(3) . Note also the -d option to specify the input date string: $ thedate=2019-08-15$ date -j -d "$thedate" +'%B %e, %Y'August 15, 2019 See also the manual for date on your system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
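If the conversion is needed in a script that may run on either kind of system, a small wrapper like this sketch works (it tries the GNU form first and falls back to the BSD/macOS form):
pretty_date() {
    date -d "$1" +'%B %e, %Y' 2>/dev/null ||
        date -j -f '%Y-%m-%d' "$1" +'%B %e, %Y'
}
pretty_date "$thedate"    # -> August 15, 2019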
535,726 | I have 2 files, and column 1 of file 1 must replace with column 2 of file 2, after column 2,3,4-5 or 5-4 (cross-match) of file 1 match with the column 1,4,5-6 or 6-5 of file 2. file 1 SNP Chr Pos EA NEA EAF Beta SE Pvalue Neff1:79137 1 79137 A T 0.25 -0.026 0.0073 4.0e-04 2314201:79033 1 79033 A G 0.0047 -0.038 0.056 4.9e-01 2254291:118630 1 118630 C T 0.99 -0.033 0.055 5.5e-01 2263111:533179 1 533179 A G 1 -0.098 0.19 6.1e-01 185906 file 2 1 1:79033_A_G 0 79033 A G1 1:79137_A_T 0 79137 T A1 1:118630_C_T 0 118630 T C1 1:533179_A_G 0 533179 G A I need the output to look like this: SNP Chr Pos EA NEA EAF Beta SE Pvalue Neff 1:79137_A_T 1 79137 A T 0.25 -0.026 0.0073 4.0e-04 231420 1:79033_A_G 1 79033 A G 0.0047 -0.038 0.056 4.9e-01 225429 1:118630_C_T 1 118630 C T 0.99 -0.033 0.055 5.5e-01 226311 1:533179_A_G 1 533179 A G 1 -0.098 0.19 6.1e-01 185906 The files don't have the exact number of rows and the files are not tab-delimited. I tried the below code but it doesn't work, Can you correct my code? awk 'NR==FNR{chr[$1]=$1;snp[$2]=$2;pos[$4]=$4;a1[$5]=$5;a2[$6]=$6;next} ($1 in chr)&&($4 in pos)&& ((($5 in a1) && ($6 in a2)) || (($6 in a1) && ($5 in a2))) {$2==snp[$2]}' file 2 file1 Edit 1: the perl code below makes some mistakes and produces duplicate lines in around 20 000 lines, one example is, file 1 SNP Chr Pos EA NEA EAF Beta SE Pvalue Neff7:10100610 7 10100610 A G 0.0002 0.13 0.58 8.2e-01 1206587:10100610 7 10100610 C G 0.0013 0.1 0.13 4.4e-01 13917010:1006107 10 1006107 C G 1 -0.11 0.42 7.9e-01 152016 file 2 7 7:10100610_G_A 0 10100610 A G7 7:10100610_G_C 0 10100610 C G10 10:1006107_C_G 0 1006107 G C Expected Output of these lines: 7:10100610_G_A 7 10100610 A G 0.0002 0.13 0.58 8.2e-01 1206587:10100610_G_C 7 10100610 C G 0.0013 0.1 0.13 4.4e-01 13917010:1006107_C_G 10 1006107 C G 1 -0.11 0.42 7.9e-01 152016 But the output the perl code gives 7:10100610_G_A 7 10100610 A G 0.0002 0.13 0.58 8.2e-01 12065810:1006107_C_G 7 10100610 C G 0.0013 0.1 0.13 4.4e-01 13917010:1006107_C_G 10 1006107 C G 1 -0.11 0.42 7.9e-01 152016 | The join command will do the work of joining up matching lines from multiple files. But it has some requirements on its input files, so you'll need to make some temporary files along the way, with a few extra fields. awk '{printf $2" "$3" "$4" "$5"%"$1"%"; $1="";print $0 "%" NR }' < file1 | sort > 1.tmpawk '{print $1" "$4" "$5" "$6"%"$2} $5 != $6 {print $1" "$4" "$6" "$5"%"$2}' < file2 | sort > 2.tmpjoin -a 1 -t % -o 1.4 2.2 1.2 1.3 1.tmp 2.tmp | sort -t % -n | awk -F % '!$2{$2=$3}{print $2" "$4}' Step by step Preprocessing the first file: awk '{printf $2" "$3" "$4" "$5"%"$1"%"; $1="";print $0 "%" NR }'' Example output: 1 118630 C T%1:118630% 1 118630 C T 0.99 -0.033 0.055 5.5e-01 226311%4 Those 4 fields, separated by % , are: the "key" that has to be matched (input fields 2-5) the original first column (needed in case there's no match) the remainder of the original line the original line number (so we can restore the file order after sort ) This output is piped through sort and into a temporary file, because join requires its inputs to have been sorted. For the second file: awk '{print $1" "$4" "$5" "$6"%"$2} $5 != $6 {print $1" "$4" "$6" "$5"%"$2}' Example output: 1 118630 C T%1:118630_C_T1 118630 T C%1:118630_C_T As you specified that fields 5 and 6 should match either way round, a second line is printed with them swapped (provided that they aren't identical). 
The % -separated fields here are the "key" to be matched column 2 Again, the output is piped through sort and into another temporary file. Then comes the main "join" step: join -a 1 -t % -o 1.4 2.2 1.2 1.3 1.tmp 2.tmp The -a 1 instructs join to keep lines from the first set when there's no match in the second. -t % sets the separator to % (rather than whitespace). The -o argument produces the following four fields of output: file 1, column 4: the line number file 2, column 2: replacement from file2 (when there's no match, this will be empty) file 1, column 2: the original column1 from file1 file 1, column 3: the rest of the line from file1 Example output line: 4%1:118630_C_T%1:118630% 1 118630 C T 0.99 -0.033 0.055 5.5e-01 226311 Then sort can restore the original file order (sort numerically, field separator % ) sort -t % -n The final awk checks whether the "replacement" field is empty (because no match was found) and if so, uses the original column1 instead. It also discards the line number and all those % s. awk -F % '!$2{$2=$3}{print $2" "$4}' Final output line: 1:118630_C_T 1 118630 C T 0.99 -0.033 0.055 5.5e-01 226311 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367142/"
]
} |
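A single-pass awk alternative is also possible; this is only a sketch, assuming whitespace-separated columns laid out exactly as in the sample files and unique chr/pos/allele combinations:
awk '
NR == FNR { map[$1","$4","$5","$6] = $2; map[$1","$4","$6","$5] = $2; next }   # index file2 by chr,pos and both allele orders
FNR == 1  { print; next }                                                      # keep the header of file 1
{ k = $2","$3","$4","$5; if (k in map) $1 = map[k]; print }                    # replace column 1 on a match
' file2 file1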
535,772 | I want to make sure I understand the following code: tar xvzf FILE --exclude-from=exclude.me --strip-components 1 -C DESTINATION which was posted in this answer . From man tar : --strip-components=NUMBER strip NUMBER leading components from file names on extraction -C , --directory=DIR change to directory DIR I didn't understand the manual explanation for --strip-components . About -C , I understood that it means something like "put stripped components in a noted directory." What does --strip-components -C mean? | The fragment of manpage you included in your question comes from manfor GNU tar. GNU is a software project that prefers info manualsover manpages. In fact, tar manpage has been added to the GNU tarsource code tree only in2014 and it still is just a reference, not a full-blown manual withexamples. You can invoke a full info manual with info tar , it'salso available online here . It containsseveral examples of --strip-components usage, the relevant fragmentsare: --strip-components=number Strip given number of leading components from file names before extraction. For example, if archive `archive.tar' contained `some/file/name', then running tar --extract --file archive.tar --strip-components=2 would extract this file to file `name'. and: --strip-components=number Strip given number of leading components from file names before extraction. For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type: $ tar -xf usr.tar --strip=2 usr/include/stdlib.h The option `--strip=2' instructs tar to strip the two leading components (`usr/' and `include/') off the file name. That said; There are other implementations of tar out there, for example FreeBSDtar manpage has adifferent explanation of this command: --strip-components count Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after checking inclusion/exclusion patterns but before security checks. In other words, you should understand a Unix path as a sequence ofelements separated by / (unless there is only one / ). Here is my own example (other examples are available in the info manual I linked to above): Let's create a new directory structure: mkdir -p a/b/c Path a/b/c is composed of 3 elements: a , b , and c . Create an empty file in this directory and put it into .tar archive: $ touch a/b/c/FILE$ tar -cf archive.tar a/b/c/FILE FILE is a 4th element of a/b/c/FILE path. List contents of archive.tar: $ tar tf archive.tara/b/c/FILE You can now extract archive.tar with --strip-components and anargument that will tell it how many path elements you want to be removed from the a/b/c/FILE when extracted. 
Remove an original a directory: rm -r a Extract with --strip-components=1 - only a has not been recreated: $ tar xf archive.tar --strip-components=1$ ls -Altotal 16-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tardrwxr-xr-x 3 ja users 4096 Mar 26 15:43 b$ tree bb└── c └── FILE1 directory, 1 file With --strip-components=2 you see that a/b - 2 elements have notbeen recreated: $ rm -r b$ tar xf archive.tar --strip-components=2$ ls -Altotal 16-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tardrwxr-xr-x 2 ja users 4096 Mar 26 15:46 c$ tree cc└── FILE0 directories, 1 file With --strip-components=3 3 elements a/b/c have not been recreatedand we got FILE in the same level directory in which we run tar : $ rm -r c$ tar xf archive.tar --strip-components=3$ ls -Altotal 12-rw-r--r-- 1 ja users 0 Mar 26 15:39 FILE-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar -C option tells tar to change to a given directory before running arequested operation, extracting but also archiving. In thiscomment you asked: Asking tar to do cd: why cd? I mean to ask, why it's not just mv? Why do you think that mv is better? To what directory would you liketo extract tar archive first: /tmp - what if it's missing or full? "$TMPDIR" - what if it's unset, missing or full? current directory - what if user has no w permission, just r and x ? what if a temporary directory, whatever it is already containedfiles with the same names as in tar archive and extracting wouldoverwrite them? what if a temporary directory, whatever it is didn't support Unixfilesystems and all info about ownership, executable bits etc. wouldbe lost? Also notice that -C is a common change directory option in otherprograms as well, Git and make are first that come to mymind. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/535772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
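Applied back to the command from the question, the two options combine like this (DESTINATION must already exist):
mkdir -p DESTINATION
tar xvzf FILE --exclude-from=exclude.me --strip-components 1 -C DESTINATION
# -C: extract into DESTINATION; --strip-components 1: drop the archive's single top-level directory from every member name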
535,777 | I have a folder called movies on my Ubuntu, which contains many subfolders. Each subfolder contains 1 mp4 file and may contain other files (jpg, srt). Each subfolder has the same title format: My Subfolder 1 (2001) Bla BlaMy Subfolder 2 (2000) BlaMy Subfolder 3 (1999) How can I rename the mp4 files same as parent folder but without the year and the blabla? For example, the mp4s inside the subfolders above become : My Subfolder 1.mp4My Subfolder 2.mp4My Subfolder 3.mp4 I want the mp4s to stay in their subfolder, just their name will be changed. The year is always in parentheses. | The fragment of manpage you included in your question comes from manfor GNU tar. GNU is a software project that prefers info manualsover manpages. In fact, tar manpage has been added to the GNU tarsource code tree only in2014 and it still is just a reference, not a full-blown manual withexamples. You can invoke a full info manual with info tar , it'salso available online here . It containsseveral examples of --strip-components usage, the relevant fragmentsare: --strip-components=number Strip given number of leading components from file names before extraction. For example, if archive `archive.tar' contained `some/file/name', then running tar --extract --file archive.tar --strip-components=2 would extract this file to file `name'. and: --strip-components=number Strip given number of leading components from file names before extraction. For example, suppose you have archived whole `/usr' hierarchy to a tar archive named `usr.tar'. Among other files, this archive contains `usr/include/stdlib.h', which you wish to extract to the current working directory. To do so, you type: $ tar -xf usr.tar --strip=2 usr/include/stdlib.h The option `--strip=2' instructs tar to strip the two leading components (`usr/' and `include/') off the file name. That said; There are other implementations of tar out there, for example FreeBSDtar manpage has adifferent explanation of this command: --strip-components count Remove the specified number of leading path elements. Pathnames with fewer elements will be silently skipped. Note that the pathname is edited after checking inclusion/exclusion patterns but before security checks. In other words, you should understand a Unix path as a sequence ofelements separated by / (unless there is only one / ). Here is my own example (other examples are available in the info manual I linked to above): Let's create a new directory structure: mkdir -p a/b/c Path a/b/c is composed of 3 elements: a , b , and c . Create an empty file in this directory and put it into .tar archive: $ touch a/b/c/FILE$ tar -cf archive.tar a/b/c/FILE FILE is a 4th element of a/b/c/FILE path. List contents of archive.tar: $ tar tf archive.tara/b/c/FILE You can now extract archive.tar with --strip-components and anargument that will tell it how many path elements you want to be removed from the a/b/c/FILE when extracted. 
Remove an original a directory: rm -r a Extract with --strip-components=1 - only a has not been recreated: $ tar xf archive.tar --strip-components=1$ ls -Altotal 16-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tardrwxr-xr-x 3 ja users 4096 Mar 26 15:43 b$ tree bb└── c └── FILE1 directory, 1 file With --strip-components=2 you see that a/b - 2 elements have notbeen recreated: $ rm -r b$ tar xf archive.tar --strip-components=2$ ls -Altotal 16-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tardrwxr-xr-x 2 ja users 4096 Mar 26 15:46 c$ tree cc└── FILE0 directories, 1 file With --strip-components=3 3 elements a/b/c have not been recreatedand we got FILE in the same level directory in which we run tar : $ rm -r c$ tar xf archive.tar --strip-components=3$ ls -Altotal 12-rw-r--r-- 1 ja users 0 Mar 26 15:39 FILE-rw-r--r-- 1 ja users 10240 Mar 26 15:41 archive.tar -C option tells tar to change to a given directory before running arequested operation, extracting but also archiving. In thiscomment you asked: Asking tar to do cd: why cd? I mean to ask, why it's not just mv? Why do you think that mv is better? To what directory would you liketo extract tar archive first: /tmp - what if it's missing or full? "$TMPDIR" - what if it's unset, missing or full? current directory - what if user has no w permission, just r and x ? what if a temporary directory, whatever it is already containedfiles with the same names as in tar archive and extracting wouldoverwrite them? what if a temporary directory, whatever it is didn't support Unixfilesystems and all info about ownership, executable bits etc. wouldbe lost? Also notice that -C is a common change directory option in otherprograms as well, Git and make are first that come to mymind. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/535777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367197/"
]
} |
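For the renaming question itself, a minimal sketch (run from inside the movies directory; assumes one .mp4 per subfolder and that the year is always the first parenthesised part of the folder name):
for dir in */; do
    name=${dir%/}              # e.g. "My Subfolder 1 (2001) Bla Bla"
    clean=${name%% (*}         # strip " (year)..." -> "My Subfolder 1"
    for f in "$dir"*.mp4; do
        [ -e "$f" ] || continue
        mv -n -- "$f" "$dir$clean.mp4"
    done
done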
535,816 | I have as an input File1 which looks like this: A,22,1,2,3,4,5G,26,5,6,7X,28,10,20,10 I would like to apply an equation to columns 3-end while maintaining file structure. For example if the equation I want use is multiplying by 2 I am looking for the output: A,22,2,4,6,8,10G,26,10,12,14X,28,20,40,20 I attempted to do this with the following command: awk -F ',' '{for(i=1; i<=NF; i++) if (i >= 3) print 2*$i else print $i }' File1 This provides the correct output but gets rid of all file structure. If of use the actual equation I am looking to use is: 2*(2*($i-1)+1) Any explanations accompanying a solution is much appreciated since I am still quite new to this! | You just need to set the output field separator ( OFS ), e.g.: awk '{ for (i=3; i<=NF; i++) $i*=2 } 1' FS=, OFS=, infile Or using your formula: awk '{ for (i=3; i<=NF; i++) $i = 2*(2*($i-1)+1) } 1' FS=, OFS=, infile Output: A,22,2,4,6,8,10G,26,10,12,14X,28,20,40,20 The 1 at the end of the script is a short-hand for { print $0 } | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366885/"
]
} |
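The same one-liner spelled out with comments, since the question asked for explanations:
awk 'BEGIN { FS = OFS = "," }           # read and write comma-separated fields
     { for (i = 3; i <= NF; i++)        # visit column 3 up to the last column
           $i = 2 * (2 * ($i - 1) + 1)  # apply the formula to each of them
       print }                          # print the (possibly modified) line
' File1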
535,847 | script.sh: #!/bin/bashmy-bin-file-i-runif [ $? -eq 0 ]then exit 0else if [[ >&2 == *"name_to_handle_at"* ]]; then exit 0 fi echo >&2 exit 1fi I'd like to run my command and if it throws an error which the message includes "name_to_handle_at" it will handle it like the script had no errors, all other errors should be shown as usual. Can't really get it to work. | Your syntax is faulty as you can't just compare the standard error of some previously executed command with == like that. One suggestion is to save the error stream to a file and then parse that: #!/bin/bashif ! my-bin-file-i-run 2>error.log; then if ! grep -q -F 'name_to_handle_at' error.log; then echo 'some error message' >&2 exit 1 fifi This would run the command and redirect the standard error stream to a file called error.log . If the command terminates with an error, grep is used to look for the string name_to_handle_at in the log file. If that can't be found, an error message is printed and the script terminates with a non-zero exit status. In any other case, the script terminates with a zero exit status. If you want the error.log file to be removed when your script terminates, you may do so explicitly with rm error.log in the appropriate places, or with an EXIT trap: #!/bin/bashtrap 'rm -f error.log' EXITif ! my-bin-file-i-run 2>error.log; then if ! grep -q -F 'name_to_handle_at' error.log; then echo 'some error message' >&2 exit 1 fifi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63803/"
]
} |
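A variant that skips the log file by capturing stderr in a variable may also be worth considering; a sketch, assuming the command's stdout is not needed:
if ! err=$(my-bin-file-i-run 2>&1 >/dev/null); then
    case $err in
        *name_to_handle_at*) exit 0 ;;                       # known, tolerated error
        *) printf '%s\n' "$err" >&2; exit 1 ;;               # anything else is reported as usual
    esac
fi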
535,862 | we are tryng to install the java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64.rpm yum localinstall java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64.rpmLoaded plugins: langpacks, product-id, search-disabled-repos, subscription-managerThis system is registered to Red Hat Subscription Management, but is not receiving updates. You can use subscription-manager to assign subscriptions.Examining java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64.rpm: 1:java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64Marking java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64.rpm to be installedResolving Dependencies--> Running transaction check---> Package java-1.8.0-openjdk-devel.x86_64 1:1.8.0.161-2.b14.el7 will be installed--> Processing Dependency: java-1.8.0-openjdk(x86-64) = 1:1.8.0.161-2.b14.el7 for package: 1:java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64--> Finished Dependency ResolutionError: Package: 1:java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64 (/java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64) Requires: java-1.8.0-openjdk(x86-64) = 1:1.8.0.161-2.b14.el7 Installed: 1:java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64 (installed) java-1.8.0-openjdk(x86-64) = 1:1.8.0.171-8.b10.el7_5 Available: 1:java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64 (local) java-1.8.0-openjdk(x86-64) = 1:1.8.0.161-2.b14.el7 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest we can see from the output that: Requires: java-1.8.0-openjdk(x86-64) = 1:1.8.0.161-2.b14.el7 but what I not understand is that we try to install the same rpm that is required!! java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64.rpm so what is going here? current installed rpm's rpm -qa | grep openjdkjava-1.8.0-openjdk-headless-1.8.0.171-8.b10.el7_5.x86_64java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64 java -versionopenjdk version "1.8.0_171"OpenJDK Runtime Environment (build 1.8.0_171-b10)OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode) the only way to install it is by rpm ( not by yum ) rpm -Va --nofiles --nodigest java-1.7.0-openjdk-devel-1.7.0.171-2.6.13.2.el7.x86_64.rpm | java-1.8.0-openjdk-devel and java-1.8.0-openjdk versions need to match exactly. in short: yum downgrade java-1.8.0-openjdk-1.8.0.161-2.b14.el7yum install java-1.8.0-openjdk-devel-1.8.0.161-2.b14.el7.x86_64.rpm I just wrote a whole answer to this question here: https://stackoverflow.com/questions/57498755/installing-python36-devel-on-rhel7-failing/57519956#57519956 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/535862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
535,936 | I have a headless server that is logged into remotely by multiple users. None of the other users are in the sudoers file, so they cannot obtain root via sudo . However, since the permissions on su are -rwsr-xr-x there's nothing stopping them from attempting to brute force the root password. One could argue that if a user knows the root password they can compromise the system anyway, but I don't think this is the case. OpenSSH is configured with PermitRootLogin no and PasswordAuthentication no , and none of the other users have physical access to the server. As far as I can tell, the world execute permission on /usr/bin/su is the only avenue for users attempting to gain root on my server. What's further puzzling to me in that it doesn't even seem useful. It allows me to run su directly instead of needing to do sudo su , but this is hardly an inconvenience. Am I overlooking something? Is the world execute permission on su just there for historic reasons? Are there any downsides to removing that permission that I haven't encountered yet? | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mikePassword:(I type mike's password (or have him type it) and press Enter)13:27:22 /home/mike> iduid=1004(mike) gid=1004(mike) groups=1004(mike)13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's13:27:29 /home/jim> iduid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mikePassword:(I type my own password, because this is sudo asking)13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/535936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100178/"
]
} |
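If the brute-force worry from the question still matters, the usual knob is PAM rather than file permissions; on many systems /etc/pam.d/su already ships a commented-out pam_wheel line (a sketch — the file name and group may differ per distribution):
# /etc/pam.d/su — uncomment so that only members of the wheel group may invoke su:
auth required pam_wheel.so use_uid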
535,939 | How can I output the command itself in addition to its output to a file? I know that I can do how to output text to both screen and file inside a shell script? to capture the output. My use case is specific to pytest. pytest /awesome_tests -k test_quick_tests -n auto &> test_output_$(date -u +"%FT%H%MZ").txt It would be really helpful to have the command executed in the output so I knew specifically what the results were for. | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mikePassword:(I type mike's password (or have him type it) and press Enter)13:27:22 /home/mike> iduid=1004(mike) gid=1004(mike) groups=1004(mike)13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's13:27:29 /home/jim> iduid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mikePassword:(I type my own password, because this is sudo asking)13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/535939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24192/"
]
} |
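For the question itself, one sketch is to let the shell's trace output record the command line alongside its output (the lines prefixed with "+" come from set -x and land in the same file):
logfile="test_output_$(date -u +"%FT%H%MZ").txt"
{ set -x; pytest /awesome_tests -k test_quick_tests -n auto; set +x; } &> "$logfile"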
535,941 | I have a csv file: test_1,2,data,hi,cattest_2,3,4,5,6test_1,3,7,8,9 I want to delete column 3 of the rows which begin with test_1 . I used the cut command to delete column 3 but I do not know how to do it only for a row that begins with test_1 . | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mikePassword:(I type mike's password (or have him type it) and press Enter)13:27:22 /home/mike> iduid=1004(mike) gid=1004(mike) groups=1004(mike)13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's13:27:29 /home/jim> iduid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mikePassword:(I type my own password, because this is sudo asking)13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/535941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367383/"
]
} |
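For the CSV question, a GNU awk sketch (plain comma-separated data, no quoted fields assumed):
awk 'BEGIN { FS = OFS = "," }
     $1 == "test_1" { for (i = 3; i < NF; i++) $i = $(i + 1); NF = NF - 1 }   # shift later fields over column 3
     { print }' file.csv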
535,947 | I'm trying to configure nginx to return a http 410 ("Resource Gone") code for any path under / My config is below. With this config, if I request /410test, I get a standard nginx 404 Not Found page, and a response status code of 404. So I'm having trouble even getting a response of 410 for one specific path, much less, all paths. user www-data;worker_processes auto;pid /run/nginx.pid;include /etc/nginx/modules-enabled/*.conf;events { worker_connections 768; # multi_accept on;}http { server { location /410test { return 410 "this is my 410 test page"; } } sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*;} | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mikePassword:(I type mike's password (or have him type it) and press Enter)13:27:22 /home/mike> iduid=1004(mike) gid=1004(mike) groups=1004(mike)13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's13:27:29 /home/jim> iduid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mikePassword:(I type my own password, because this is sudo asking)13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/535947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367391/"
]
} |
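For the nginx question, a sketch of a server block that actually catches the requests and answers 410 for every path — listen/server_name must match the real site, otherwise one of the servers pulled in from sites-enabled wins:
server {
    listen 80 default_server;
    server_name _;
    location / {
        return 410 "this is my 410 test page";
    }
}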
535,961 | I'm trying to compile polybar , and I get a long compilation error which is related to xcb (apparently), I have the log file here ; I've read through the polybar wiki and I came upon the solution of downgrading xcb-proto to 1.11 , and so I followed through with the process, although I'm not really sure how to check ther version (the logs tell me that each X-extension has version 1.13 though?) Nonetheless I've tried compiling with both Clang and GCC using build.sh , all to no avail, my question is how I can downgrade packages: -- [X] xcb-randr (1.13.1)-- [X] xcb-randr (monitor support) (1.13.1)-- [X] xcb-composite (1.13.1)-- [X] xcb-xkb (1.13.1)[...] to version 1.11? EDIT I have tried to remove the libxcb* packages from my Debian, and before I wrote yes on the prompt to continue I noticed it would make redundant a lot of packages that would otherwise be beneficial to my system, so I don't see how I can hotplug a downgrade without removing the packages I want to downgrade to begin with. | One point that is missing from ilkkachu's answer is that elevating to root is only one specific use for su . The general purpose of su is to open a new shell under another user's login account. That other user could be root (and perhaps most often is), but su can be used to assume any identity the local system can authenticate. For example, if I'm logged in as user jim , and I want to investigate a problem that mike has reported, but which I am unable to reproduce, I might try logging in as mike , and running the command that is giving him trouble. 13:27:20 /home/jim> su -l mikePassword:(I type mike's password (or have him type it) and press Enter)13:27:22 /home/mike> iduid=1004(mike) gid=1004(mike) groups=1004(mike)13:27:25 /home/mike> exit # this leaves mike's login shell and returns to jim's13:27:29 /home/jim> iduid=1001(jim) gid=1001(jim) groups=1001(jim),0(wheel),5(operator),14(ftp),920(vboxusers) Using the -l option of su causes it to simulate a full login (per the man page). The above requires knowledge of mike 's password, however. If I have sudo access, I can log in as mike even without his password. 13:27:37 /home/jim> sudo su -l mikePassword:(I type my own password, because this is sudo asking)13:27:41 /home/mike> In summary, the reason the permissions on the su executable are as you show, is because su is a general-purpose tool that is available to all users on the system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/535961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367221/"
]
} |
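On the downgrade question itself, apt can install an explicit older version when one is still offered by a configured repository or snapshot; a sketch with an illustrative package and version string:
apt-cache policy libxcb-randr0-dev            # list the versions apt can actually install
sudo apt install libxcb-randr0-dev=1.11-1     # request that exact version (adjust to one that is listed)
sudo apt-mark hold libxcb-randr0-dev          # stop apt from upgrading it again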
536,125 | I try to copy 2 or more files from one directory to another with cp using an array. I executed: files=( LocalSettings.php robots.txt .htaccess ${domain}.png googlec69e044fede13fdc.htm) filenames indented with tabulations; I aim to execute afterwards: cp -a "source_path/${files[@]}" "/destanation_path" my problem is that while testing the variable itself, echo $files returned only the first filename LocalSettings.php and not the full list of files. How would you explain this? Related: cp in multi line fashion ; | It's Bash feature described in man bash : Referencing an array variable without a subscript is equivalent to referencing the array with a subscript of 0. If you want to print all members of files array: echo "${files[@]}" Also described in man bash : ${name[@]} expands each element of name to a separate word. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
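Applying that to the copy step from the question (a sketch; source_path and the destination are placeholders):
for f in "${files[@]}"; do
    cp -a "source_path/$f" /destination_path/
done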
536,188 | (Mint 19.1, based on Ubuntu 18.04) I have a directory I frequently access but which has a long path. I am tired of typing it out, so I want to be able to easily jump to this directory. The simplest method I can think of is making an alias in .bashrc , say: alias goto_project="cd /projectdir" This works but only works if I want to use cd . I figure it would be more general to add a symlink to /projectdir in path so that I could globally use commands like cd project or mv file project (move a file to the dir) or some rsync call. I tried placing a symlink to the directory into /usr/local/bin (I used ln -s /projectdir /usr/local/bin/projects ). However, this doesn't seem to enable the use of cd project as expected. For instance, calling which projects produces nothing. Is this approach not possible? Maybe because it would potentially produce conflicts? | Aliases are for commands - what you need is a simple variable that references your long directory name. Add something like this to your ~/.bashrc: shortdir="/super/long/directory/name" Now, commands like ls "$shortdir" or du "$shortdir" will give you what you want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175112/"
]
} |
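Usage then looks like the sketch below; a symlink in the home directory is another option if typing the $ feels awkward (names are examples):
cd "$shortdir"
mv somefile "$shortdir"/
ln -s /super/long/directory/name ~/project   # one-time setup
cd ~/project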
536,260 | Since of the latest updates (within the last two weeks) skypeforlinux stopped working and the only thing I find in the logs is [ 324.575813] traps: skypeforlinux[2487] trap int3 ip:555cb8dab847 sp:7fff797c57b0 error:0 in skypeforlinux[555cb6e96000+5016000] .A Google search did not return anything useful, most results deal with an invalid opcode rather than the int3 trap. OS is kali-rolling 2019.3 , no idea which version Skype is since even skypeforlinux --help fails. I tried reinstalling skypeforlinux , I tried running it as non-root as well as root user, I've upgraded everything and rebooted the system a couple of times but nothing fixed the issue. Does anyone have suggestions how to fix the issue or at least get more information to figure out what could be the culprit here? As requested here's the apt-cache output: skypeforlinux: Installed: 8.51.0.86 Candidate: 8.51.0.86 Version table: *** 8.51.0.86 500 500 https://repo.skype.com/deb stable/main amd64 Packages 100 /var/lib/dpkg/status 8.51.0.72 500 500 https://repo.skype.com/deb stable/main amd64 Packages 8.50.0.38 500 500 https://repo.skype.com/deb stable/main amd64 Packages 8.49.0.49 500 500 https://repo.skype.com/deb stable/main amd64 Packages 8.48.0.51 500 500 https://repo.skype.com/deb stable/main amd64 Packages Looking at the log $HOME/.config/skypeforlinux/logs/skype-startup.log I see one single entry: [7784:0821/103123.389602:FATAL:atom_main_delegate.cc(207)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180. I get it when running skypeforlinux as root as well as running it with a non-root user. | So, your skypeforlinux version is 8.51.0.86, the current up-to-date version at this moment - which was released fairly recently. In fact, I have the exact same version on my Debian 10 system, and it works just fine. The int3 is a x86 processor instruction that is used to implement debugging breakpoints. But in your case, the int3 is encountered while skypeforlinux is not being run under a debugger, so the int3 trap vector points to a default kernel routine, which is essentially equivalent to sending a SIGTRAP signal to the program. Why does the skypeforlinux program code include int3 instructions in a production version with no debugger present? Only the people at Microsoft with access to the source code of skypeforlinux could answer that without a significant reverse-engineering effort. Note that Microsoft only promises that skypeforlinux will work on Ubuntu, Debian, OpenSuSE and Fedora. It could be that this most recent version may have accidentally included some debugging code that only gets executed when some condition does not match any of the supported distributions - and causes Skype to crash because the expected debugging environment is not present. You could try downgrading Skype to the previous version (or any of the versions listed in the apt-cache policy output) and seeing if that works better for you: # apt install skypeforlinux=8.51.0.72Reading package lists... DoneBuilding dependency tree Reading state information... DoneThe following packages will be DOWNGRADED: skypeforlinux0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 0 not upgraded.Need to get 0 B/79.0 MB of archives.After this operation, 1,024 B of additional disk space will be used.Do you want to continue? 
[Y/n] If downgrading the package version helps, you might want to set the package on hold, so apt upgrade won't upgrade it again until you remove the hold: # apt-mark hold skypeforlinux You might then send a bug report on your experiences to Microsoft, but since they don't make any promises to support Kali, it might get ignored or assigned a very low priority. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205917/"
]
} |
536,302 | I am trying to pipe a stream from wget to tar and extract it to a specific location. The file is downloaded by wget but not extracted as desired with tar: war="/var/www/html"domain="example.com"downloaded_file="https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.0.tar.gz"wget -P "${war}" "${downloaded_file}" | tar -xzvf ${downloaded_file} --transform="s,^${downloaded_file},${domain}," set -x error: tar: unrecognized option: `--transform=s,^ https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.0.tar.gz,example.com ,' Why does piping the stream from wget to tar and extracting it to a specific location fail? | You can combine both commands and skip writing a file by instructing wget to write to its standard output: wget https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.0.tar.gz -O - |tar -xzvf - This will cause tar ’s output to be mixed with wget ’s progress indicator, because it will start extracting the tarball while wget is still downloading it, so you may well want to adjust the output options. You can use tar ’s -C option to control where the files are extracted: wget https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.0.tar.gz -O - |tar -xzvf - -C /var/www/html The target directory needs to exist before the command is run, so mkdir it if necessary first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
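Closer to the original goal of renaming the top-level directory to $domain while extracting, a GNU-tar sketch (it reuses the variables defined in the question and assumes the archive's top directory is mediawiki-1.33.0):
mkdir -p "$war"
wget -qO- "$downloaded_file" |
    tar -xzv -C "$war" --transform "s,^mediawiki-1.33.0,$domain," -f -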
536,320 | print the name of the most recently accessed file in the directory /var/run/log/ whose name is of the form access-<DDD>.log (here <DDD> represents exactly 3 digits; thus the filename consists of access- followed by exactly 3 digits, followed by .log | You can combine both commands and skip writing a file by instructing wget to write to its standard output: wget https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.0.tar.gz -O - |tar -xzvf - This will cause tar ’s output to be mixed with wget ’s progress indicator, because it will start extracting the tarball while wget is still downloading it, so you may well want to adjust the output options. You can use tar ’s -C option to control where the files are extracted: wget https://releases.wikimedia.org/mediawiki/1.33/mediawiki-1.33.0.tar.gz -O - |tar -xzvf - -C /var/www/html The target directory needs to exist before the command is run, so mkdir it if necessary first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367724/"
]
} |
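A short sketch for the question above (-u combined with -t makes ls sort by access time, newest first; fine as long as the file names contain no newlines):
ls -tu /var/run/log/access-[0-9][0-9][0-9].log 2>/dev/null | head -n 1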
536,472 | I have a piece of headphones with a USB-C jack. They're the ones that came with the Google Pixel 3. I'm using Archlinux and I'd love to use them with the USB-C jack on my notebook. However, I don't know what steps I need to undertake to make it work. Here's what I did so far. lsusb shows the device: Bus 003 Device 005: ID 18d1:5033 Google Inc. dmesg shows when the device is being connected: [ 2520.298434] usb 3-1: new full-speed USB device number 5 using xhci_hcd [ 2520.694851] usb 3-1: New USB device found, idVendor=18d1, idProduct=5033, bcdDevice= 0.20 [ 2520.694857] usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [ 2520.694861] usb 3-1: Product: Pixel USB-C earbuds [ 2520.694864] usb 3-1: Manufacturer: Google [ 2520.694867] usb 3-1: SerialNumber: 00000089MAJ24397 However, aplay -l does not show any device that is reflecting my headphones. From this, I assume that the problem is ALSA not recognizing the new USB device as a pair of headphones. I would assume that I now have to edit some ALSA config file to teach ALSA about this device. How can I find out what exactly I need to edit in which file to make ALSA discover my headphones? EDIT It seems like a full system update has magically resolved the issue and the buds now work out-of-the box. All help posted here is highly appreciated though for future reference. | Please connect the earbuds to the computer and run this command: lsusb -d 18d1:5033 -v | grep Class If the output includes the words Audio and Streaming anywhere, there is hope: the earbuds have a built-in DAC and may be recognizable as a standard USB audio device. But if that word is not visible, it might be that the only digital electronics in the earbuds is a tiny chip that basically tells "I'm a set of USB-C passive earbuds; feel free to connect specific USB-C pins directly to an analog audio output." And as the standard for was not finished by the time the USB-C earbuds were released to the market, there seems to be a few competing solutions. Essentially, the headphones may be directly wired to specific pins in the USB-C connector, and the controlling device needs to be able to switch those pins to analog audio output mode. The USB-C controller of your smartphone can clearly do that; but I very much doubt that many notebooks have the necessary wiring between the audio chip and the USB-C controller to route the analog audio signal out of USB-C. See also: https://www.reviewgeek.com/11101/dont-bother-with-usb-c-headphones-for-now/ If the USB device information does include the Audio class and the Streaming subclass, then the snd-usb-audio module should be getting loaded. If it works correctly, the audio device should get listed in /proc/asound/cards ; after that, the only remaining problem might be getting a correct PulseAudio profile assigned to the device (if you are using PulseAudio that is). But if the snd-usb-audio module fails to use the device, then it might have some hardware quirks that need to be accounted for; the module already has a number of quirk options you can try. In this case, and particularly if you find that adding a specific quirk option makes the device work, you should also email the lsusb -v output of the device and a description of your findings to the Linux audio driver developers, so that the right quirk(s) will get automatically applied to your device model in future kernel versions. 
If your distribution uses PulseAudio and it does not see your device after all these checks, there's one more thing that could be wrong: assigning the right PulseAudio profile to the device. The profiles are located at /usr/share/pulseaudio/alsa-mixer/profile-sets/ directory (at least on Debian/Ubuntu), and the default.conf profile has very descriptive comments in it. You might find that one of the non-default profiles is applicable to your device, or you might have to write a new profile. To assign a particular PulseAudio profile to an audio device, you can use a udev rule. It should be something like: SUBSYSTEM=="sound", <conditions to match only your device>, ENV{PULSE_PROFILE_SET}="profilename.conf" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88115/"
]
} |
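Filled in with the vendor/product IDs from the question, such a rule might look like this sketch (the file name and profile name are illustrative):
# /etc/udev/rules.d/91-pulseaudio-pixel-buds.rules
SUBSYSTEM=="sound", ATTRS{idVendor}=="18d1", ATTRS{idProduct}=="5033", ENV{PULSE_PROFILE_SET}="pixel-buds.conf"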
536,483 | First of all, I saw this question and all the others with similar answer and it does not seem to work for me. I use Ubuntu 19.04 and GNOME 3.32.1 I downloaded Postman (a code testing tool) and I want to be able to launch it from the dock. When I launch it from the shortcut, it appears in the dock: But when I right-click it, I cannot add it to my favorites, which usually allows me to pin an application to the dock: I also tried to add a desktop file to /usr/share/applications and ~/.local/share/applications and make it executable, and then restarting gnome, but it did nothing. Desktop file : [Desktop Entry]Type=ApplicationEncoding=UTF-8Name=PostmanIcon=/home/[my user name]/Utilities/Postman/app/resources/assets/icon.pngExec=/home/[my user name]/Utilities/Postman/app/PostmanTerminal=falseCategories=Development; | Ok so I was able to add it to the dock following these steps. Add this desktop file to ~/.local/share/applications as postman.desktop : [Desktop Entry]Type=ApplicationName=PostmanIcon=/home/[my user name]/Utilities/Postman/app/resources/app/assets/icon.pngExec=/home/[my user name]/Utilities/Postman/app/PostmanTerminal=falseCategories=Development; then searching for "postman" via "Activities" (it only shows up there if you already created the .desktop file) and right click it > add to favorites. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/536483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367850/"
]
} |
536,490 | I am trying to overwrite a file with command output, but only if there is any output. That is, I usually want mycommand > myfile but if this would overwrite myfile with empty data, I wish to retain the old version of myfile . I thought that something using ifne should be possible, à la mycommand | ifne (cat > myfile) but that does not work ... An indirect approach mycommand | tee mytempfile | ifne mv mytempfile myfile works, but I consider the use of that temp file inelegant. Q: Why does my first idea not work? Can it be made to work? Or is there another nice and perhaps completely different solution for my original problem? | Your first approach works, you just need to give a command to ifne (see man ifne ): NAME ifne - Run command if the standard input is not emptySYNOPSIS ifne [-n] commandDESCRIPTION ifne runs the following command if and only if the standard input is not empty. So you need to give it a command to run. You're almost there, tee will work: command | ifne tee myfile > /dev/null If your command doesn't produce an enormous amount of data, if it's small enough to fit in a variable, you can also do: var=$(mycommand)[[ -n $var ]] && printf '%s\n' "$var" > myfile | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/536490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55031/"
]
} |
536,516 | I've been reading other questions and answers, but none have shown me exactly why mounting is absolutely necessary. They say the drive needs a mounted directory for association with the physical drive, but why, for example, can't files be copied from /home/[user] to /dev/sdb (a USB drive) directly? Does all data copied to the directory where the drive is mounted just get copied to the drive itself immediately after? | /dev/sd* can be accessed without mount. Indeed, I have written a file directly to /dev/sdb before with success, but what I wrote to was the raw USB disk. The file I wrote was a disk image of an Ubuntu install disk when I then used to make a bootable USB stick for installing Ubuntu on a new computer. You can write to /dev/sda as if it were a file, but it's writing to the disk raw. If you try to write a second file, it will write on top of the first file, and, unless it's something like a disk image, most other computers/software won't know what to do with it. What mounting does is attempt to add a file system driver in-between the raw disk and your file system layout. Part of the mount process is selecting the correct filesystem, FAT32, Ext4, NTFS, etc. and initializing that driver to understand the contents of the disk you are mounting. Now, it interprets the disk as a structured file system with folders, files, and metadata about those folders/files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330309/"
]
} |
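The two modes of access side by side, as a sketch (/dev/sdb is an example device — double-check the device name before writing to it):
sudo dd if=ubuntu.iso of=/dev/sdb bs=4M status=progress   # raw write: bytes land directly on the disk, no filesystem involved
sudo mount /dev/sdb1 /mnt                                 # mounted: the filesystem driver interprets the disk's structure
cp ~/notes.txt /mnt/                                      # ...so ordinary file operations then work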
536,529 | so when I google all kinds of variations on this, all I see are results on how to update the GitHub app itself or to update the version of the OS, and I can't find instructions on how to update individual apps that I have installed using git clone. so if I downloaded an app like bettercap or hcxtools and I get a response that I should update from the master to fix an error how is that done? if I just go and overwrite the original folder like for example my oh my zsh or my powerlevel9k won't it wipe out all my settings and customizations that I like??? | /dev/sd* can be accessed without mount. Indeed, I have written a file directly to /dev/sdb before with success, but what I wrote to was the raw USB disk. The file I wrote was a disk image of an Ubuntu install disk when I then used to make a bootable USB stick for installing Ubuntu on a new computer. You can write to /dev/sda as if it were a file, but it's writing to the disk raw. If you try to write a second file, it will write on top of the first file, and, unless it's something like a disk image, most other computers/software won't know what to do with it. What mounting does is attempt to add a file system driver in-between the raw disk and your file system layout. Part of the mount process is selecting the correct filesystem, FAT32, Ext4, NTFS, etc. and initializing that driver to understand the contents of the disk you are mounting. Now, it interprets the disk as a structured file system with folders, files, and metadata about those folders/files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367881/"
]
} |
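For the update question itself, a sketch of the usual in-place update of a git-cloned tool (paths are examples; it needs a reasonably recent git, and personal settings such as ~/.zshrc live outside the clone, so they are left alone):
cd ~/tools/bettercap
git pull --rebase --autostash   # fetch the new upstream commits; any local edits are stashed and re-applied automatically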
536,657 | How can I refer to a regex group in awk regex? For example, if I have a regex group (\w) , how can I refer to it later in the same regex like (\w)\1 ? Does awk support this feature? Below example doesn't work. # In this example, I want to change aa to aaa and cc to ccc.echo ab aa cc de mn | gawk '{print gensub(/(\w)\1/, "\\1\\1\\1", "g")}'# The result is: ab aa cc de mn# The expected result is: ab aaa ccc de mn | $ echo ab aa cc de mn | perl -pe 's/(\w)\1/\1\1\1/g'ab aaa ccc de mn Sometimes you just have to accept that there are some things awk can't do, but perl can. On the bright side, if you're skilled enough with awk to be using gensub and wanting to do back-references, you should find perl to be a doddle. i.e. if you can write awk, you can write perl. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8839/"
]
} |
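GNU sed's basic regular expressions do support back-references (and \w as an extension), so a sed route also exists:
echo ab aa cc de mn | sed 's/\(\w\)\1/&\1/g'
# -> ab aaa ccc de mn   (& is the whole match "aa"; \1 appends the repeated character once more)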
536,669 | I want to copy (overwrite) folders from one directory to another set of directories, only if the folders in directory1 exist in directory2. For example, I have stored some folders in my home directory: home |admin |updates |package1 |package2 |package3 I also have another folder with builds of an app: home |builds |build1 |packages |package1 |package2 |package3 |build2 |packages |package1 |package3 |build3 |rev1 |packages |package1 |rev2 |packages |package2 I want the 'package1', 'package2', 'package3' folders found in '/home/updates/package' directory to be copied over to the 'packages' folders found recursively in '/home/builds' directory, but only if the folders already exist. So in the example above, 'package1'/'package2'/'package3' would be copied into '/home/builds/build1/packages'. Only 'package1' would be copied into '/home/builds/build3/rev1/packages' ('package2'/'package3' would not because it doesn't exist there). In addition, 'build1'/'build2'/'build3' may have different owner/group permissions so I would like to retain the relative target directory's permissions. | $ echo ab aa cc de mn | perl -pe 's/(\w)\1/\1\1\1/g'ab aaa ccc de mn Sometimes you just have to accept that there are some things awk can't do, but perl can. On the bright side, if you're skilled enough with awk to be using gensub and wanting to do back-references, you should find perl to be a doddle. i.e. if you can write awk, you can write perl. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/368029/"
]
} |
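For the copy-if-present question, a sketch (directory layout as in the example, run from /home; adjust the updates path to where the master copies actually live, and note that ownership/permission handling may need tuning, e.g. cp --no-preserve or rsync):
find builds -type d -path '*/packages/*' -prune | while IFS= read -r target; do
    pkg=$(basename "$target")
    [ -d "updates/$pkg" ] && cp -r "updates/$pkg/." "$target/"   # only overwrite packages that already exist
done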
536,675 | I'm currently using watch head -n 17 * which works, but also shows all lines up to the 17th. Basically, I would like to only show the last line for each file that is shown with my current approach. How can I achieve that? Example For the sake of example, let's reduce the line nr. to 7. So: Example file: 12345678 this line: watch head -n 7 * outputs 1234567 where I want: 7 | With GNU awk : watch -x gawk ' FNR == 17 {nextfile} ENDFILE {if (FNR) printf "%15s[%02d] %s\n", FILENAME, FNR, $0}' ./* Which gives an output like: ./file1[17] line17 ./short-file2[05] line 5 is the last Note that the ./* glob is expanded only once at the time watch is invoked. Your watch head -n 17 * was an arbitrary command injection vulnerability as the expansion of that * was actually interpreted as shell code by the shell that watch invokes to interpret the concatenation of its arguments with spaces. If there was a file called $(reboot) in the current directory, it would reboot. With -x , we're telling watch to skip the shell and execute the command directly. Alternatively, you could do: watch 'exec gawk '\'' FNR == 17 {nextfile} ENDFILE {if (FNR) printf "%15s[%02d] %s\n", FILENAME, FNR, $0}'\'' ./*' For watch to run a shell which would expand that ./* glob at each iteration. watch foo bar is in effect the same as watch -x sh -c 'foo bar' . When using watch -x , you can specify which shell you want and for instance pick a more powerful one like zsh that can do recursive globbing and restrict to regular files: watch -x zsh -c 'awk '\''...'\'' ./**/*(.)' Without gawk , you could still do something like: watch ' for file in ./*; do [ -s "$file" ] || continue printf "%s: " "$file" head -n 17 < "$file" | tail -n 1 done' Giving an ouput like: ./file1: line17./short-file2: line 5 is the last But that would be a lot less efficient as it implies running several commands per file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/536675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46668/"
]
} |
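As an aside to the gawk answer above, the same awk program can be run once on its own, outside watch, to check the output format before putting it on a refresh loop (the ./* glob is just an example and assumes the files of interest sit in the current directory):

gawk 'FNR == 17 {nextfile} ENDFILE {if (FNR) printf "%15s[%02d] %s\n", FILENAME, FNR, $0}' ./*

Once the output looks right, the watch -x invocation from the answer can be wrapped around it unchanged.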
536,694 | Is there an easy way to show a full list of all the ports that have been opened using firewalld ? I know the command firewall-cmd --list-all , but that just shows service names, not the ports that those services define as being open. For example: [root@myserver log]# firewall-cmd --list-all dmz (active) target: default icmp-block-inversion: no interfaces: ens160 sources: services: ssh squid my-icap ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: I know I can go into the definition files for each of these services to see what ports they are defining as open, but it seems like there should be a single-line way to do this, and I'm just missing it. And I'm not looking for netstat : that will tell me if something is listening on a port, which is a different question from whether that port is accessible from another host. | I've also been looking for this; currently I came up with this bash one-liner: for s in $(firewall-cmd --list-services); do firewall-cmd --permanent --service "$s" --get-ports; done; and for regular ports just use $ firewall-cmd --list-ports or just $ firewall-cmd --list-all | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48100/"
]
} |
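Building on the one-liner above, a small variant (a sketch only, using the same firewall-cmd options as the answer) prints each service name next to the ports it opens, which is easier to read when many services are enabled:

for s in $(firewall-cmd --list-services); do printf '%s: ' "$s"; firewall-cmd --permanent --service "$s" --get-ports; done

The output then pairs every service with its port list, one line per service.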
536,727 | When I run git branch (from bash or csh), it automagically pipes the output through less . However, with only a few branches in the repository this is beyond unnecessary, it is annoying, as the branch listing disappears once I quit less. Checking the ~/.gitconfig file and the local .git/config files finds nothing about a pager or anything else that would cause this. Otherwise, nothing I've found in web searches has been helpful or promising. Why is this happening, and what (if anything) can I do to make less run when needed (e.g. when doing a git log when there's a lot of history) but not otherwise (like a git branch with only 2 or 3 branches)? | You can set the following: git config --global core.pager 'less -FRX' This will ensure that less will Exit if the entire file can be displayed on the first screen ( F ) Output the raw control characters for terminal formatting ( R ) Don't send the init/de-init strings to the terminal - avoids clearing the screen on exit ( X ) Edit: Removed the S option (chop long lines) based on Peter A. Scheider's comment | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/536727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46948/"
]
} |
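A narrower alternative to the core.pager change above, hedged on having a git release recent enough to support per-command pager settings: the pager can be disabled for git branch alone, leaving git log paged as before.

git config --global pager.branch false

This keeps the short branch listing on the terminal without touching the pager behaviour of longer outputs.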
536,729 | All experts, I have two types of files in the same directory ex1) record1.txt (record2, record3, record4 ...) 11111 absda qwedc11112 uiyds dqeds11113 eqwev jfsec ... ex2) Summary1.txt (Summary2, Summary3, Summary4 ...) ----some data is written---- ..........***RESULT 111.114 30.344 90.3454*** OTHERNUMBER#1 OTHERNUMBER#2 ..... .......... All I want to do is extract RESULT X(number) Y(number) Z(number) from Summary#.txt. And then, put those positions into the corresponding record#.txt, but I want to add some information, like this X Y Z111.114 30.344 90.345911111 absda qwedc11112 uiyds dqeds11113 eqwev jfsec ... So, I want my final file, record#.txt, to look like the above. I tried sed and cat... all failed. Thanks in advance! | You can set the following: git config --global core.pager 'less -FRX' This will ensure that less will Exit if the entire file can be displayed on the first screen ( F ) Output the raw control characters for terminal formatting ( R ) Don't send the init/de-init strings to the terminal - avoids clearing the screen on exit ( X ) Edit: Removed the S option (chop long lines) based on Peter A. Scheider's comment | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/536729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366887/"
]
} |
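Purely as an illustrative sketch of the extraction described in the question above (the file names, the field positions and the surrounding asterisks are all assumptions taken from the question's example), the RESULT numbers could be pulled out with awk and prepended, together with an X Y Z header, to the matching record file:

awk '/RESULT/ {gsub(/\*/, ""); print "X Y Z"; print $2, $3, $4}' Summary1.txt | cat - record1.txt > record1.new

The gsub call strips the asterisks so the three numbers land in fields 2 to 4; the result is written to a new file rather than overwriting record1.txt.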
536,752 | I installed Debian 10 from the minimal installer I downloaded from this link , and I chose the GNOME Desktop Environment. Everything seemed to be fine during the installation. Now, I found out my desktop has no icons and there is no way to put any kind of file, folder or link on the desktop, but navigating to the desktop through the file manager correctly shows files and folders. This is all I have when I right-click on the desktop. What could I do to get my desktop icons back? I already searched gnome-tweaks and the settings for some option hiding the icons, but I didn't find anything. I tried solutions posted on this question but they did not work. Command gnome-shell --version outputs GNOME Shell 3.30.2 Command uname -a outputs Linux debian 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux | You can set the following: git config --global core.pager 'less -FRX' This will ensure that less will Exit if the entire file can be displayed on the first screen ( F ) Output the raw control characters for terminal formatting ( R ) Don't send the init/de-init strings to the terminal - avoids clearing the screen on exit ( X ) Edit: Removed the S option (chop long lines) based on Peter A. Scheider's comment | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/536752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354269/"
]
} |
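A hedged pointer related to the question above: in GNOME 3.30 the file manager no longer draws icons on the desktop, so a shell extension is usually needed for them to reappear. The package name below is an assumption about what Debian 10 ships and should be checked with apt search desktop-icons first.

sudo apt install gnome-shell-extension-desktop-icons

After installing, the extension can be enabled from gnome-tweaks under Extensions, followed by logging out and back in.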
536,762 | I currently have 50 cron scripting jobs created with crontab -e. These scripts check that various services are running. Sometimes, when I'm testing the functionality of one service, this requires me to remove a lot of cron jobs. This sets off the bells and whistles, shooting out constant alerts about an outage. My current workaround is to remove all of the cron jobs manually by opening crontab -e (with nano as the editor) and pressing CTRL+K from the top of the list (not very fun to do). I want to know: is it possible to disable the cron jobs quickly using a command, rather than deleting all the jobs and placing them back in later on? Or can I create an empty text file and run a command to have cron read in that file and replace all the current jobs with that empty text file? Once I'm ready to use all my cron jobs again, I'd simply have it read in a text file that contains all my listed cron jobs. | Save your crontab to a file: crontab -l > my-crontab Delete your crontab: crontab -r Then load back the crontab from the file: crontab my-crontab | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/536762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342593/"
]
} |
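A hedged variant of the save/remove/restore sequence above (a sketch; keep the my-crontab backup from the first step regardless): the jobs can be commented out in bulk instead of removed, so the crontab itself never disappears.

crontab -l | sed 's/^[^#]/#&/' | crontab -

Restoring is then the same crontab my-crontab step shown in the answer.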