source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
274,617 | Suppose I have some text like this (output of objdump -d ): 0: 0f a2 cpuid 2: a9 01 00 00 00 test eax,0x1 7: 74 01 je a <myFunc+0xa> 9: c3 ret a: 0f 0b ud2a I'd like to replace the text like ^ +[0-9a-f]+: with corresponding number of spaces (so as to preserve the length), but only if the part before : isn't mentioned anywhere else (as a word, i.e. enclosed in word boundaries). E.g. in the above example the labels 0 , 2 , 7 , 9 would be replaced with a space and a would remain intact (since it's mentioned in the third line). Here's how the above example would look after processing: 0f a2 cpuid a9 01 00 00 00 test eax,0x1 74 01 je a <myFunc+0xa> c3 ret a: 0f 0b ud2a Is there any nicer way to do it in shell/vim than counting occurrences of the labels and then processing the lines based on these counts? My current code processes 2300-line file in 3 minutes (on Intel Atom CPU), which is much too long: #!/bin/bash -eif [ $# -ne 2 ]; then echo "Usage: $0 infile outfile" >&2 exit 1fifile="$1"outfile="$2"cp "$file" "$outfile"labelLength=$(sed -n '/^ \+\([0-9a-f]\+\):.*/{s@@\1@p;q}' "$file"|wc -c)replacement=$(printf %${labelLength}c ' ')sed 's@^ \+\([0-9a-f]\+\):.*@\1@' "$file" | while read labeldo if [ $(grep -c "\<$label\>" "$file") = 1 ]; then sed -i "s@\<$label\>:@$replacement@" "$outfile" fidone | Use stat for that. In a GNU system: To get the username of the owner: stat -c '%U' file.txt To get the user ID (UID) of the owner: stat -c '%u' file.txt Assuming the file is file.txt . For FreeBSD and Mac OS X (thanks to @cas) : For username: stat -f '%Su' file.txt For UID: stat -f '%u' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27672/"
]
} |
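The three-minute runtime in the question comes from rerunning `grep` and `sed -i` over the whole file once per label. A single two-pass awk program avoids that; this is a hedged sketch (the input file name `disas.txt` is an assumption): pass 1 counts word-boundary tokens, pass 2 blanks a leading `label:` only when the label occurs exactly once, padding with spaces so the column alignment is preserved.

```shell
# Sample input, taken from the question.
cat > disas.txt <<'EOF'
   0:   0f a2                   cpuid
   2:   a9 01 00 00 00          test   eax,0x1
   7:   74 01                   je     a <myFunc+0xa>
   9:   c3                      ret
   a:   0f 0b                   ud2a
EOF

# Two-pass sketch: the file is named twice so awk reads it twice.
awk '
NR == FNR {                         # pass 1: count word-boundary tokens
    n = split($0, tok, "[^0-9a-zA-Z_]+")
    for (i = 1; i <= n; i++) if (tok[i] != "") cnt[tok[i]]++
    next
}
match($0, /^ *[0-9a-f]+:/) {        # pass 2: line starts with "label:"
    label = substr($0, 1, RLENGTH - 1)
    gsub(/ /, "", label)
    if (cnt[label] == 1)            # the label is mentioned nowhere else
        $0 = sprintf("%" RLENGTH "s", "") substr($0, RLENGTH + 1)
}
{ print }
' disas.txt disas.txt > blanked.txt
cat blanked.txt
```

On the sample this blanks the `0:`, `2:`, `7:` and `9:` labels and leaves `a:` intact, since `a` is also mentioned on the `je` line.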
274,656 | I have fully updated Linux Mint 17.3. I need to add a bunch of applications to startup. The problem is, the following dialog doesn't work - It won't add me an application from the list, nor it will add any custom command. Anyway, there must be other way I can add those applications manually, probably by editing some startup file? | I found it at: ~/.config/autostart/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/274656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
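The files in that directory are ordinary XDG `.desktop` entries, so an entry can also be created by hand. A minimal hypothetical example (the application path `/usr/bin/myapp` is a placeholder):

```shell
# Hypothetical manual autostart entry; /usr/bin/myapp is a placeholder.
# Files in ~/.config/autostart/ are ordinary .desktop entries read at login.
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My App
Exec=/usr/bin/myapp
EOF
```

Entries created by a working autostart dialog typically land in the same directory, so any existing file there can serve as a template.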
274,658 | After downloading the source code for Bash, I was browsing through the doc directory and came across the following files: bash.1 is a regular troff file used to build the man page . bash.0 is like a plain text version of the man page – only that it has the ^H backspace control character liberally distributed throughout it. These control characters are not displayed in the representation provided by the Git web interface but the actual file can be downloaded and examined in text editor such as Vim. Running the file command on bash.0 prints the following output: bash.0: ASCII text, with overstriking I’ve never come across this file format before and I was wondering what its purpose is and how it’s used. Searching the Web for the phrase “ASCII text, with overstriking” hasn’t been very enlightening. | Overstriking is a method used in nroff (see the Troff User’s Manual ) to offer more typographical possibilities than plain ASCII would allow: bold text (by overstriking the same character) underlined text (by overstriking _ ) accents and diacritics ( e.g. é produced by overstriking e with ’ ) and various other symbols, as permitted by the target output device. In bash , these .0 files are produced directly by nroff , with Makefile rules such as .1.0: $(RM) $@ -${NROFF} -man $< > $@ You can view such files using less ; it will process the overstriking sequences and replace them as appropriate: less bash.0 Originally nroff 's output targeted typewriter-style output devices, which would back up every time they received a backspace character; overstriking would produce the desired visual output. As pointed out by chirlu , striking the same character twice would usually result in a bolder appearance thanks to the inevitable misalignment of the successive strikes; the increase in the amount of ink deposited would also help. ( troff targeted typesetting machines.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/274658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22812/"
]
} |
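The encoding is easy to reproduce by hand, which also shows how to undo it. This sketch emits a bold `NAME` the way nroff would (character, backspace, same character again) and then strips the `<char><backspace>` pairs, which is roughly what `col -b` or `less` does for you:

```shell
# Bold text in nroff output is literally char, backspace, char.
printf 'N\bNA\bAM\bME\bE\n' > sample.0

# Strip each "<any char><backspace>" pair to recover plain text.
bs=$(printf '\b')
sed "s/.$bs//g" sample.0    # prints NAME
```

Running `file sample.0` should report it the same way as `bash.0`, i.e. ASCII text with overstriking.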
274,698 | How to search a log file in Linux and escape special characters like square brackets (i.e. [ and ] ) Can someone point me in the right direction on this? I less the log file as per below: less system001A.LOG Once I am in the log, I then press < (i.e. the smaller-than sign), and then I press forward-slash (i.e. / ) and type what I would like to search for: /ERROR [section_NAME] The problem is how to escape these brackets while I search? Because when I run this, it says no pattern/match is found, yet this in fact exists on the log. | The manual does mention you can switch off regular expression search for the whole search string by pressing ^R ( CTRL+R ) after you press / . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104489/"
]
} |
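The other option is to escape the brackets in the pattern itself, since `less` searches with regular expressions. The same escaped pattern works in `grep`; the log line below is invented for illustration:

```shell
# Fabricated sample log line matching the question's search target.
printf 'ERROR [section_NAME] disk full\nINFO all good\n' > system001A.LOG

# Backslash-escape the brackets so they are matched literally.
grep 'ERROR \[section_NAME\]' system001A.LOG
```

Inside `less`, typing `/ERROR \[section_NAME\]` finds the same line.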
274,713 | I need to do an exercise at the university, its all about reading 3 values and identify if there is any duplicated value, which value is the greatest and which one is the least, the idea is if you input any duplicated values the shell should print a message and stop doing the calculations but I can´t get this part done. here is the code: #!/bin/shecho " A value "read Aecho " B value "read Becho " C value "read C# Print the inputecho " INPUT "echo " A = $A "echo " B = $B "echo " C = $C "# search duplicated valuesif [[ $A -eq $B ]] || [[ $A -eq $C ]];thenecho " Duplicated values please check "elif [[ $B -eq $A ]] || [[ $B -eq $C ]];thenecho " Duplicated values please check "elif [[ $C -eq $A ]] || [[ $C -eq $B ]];thenecho " Duplicated values please check "fi# greatest valueif [[ $A -gt $B ]] || [[ $A -gt $C ]];thenecho " A $A +"elif [[ $B -gt $A ]] || [[ $B -gt $C ]];thenecho " B $B +"elif [[ $C -gt $A ]] || [[ $C -gt $B ]];thenecho " C $C +"fi# less valueif [[ $A -lt $B ]] || [[ $A -lt $C ]];thenecho " A $A -"elif [[ $B -lt $A ]] || [[ $B -lt $C ]];thenecho " B $B -"elif [[ $C -lt $A ]] || [[ $C -lt $B ]];thenecho " C $C -"fi Right now if I enter a duplicated code for exampleA=3B=5C=3 the code do the calculations and print that there exist duplicated values and I need just the message telling me that I have introduced duplicated values. How can I get this done ? Thank you for your help. | The manual does mention you can switch off regular expression search for the whole search string by pressing ^R ( CTRL+R ) after you press / . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150693/"
]
} |
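The early stop the question asks for can be sketched by merging the three (partly redundant) duplicate tests into a single `if` and exiting before the greatest/least comparisons ever run. For brevity this sketch takes the three values as arguments instead of `read`:

```shell
# Sketch of the requested early exit; values come from $1 $2 $3 here
# instead of `read`, purely to keep the demo non-interactive.
cat > check3.sh <<'EOF'
#!/bin/sh
A=$1 B=$2 C=$3
if [ "$A" -eq "$B" ] || [ "$A" -eq "$C" ] || [ "$B" -eq "$C" ]; then
    echo " Duplicated values please check "
    exit 1                       # stop: no further calculations
fi
echo " greatest/least checks would run here "
EOF

sh check3.sh 3 5 3 || echo "stopped early (exit status $?)"
```

With duplicates (3 5 3) the script prints the warning and exits with status 1; with distinct values it falls through to the comparisons.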
274,718 | I have 100 folders inside every folder I have one or two files named as the following: XXX_001_014_max.jpgXXX_001_024_max.jpg I saved the folders names in a file "list.txt" I ran the following code to rename the files inside the folders at once: #!/bin/bashfor i in $(cat list.txt); domv ${i}/XXX_001_014_max.jpg ${i}/image.jpgdone I aim to rename only the file XXX_001_014_max.jpg to image.jpg and if this file does not exist(i.e. I have only one file inside the folder), then I want the code to rename the second file XXX_001_024_max.jpg to image.jpg I know how to rename the files using the command mv but I am not familiar with the exact usage of if statement in this case. | The manual does mention you can switch off regular expression search for the whole search string by pressing ^R ( CTRL+R ) after you press / . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274718",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
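The missing `if` can be sketched with `[ -e ]` tests, preferring the first name and falling back to the second; quoting `"$i"` and reading the list with `while read` also keeps folder names with spaces safe. The demo folders below are fabricated:

```shell
# Demo setup: one folder with both files, one with only the second file.
mkdir -p d1 d2
touch d1/XXX_001_014_max.jpg d1/XXX_001_024_max.jpg d2/XXX_001_024_max.jpg
printf 'd1\nd2\n' > list.txt

# Sketch of the missing if: rename the preferred file when it exists,
# otherwise fall back to the alternative name.
while IFS= read -r i; do
    if [ -e "$i/XXX_001_014_max.jpg" ]; then
        mv "$i/XXX_001_014_max.jpg" "$i/image.jpg"
    elif [ -e "$i/XXX_001_024_max.jpg" ]; then
        mv "$i/XXX_001_024_max.jpg" "$i/image.jpg"
    fi
done < list.txt
```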
274,761 | I stumbled upon a weird behaviour of my BunsenLabs GNU/Linux (which is based on Debian). Sometimes I cannot turn off the OS. It doesn't matter whether I use sudo poweroff or the GUI approach. This is what I get after running sudo poweroff : Failed to start poweroff.target: Transaction is destructive Is there a workaround? Why is it happening? Here is the content of my /lib/udev/rules.d/70-power-switch.rules : ACTION=="remove", GOTO="power_switch_end"SUBSYSTEM=="input", KERNEL=="event*", SUBSYSTEMS=="acpi", TAG+="power-switch"SUBSYSTEM=="input", KERNEL=="event*", KERNELS=="thinkpad_acpi", TAG+="power-switch"LABEL="power_switch_end" | I've been searching for a while and finally found a solution. It worked for me. I don't know what triggers this weird behaviour though. This is the recipe for shutting down your Debian: Run ps aux | grep suspend . One of the results should look like this root 3651 0.0 0.0 8668 1716 ? Ss 07:18 0:00 /lib/systemd/systemd-sleep suspend Run sudo kill 3651 or whatever the pid of your result is. The first time, I was able to shut down the PC. The second time the PC went to sleep immediately after the kill command. It is suggested that you log out of the graphical desktop environment before killing the process. Source: Ubuntu Forums . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128489/"
]
} |
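Assuming the stuck process always matches the `systemd-sleep suspend` command line shown in the ps output above, the two steps collapse into one command (requires root, so it is shown here untested):

```shell
# One-liner equivalent of the ps/kill recipe above; assumes the hung
# process matches "systemd-sleep suspend".  Must be run as root.
sudo pkill -f 'systemd-sleep suspend'
```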
274,767 | I'm writing a quick tool to inspect the contents of a node.js node_modules folder or python virtualenv for native dependencies. As a quick first approximation to this I wrote the following command. find . | xargs file | awk '/C source/ {print $1} /ELF/ {print $1}' I'm okay with false positives but not false negatives (e.g. files literally containing the string ELF or C source can be marked suspicious.), but this script also potentially breaks on long file names (because xargs will split them) and file names containing spaces (because awk will split on whitespace) and file names containing newlines (because find uses newlines to separate paths). Is there a way to filter the paths generated by find by seeing if the output of file {} (possibly with some additional options to remove the path entirely from the output of file ) matches a particular regular expression? | I've been ducking for the solution for a while and finally I've found a solution. It worked for me. I don't know what triggers this weird behaviour though. This is the recipe for shutting down your Debian: Run ps aux | grep suspend . One of the results should be looking like this root 3651 0.0 0.0 8668 1716 ? Ss 07:18 0:00 /lib/systemd/systemd-sleep suspend Run sudo kill 3651 or whatever the pid of your result is. At the first time, I was able to shutdown the PC. The second time the PC went to sleep immediately after the kill command. It is suggested that you log out of the graphical desktop environment before killing the process. Source: Ubuntu Forums . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86874/"
]
} |
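One way to sidestep the splitting problems entirely: keep the names inside `find -exec` and test the output of `file -b` (description only, so the path itself can never produce a false match). A sketch over a fabricated demo tree, using an ELF binary copied from `/bin/ls`:

```shell
# Demo tree: one ELF binary (in a directory with a space), one text file.
mkdir -p "bindemo/has space"
cp /bin/ls "bindemo/has space/prog"
printf 'plain text\n' > bindemo/notes.txt

# Filter by `file -b` output; names never get word-split or matched.
find bindemo -type f -exec sh -c '
    for f in "$@"; do
        case $(file -b "$f") in
            *ELF*|*"C source"*) printf "%s\n" "$f" ;;
        esac
    done' find-sh {} +
```

This prints only the ELF binary, and survives spaces and newlines in file names since the names travel as argument vectors, never through a pipe.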
274,785 | ORIGINAL: I've just switched from an old Linux install (Debian squeeze) to a new one (Linux Mint 17.3) for my router (I am using a full desktop PC with a Linux install as my router). The Linux PC connects directly to my DSL modem and negotiates a PPPoE connection, then routes internet connections for all my other devices. As far as I can tell, I've set it up the same as the previous Debian install. I had a simple rc.local script to set up iptables, and it's the same on the new box and it's getting run (I have ensured this by running /etc/rc.local from a root console). I've also setup DNS on the new box. Most of the stuff works the same, but I am having one problem: the VPN on my Windows box no longer manages to connect. Looking at Wireshark, I notice that the initial PPTP packets seem to be successfully sent and received, but then there is a "Set-Link-Info" packet sent from my Windows box, and then the Windows box starts setting "PPP LCP Configuration Request" packets. At this point, it receives no response. The Wireshark capture going over my old Debian setup showed that at that point it got responses, eventually resulting in a "PPP LCP Configuration Ack". I really can't figure out what else to check. I don't understand why the PPTP connection is getting stuck here with my new setup. Any ideas as to how I can troubleshoot? 
Note: Here's the /etc/rc.local I have (it's the same on both installs) that sets up my entire iptables configuration: #!/bin/sh -eecho "*** Running rc.local ***"# First up, make sure 'net.ipv4.ip_forward=1' exists, uncommented, in /etc/sysctl.conf (just do this manually)echo "MAKE SURE net.ipv4.ip_forward=1 EXISTS, UNCOMMENTED, IN /etc/sysctl.conf OR NAT WILL NOT WORK!!!"echo ""# Firewall variables#WAN_IFACE="eth0" # At the time of writing, this is the NIC built into the moboWAN_IFACE="ppp0" # Virtual PPP interface when using PPPoELAN_IFACE="eth1" # At the time of writing, this is the extension NIC cardLAN_IP="192.168.1.1/24" # Class-C internal network# Setup iptables... flush existing rulesiptables -Fiptables -t nat -Fset +e# Set +e here to continue on error; iptables may give an error if this chain doesn't currently exist and we try to delete itiptables -X LOGGINGset -e# Set default policies for chainsiptables -P INPUT DROPiptables -P OUTPUT ACCEPTiptables -P FORWARD ACCEPT# Allow all local loopback accessiptables -A INPUT -i lo -p all -j ACCEPTiptables -A OUTPUT -o lo -p all -j ACCEPT# Allow incoming traffic for established connectionsiptables -A INPUT -i $WAN_IFACE -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPTiptables -A INPUT -i $WAN_IFACE -p udp -m state --state RELATED,ESTABLISHED -j ACCEPTiptables -A INPUT -i $LAN_IFACE -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPTiptables -A INPUT -i $LAN_IFACE -p udp -m state --state RELATED,ESTABLISHED -j ACCEPT# Allow incoming ICMP trafficiptables -A INPUT -p icmp -j ACCEPT#### Uncomment lines in this section to allow unsolicited incoming traffic on ports## Open common service ports## SSH#iptables -A INPUT -i $WAN_IFACE -p tcp --destination-port 22 -j ACCEPT## HTTP (8080 + 8081)#iptables -A INPUT -i $WAN_IFACE -p tcp --destination-port 8080 -j ACCEPT#iptables -A INPUT -i $WAN_IFACE -p tcp --destination-port 8081 -j ACCEPTiptables -A INPUT -i eth1 -p tcp --destination-port 8080 -j ACCEPTiptables -A INPUT 
-i eth1 -p tcp --destination-port 8081 -j ACCEPT# DNSiptables -A INPUT -i eth1 -p tcp --destination-port 53 -j ACCEPTiptables -A INPUT -i eth1 -p udp --destination-port 53 -j ACCEPT# Local Samba connectionsiptables -A INPUT -p tcp --syn -s $LAN_IP --destination-port 139 -j ACCEPTiptables -A INPUT -p tcp --syn -s $LAN_IP --destination-port 445 -j ACCEPT#### NAT setup - allow the NAT masqueradingiptables -t nat -A POSTROUTING -o $WAN_IFACE -j MASQUERADE# Allow forwarding of packets between the Internet and local network interface(s)iptables -A FORWARD -i $WAN_IFACE -o $LAN_IFACE -m state --state RELATED,ESTABLISHED -j ACCEPTiptables -A FORWARD -i $LAN_IFACE -o $WAN_IFACE -j ACCEPT# Logging setupiptables -N LOGGINGiptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix="IPTables-Dropped: " --log-level 4iptables -A LOGGING -j DROP# Logging; uncomment the below to log dropped input packets to syslog (verbose; only use for debugging!)echo "Uncomment the necessary lines in rc.local to enable iptables logging..."#iptables -A INPUT -j LOGGINGecho "*** Finished running rc.local ***"exit 0 UPDATE: I've been doing some more investigation into this, and the Wireshark analysis of what's being put out by my Linux router reveals one very significant difference. Here are the two screenshots, first from my old Debian box whose routing works, and second from my new Mint box where it doesn't: I've replaced the IP addresses with red and blue stripes to indicate my Linux router's public IP address, and the remote address with which we are communicating in order to establish a VPN connection through the PPTP protocol. Also, my Windows machine's IP address on the local network is outlined in green. The thing to notice is what happens after the PPTP protocol finishes and we switch to PPP LCP packets. On the Debian box, it continues to convert the source address of these packets to my public IP address before sending them out to the public internet. 
But on my Linux Mint box, the source address of packets being sent out is still kept as the local network address of my Windows machine that's trying to establish the connection. It's sending packets out to the internet with a local class C source address - of course they're not getting routed! The question is, what is causing the breakdown of NAT here on my Linux Mint box that isn't happening on the Debian box? The iptables are the same, the /etc/network/interfaces are the same. I don't know... but maybe this discovery will help someone here to help me with the problem. :-) | In order for NAT to work, you need to have a protocol-specific helper module loaded. By default, you're only going to have ones for TCP and UDP loaded. That's why you're seeing your PPTP traffic (which is actually PPP over GRE) escaping without NAT. That module is nf_nat_proto_gre , at least as of Linux 4.4. A similar story applies to connection tracking (without which GRE packets aren't going to be considered part of an established or related connection). That's nf_conntrack_proto_gre . It turns out that PPTP requires special handling too (I'd guess it embeds IP addresses inside the PPP negotiation, but I haven't checked). That special handling is provided by nf_nat_pptp and tracking of PPTP connections is provided by nf_conntrack_pptp . A modprobe ip_nat_pptp should get your VPN working. Dependencies between the modules will wind up loading all four. To make it continue working across boot, add nf_nat_pptp to /etc/modules . (No, I have no idea where this is documented, sorry!) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8831/"
]
} |
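Collected as commands (root required, so shown here untested; module names as given above, for Linux 4.x kernels):

```shell
# Load the PPTP NAT helper now; module dependencies pull in
# nf_conntrack_pptp and the GRE conntrack/NAT helpers as well.
modprobe nf_nat_pptp

# Make it persistent across reboots.
echo 'nf_nat_pptp' >> /etc/modules

lsmod | grep -E 'pptp|gre'    # verify the four modules are loaded
```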
274,820 | Assuming I have a directory with files as shown: $ lsfile0001.txt file0002.txt file0003.txt file0004.txt file0005.txt someotherfile.txt Lets say I want to run the following command: $ cat file0001.txt file0002.txt file0003.txt file0004.txt file0005.txt I could achieve this using a bash shortcut as follows to auto-fill the file names: cat file000 ESC * Now would it be possible to use a shortcut in a similar way to only autofill according to some regex (regular expression)? For example: cat file000[1-3] ESC * to get: $ cat file0001.txt file0002.txt file0003.txt Edit: The regex I should have used above for this example to make more sense: file000[1-3].txt or file000[1-3]* Just to be clear my question is about how to auto-fill on the bash with regex. And NOT how I can cat some files together using a bash script or for / while statements using regex. | The feature you are looking for is there. You are just missing a * in your example. Type cat file000[1-3]* ESC * and it should work. I think this is the case because the readline function insert-completions (which is bound to ESC * ) will not expand the glob pattern if it does not match any files. And without the last * it does not match the files. You can read about this in the man page , section "EXPANSION" subsection "Pathname Expansion". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79875/"
]
} |
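Since ESC * simply inserts whatever the glob expands to, the pattern can be checked with plain expansion. This reproduces the question's listing in a fabricated subdirectory:

```shell
# Recreate the question's directory listing.
mkdir -p globdemo
touch globdemo/file0001.txt globdemo/file0002.txt globdemo/file0003.txt \
      globdemo/file0004.txt globdemo/file0005.txt globdemo/someotherfile.txt

# The same glob ESC-* would expand; note the trailing * so it matches.
echo globdemo/file000[1-3]*
# -> globdemo/file0001.txt globdemo/file0002.txt globdemo/file0003.txt
```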
274,845 | Say I need a Debian with Kernel 3.2.63-2+deb7u1 i686. How do I find the corresponding Debian Version? I assume it would be Debian 7 because this document states that Debian 7 has Kernel version 3.2. Yet, there are several iso images I can download, e.g. debian-7.8.0-i386-netinst.iso , debian-7.9.0-i386-netinst.iso , etc. How do I know which of these isos will result in a system with the exact kernel version 3.2.63-2+deb7u1 I need? Or, maybe, is this a matter of installing the .deb corresponding to the specific version, i.e. up- or downgrading the kernel as needed? | The Wheezy changelog lists all the package updates in each point release. This shows that Debian 7.7 was released with 3.2.63-2, while Debian 7.8 was released with version 3.2.65-1. So you won't find an installer image with the exact version you're looking for. But you can find the relevant kernel packages in the snapshots ; this will allow you to install the version you're after. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112606/"
]
} |
274,868 | The command that I am using: find . -type f -name "*.sql" -exec grep -i -l 'schema_name.' {} + What I want to search is all the files which contain schema_name. .But the find command is ignoring the last . and is only looking for schema_name instead of schema_name. | That's grep issue, not find . grep matches pattern using regular expression by default, the pattern schema_name. means any character follows the string schema_name . If you want to match the dot . literally, you have to escape it with a backslash \ : find . -type f -name "*.sql" -exec grep -il 'schema_name\.' {} + or using -F option: find . -type f -name "*.sql" -exec grep -Fil 'schema_name.' {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164700/"
]
} |
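The difference is easy to demonstrate on a two-line sample (the file name is made up):

```shell
# One line where the dot position holds some other character,
# one line with a literal dot.
printf 'schema_nameX\nschema_name.\n' > t.sql

grep -c 'schema_name.'  t.sql    # 2: the unescaped dot matches any character
grep -c 'schema_name\.' t.sql    # 1: the escaped dot matches only "."
grep -cF 'schema_name.' t.sql    # 1: -F disables regex matching entirely
```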
274,916 | Is there a way to call find recursively? I would like to search for files matching a given pattern only in directories matching another pattern. Basically: for each (sub)directory D matching "*.data"do for each file F in directory D (with subdirs) matching "*.important.txt" do run command foo on F donedone Now, if I leave out the innermost requirement ( run command foo in F ), it is pretty simple: find . -type d -name "*.data" \ -exec find \{\} -type f -name "*.important.txt" \; However, I haven't found a way to pass a command to the inner find .E.g. the following prints out find: missing argument to `-exec' each time the inner find is called: find . -type d -name "*.data" \ -exec find \{\} -type f -name "*.important.txt" \ -exec "foo \{\} \;" \; Any solution should be posix compliant (runnable within a /bin/sh script), esp. I am not looking for solutions that wrap the inner find into a separate shell-script wrap the inner find into a bash-function | To run find on its own result, you can use the -c argument to sh (or bash ) to prevent the outer find command from treating the inner {} specially. However, you then need to pass the result from the outer find as an argument to sh , which can be expanded with $0 : find . -type d -name "*.data" \ -exec sh -c 'find "$0" -type f -name "*.important.txt" -exec echo \{\} \;' \{\} \; Note: $0 should be quoted ( "$0" ) to prevent issues with directory names containing whitespace. This is not really a "recursive" solution, because it doesn't allow arbitrarily deep nesting without some hairy escaping, but it does support the two levels of find -exec s you asked for in your example. If your example is similar to your actual problem, you might also experiment with the -path argument to find instead: find . -path '*.data/*important.txt' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47009/"
]
} |
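The quoting pattern can be exercised on a throwaway tree to confirm that only files inside `*.data` directories are reported:

```shell
# Throwaway tree: a matching file inside a .data directory, and a
# decoy with the same name outside.
mkdir -p demo/a.data demo/b.other
touch demo/a.data/x.important.txt demo/b.other/y.important.txt

find demo -type d -name "*.data" \
    -exec sh -c 'find "$0" -type f -name "*.important.txt" -exec echo \{\} \;' \{\} \;
# prints only demo/a.data/x.important.txt
```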
274,926 | Is it normal, that fsck on an SSD takes a second or two? I am on Linux Mint 17.3 and I called sudo touch /forcefsck EDIT: ext4 filesystem | To run find on its own result, you can use the -c argument to sh (or bash ) to prevent the outer find command from treating the inner {} specially. However, you then need to pass the result from the outer find as an argument to sh , which can be expanded with $0 : find . -type d -name "*.data" \ -exec sh -c 'find "$0" -type f -name "*.important.txt" -exec echo \{\} \;' \{\} \; Note: $0 should be quoted ( "$0" ) to prevent issues with directory names containing whitespace. This is not really a "recursive" solution, because it doesn't allow arbitrarily deep nesting without some hairy escaping, but it does support the two levels of find -exec s you asked for in your example. If your example is similar to your actual problem, you might also experiment with the -path argument to find instead: find . -path '*.data/*important.txt' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
274,929 | for x in `cat /var/www/vhosts/example.com/statistics/logs/access_log.processed | awk '{print $1}' | sort | uniq -c | sort -nr | awk {'if ($1 > 2000) print $2'}`; do #Works printf "$x" #Does not work printf "$1"done I am trying to block IP addresses that have attempted more than 2000 requests. Actually above code is combination of 2 sections. First, cat /var/www/vhosts/example.com/statistics/logs/access_log.processed | awk '{print $1}' | sort | uniq -c | sort -nr I gather, sort and count the all IP addresses. Below is an example result 4565 8.8.8.83245 7.7.7.7 Then I iterate over each result and check if the attempt number is over 2000. awk {'if ($1 > 2000) print $2'} $1 is attempt number and $2 is IP. So $2 is saved as $x and can be used inside for loop. But how can I also use $1 inside the for loop? | To run find on its own result, you can use the -c argument to sh (or bash ) to prevent the outer find command from treating the inner {} specially. However, you then need to pass the result from the outer find as an argument to sh , which can be expanded with $0 : find . -type d -name "*.data" \ -exec sh -c 'find "$0" -type f -name "*.important.txt" -exec echo \{\} \;' \{\} \; Note: $0 should be quoted ( "$0" ) to prevent issues with directory names containing whitespace. This is not really a "recursive" solution, because it doesn't allow arbitrarily deep nesting without some hairy escaping, but it does support the two levels of find -exec s you asked for in your example. If your example is similar to your actual problem, you might also experiment with the -path argument to find instead: find . -path '*.data/*important.txt' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/274929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81390/"
]
} |
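A sketch that keeps both fields: let `read` split the "count ip" pairs that `uniq -c` produces, replacing the final awk filter with a shell test inside the loop. The log content below is fabricated for the demo, and the iptables call is only indicated in a comment:

```shell
# Fabricated log: 2500 requests from 8.8.8.8, one from 1.2.3.4.
{ seq 2500 | sed 's/.*/8.8.8.8 - - [req]/'; echo '1.2.3.4 - - [req]'; } \
    > access_log.processed

# "uniq -c" emits "count ip" pairs; read splits them into two variables.
awk '{print $1}' access_log.processed | sort | uniq -c | sort -rn |
while read -r count ip; do
    if [ "$count" -gt 2000 ]; then
        echo "block $ip ($count requests)"   # e.g. iptables -A INPUT -s "$ip" -j DROP
    fi
done
```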
275,053 | For example: [root@ip-10-0-7-125 ~]# history | grep free 594 free -m 634 free -m | xargs | awk '{print "free/total memory" $17 " / " $ 8}' 635 free -m 636 free -m | xargs | awk '{print "free/total memory" $9 " / " $ 10}' 736 df -h | xargs | awk '{print "free/total disk: " $11 " / " $9}' 740 df -h | xargs | awk '{print "free/total disk: " $11 " / " $8}' 741 free -m | xargs | awk '{print "free/total memory: " $17 " / " $8 " MB"}' I'm just wondering if there any way to execute the 636 command without typing it again, just type something plus the number, like history 636 or something. | In bash, just !636 will be ok. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/275053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138782/"
]
} |
275,060 | I'm trying to write a shell script using bash for the following problem: Write a loop to go through three values (A B C) and displays each of these values to the screen (hint use a ‘for’ loop). I figured out it would be something like this but I'm not sure, so any advice would be much appreciated. For (( EXP1; EXP2; EXP3 ))do command1 command2 command3done | In bash, just !636 will be ok. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/275060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164697/"
]
} |
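The hint points at the list form of `for`, which is simpler than the C-style `(( ; ; ))` form sketched in the question:

```shell
# List form: iterate directly over the three values.
for value in A B C; do
    echo "$value"
done
```

This prints A, B and C on separate lines; no counter or arithmetic expressions are needed.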
275,074 | I am using openSUSE Tumbleweed. I've bought a new PCI-E Wireless card, that supports 5G WiFi (Intel 5100 AGN). It doesn't show up in lspci and even if I take old adapter out it still cannot see my new one. I have tried switching it off and on again in BIOS, but nothing helps. The driver must be installed according to firmware folder /lib/firmware/iwlwifi-100-5.ucode/lib/firmware/iwlwifi-1000-3.ucode/lib/firmware/iwlwifi-1000-5.ucode/lib/firmware/iwlwifi-105-6.ucode/lib/firmware/iwlwifi-135-6.ucode/lib/firmware/iwlwifi-2000-6.ucode/lib/firmware/iwlwifi-2030-6.ucode/lib/firmware/iwlwifi-3160-10.ucode/lib/firmware/iwlwifi-3160-12.ucode/lib/firmware/iwlwifi-3160-13.ucode/lib/firmware/iwlwifi-3160-16.ucode/lib/firmware/iwlwifi-3160-7.ucode/lib/firmware/iwlwifi-3160-8.ucode/lib/firmware/iwlwifi-3160-9.ucode/lib/firmware/iwlwifi-3945-2.ucode/lib/firmware/iwlwifi-4965-2.ucode/lib/firmware/iwlwifi-5000-1.ucode/lib/firmware/iwlwifi-5000-2.ucode/lib/firmware/iwlwifi-5000-5.ucode/lib/firmware/iwlwifi-5150-2.ucode/lib/firmware/iwlwifi-6000-4.ucode/lib/firmware/iwlwifi-6000g2a-5.ucode/lib/firmware/iwlwifi-6000g2a-6.ucode/lib/firmware/iwlwifi-6000g2b-5.ucode/lib/firmware/iwlwifi-6000g2b-6.ucode/lib/firmware/iwlwifi-6050-4.ucode/lib/firmware/iwlwifi-6050-5.ucode/lib/firmware/iwlwifi-7260-10.ucode/lib/firmware/iwlwifi-7260-12.ucode/lib/firmware/iwlwifi-7260-13.ucode/lib/firmware/iwlwifi-7260-16.ucode/lib/firmware/iwlwifi-7260-7.ucode/lib/firmware/iwlwifi-7260-8.ucode/lib/firmware/iwlwifi-7260-9.ucode/lib/firmware/iwlwifi-7265-10.ucode/lib/firmware/iwlwifi-7265-12.ucode/lib/firmware/iwlwifi-7265-13.ucode/lib/firmware/iwlwifi-7265-16.ucode/lib/firmware/iwlwifi-7265-8.ucode /lib/firmware/iwlwifi-7265-9.ucode /lib/firmware/iwlwifi-7265D-10.ucode /lib/firmware/iwlwifi-7265D-12.ucode /lib/firmware/iwlwifi-7265D-13.ucode /lib/firmware/iwlwifi-7265D-16.ucode /lib/firmware/iwlwifi-8000C-13.ucode /lib/firmware/iwlwifi-8000C-16.ucode DMESG: rextuz@linux-c84g:~$ dmesg | grep 
Firmware[ 0.358267] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored[ 0.401370] acpi PNP0A08:00: [Firmware Info]: MMCONFIG for domain 0000 [bus 00-3f] only partially covers this bridgerextuz@linux-c84g:~$ dmesg | grep firmware[ 5.713117] psmouse serio2: trackpoint: IBM TrackPoint firmware: 0x0e, buttons: 3/3[ 7.639514] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138 op_mode iwldvm[ 5123.606856] usb 2-1.2: device firmware changed[12107.630137] usb 2-1.2: device firmware changed[12111.314260] usb 2-1.2: device firmware changedrextuz@linux-c84g:~$ dmesg | grep Wireless[ 7.622057] Intel(R) Wireless WiFi driver for Linux[ 7.659264] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C lspci and lshw linux-c84g:/home/rextuz # lspci -vnn | grep -i net 00:19.0 Ethernet controller [0200]: Intel Corporation 82579LM Gigabit Network Connection [8086:1502] (rev 04)03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1000 [Condor Peak] [8086:0084]linux-c84g:/home/rextuz # lshw -C network*-network description: Ethernet interface product: 82579LM Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: enp0s25 version: 04 serial: f0:de:f1:6f:61:8d capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k firmware=0.13-3 latency=0 link=no multicast=yes port=twisted pair resources: irq:29 memory:f2500000-f251ffff memory:f252b000-f252bfff ioport:5080(size=32)*-network DISABLED description: Wireless interface product: Centrino Wireless-N 1000 [Condor Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlp3s0 version: 00 serial: 8c:a9:82:be:c0:9e width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical 
wireless configuration: broadcast=yes driver=iwlwifi driverversion=4.5.0-2-default firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:28 memory:f2400000-f2401fff*-network description: Ethernet interface physical id: 2 logical name: enp0s29u1u2 serial: c6:bc:a4:94:d0:53 capabilities: ethernet physical configuration: broadcast=yes driver=rndis_host driverversion=22-Aug-2005 firmware=RNDIS device ip=192.168.42.209 link=yes multicast=yes How do I make the kernel to use my new adapter instead or together with the old one? | In bash, just !636 will be ok. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/275074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140213/"
]
} |
275,157 | I have to parse huge text files where certain lines are of interest and others are not. Within those of interest I have to count the occurrences of a certain keyword. Assume the file is called input.txt and it looks like this: format300,format250,format300format250,ignore,format160,format300,format300format250,format250,format300 I want to exclude the lines with ignore and count the number of format300 matches; how do I do that? What I've got so far is this command, which only counts ONCE PER LINE (which is not yet good enough): cat input.txt | grep -v ignore | grep 'format300' | wc -l Any suggestions? If possible I want to avoid using perl. | You don't need the first cat ; it is known as a Useless Use of Cat (UUOC) . Also very useful is grep -o , which outputs only the matching patterns, one per line. Then count the lines with wc -l . grep -v ignore YOUR_FILE | grep -o format300 | wc -l This prints 3 for your small sample. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28018/"
]
} |
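A runnable sketch of the counting pipeline from the grep answer above. The file name input.txt comes from the question; the three sample lines are my reading of the flattened sample text, so treat the exact data as an assumption:

```bash
#!/bin/sh
# Recreate the sample file from the question (my reconstruction of
# the three flattened lines).
cat > input.txt <<'EOF'
format300,format250,format300
format250,ignore,format160,format300,format300
format250,format250,format300
EOF

# Drop every line containing "ignore", print each remaining match of
# "format300" on its own line (-o), then count those lines.
grep -v ignore input.txt | grep -o format300 | wc -l    # prints 3
```

Line 1 contributes two matches, line 2 is excluded, line 3 contributes one, so the count is 3, matching the answer.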
275,184 | Let's say I created a program in C/C++ where I manually allocated some variables. Then, while running the program, I send an interrupt signal (Ctrl-C). Are those variables freed from memory, or will they take up space until the system shuts down? Also, what if I just created integers that weren't manually allocated; do those variables remain, or do they get deleted immediately? I'm thinking allocated variables will remain, and regular variables will get deleted (because of the stack). If that is the case, is there any way to free the allocated variables from memory after the program has stopped? Just curious. :) | Processes are managed by the kernel. The kernel doesn't care how the programmer allocates variables. All it knows is that certain blocks of memory belong to the process. The C runtime matches C memory management features to kernel features: automatic variables go into a memory block called “stack” and dynamic storage ( malloc and friends) goes into a memory block called “heap”. The process calls system calls such as sbrk and mmap to obtain memory with a granularity of MMU pages . Inside those blocks, the runtime determines where to put automatic variables and dynamically allocated objects. When a process dies, the kernel updates its memory management table to record, for each MMU page, that it is no longer in use by the process. This takes place no matter how the process exits, whether of its own volition (by calling a system call) or not (killed by a signal). Pages that are no longer used by any process are marked as reusable. It's generally good hygiene to free the dynamically allocated storage that you're no longer using, because you never know when a piece of code might be reused in a long-running program. But when a process dies, the operating system will release all of its resources: memory, open files, etc.
The only resources that the operating system won't clean up automatically are resources that are designed to have a global scope on the operating system, such as temporary files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164386/"
]
} |
275,223 | When I call cd /Users/mu3/apps the prompt simplifies it like this: mu3 [~/apps]: Is this possible to do the same for custom path like cd /Users/mu3/Development/Web/test : mu3 [DEV/test]: I'm using iTerm + oh-my-zsh. UPD: I wasn't specific enough and also discovered some new information. Since I use agnoster theme for zsh shell, it handles a prompt look by itself. So I ended up with changing this line : prompt_segment blue black '%~' to this: PWDshort="${PWD/#$HOME/~}"PWDshort="${PWDshort/\~\/_cld\/Dropbox\/Dev\/Web/DEV}"prompt_segment blue black $PWDshort Now the problem is that any update apparently breaks this. Is there any better way to achieve the same result? | The standard way to define directory abbreviations for the prompt is to use named directories . Named directories are used when expanding the %~ prompt escape sequence, generalizing ~ to abbreviate your home directory and ~bob to abbreviate Bob's home directory. mu3 [~]: cd /Users/mu3/Development/Web/testmu3 [~/Development/Web/test]: hash -d test=$PWDmu3 [~test]: cd configmu3 [~test/config]: The usual way to do this would be to put hash -d test=~/Development/Web/test in your .zshrc . In addition to being used to abbreviate prompts, the named directory can also be used to abbreviate paths, e.g. you can run cd ~test to switch to that directory. With this method, the abbreviated form always starts with a ~ . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116855/"
]
} |
275,243 | Edited: do not run this to test it unless you want to destroy data. Could someone help me understand what I got? dd if=/dev/zero of=/dev/sda bs=4096 count=4096 Q: Why specifically 4096 for count ? dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr blockdev --getsz /dev/sda - 4096) Q: What exactly does this do? Warning; Above code will render some/all specified device/disk's data useless! | dd if=/dev/zero of=/dev/sda bs=4096 count=4096 Q: why 4096 is particularly used for counter? This will zero out the first 16 MiB of the drive. 16 MiB is probably more than enough to nuke any "start of disk" structures while being small enough that it won't take very long. dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr blockdev --getsz /dev/sda - 4096) Q: What does this exactly? blockdev --getsz gets the size of the block device in "512 byte sectors". So this command looks like it was intended to zero out the last 2 MiB of the drive. Unfortunately this command is broken syntax wise. I expect the command was originally intended to be dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr `blockdev --getsz /dev/sda` - 4096) and the backticks got lost somewhere along the line of people copy/pasting it between different environments. Old partition tables, LVM metadata, raid metadata etc can cause problems when reusing a drive. Zeroing out sections at the start and end of the drive will generally avoid these problems while being much faster than zeroing out the whole drive. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/275243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164982/"
]
} |
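The seek arithmetic from the dd answer above can be tried safely on an image file instead of a real disk. In this sketch, GNU stat -c %s stands in for blockdev --getsz (which needs a real block device and root), and POSIX $(( )) arithmetic replaces the fragile expr/backtick form that got mangled in the question:

```bash
#!/bin/sh
# 10 MiB scratch image instead of /dev/sda -- nothing real is harmed.
dd if=/dev/zero of=disk.img bs=1M count=10 status=none

# Size in 512-byte sectors, like `blockdev --getsz` would report.
sectors=$(( $(stat -c %s disk.img) / 512 ))
echo "$sectors"    # 20480 for a 10 MiB image

# Zero the last 2 MiB (4096 sectors); conv=notrunc keeps the file size.
dd if=/dev/zero of=disk.img bs=512 count=4096 \
   seek=$(( sectors - 4096 )) conv=notrunc status=none
```

The same pattern with the real device would be seek=$(( $(blockdev --getsz /dev/sda) - 4096 )), avoiding backticks entirely.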
275,257 | There are many audio and video players but I prefer to use one tool for many purposes. And so I thought of using ffplay as both audio and video player. To play a file the command is like this. ffplay path_to_audio_file.mp3 Fine, but how do I play a list of audio files or a list of videos? I tried to use: ffplay *.mp3 but to no avail. It gives me the following error: Argument 'audiofileB.mp3' provided as input filename, but 'audiofileA.mp3' was already specified. | ffplay appears to only support a single input file, so you'll need to use code to loop over a list of input files (and possibly to shuffle them); wildly assuming coreutils (for shuf ), perhaps something like: find musicdir -type f -name "*.mp3" | shuf | while read f; do ffplay -autoexit -- "$f"; done This will of course break horribly if there are spaces or newlines in the filenames. (My current music player is fairly similar, find ~/music -type f -name "*.mp3" | mpg123 --shuffle -Z --list - ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99757/"
]
} |
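A whitespace-safe variant of the play loop from the ffplay answer above: the NUL-delimited find -print0 | shuf -z | read -d '' chain survives spaces and newlines in file names (GNU shuf and bash assumed). printf stands in for ffplay -autoexit so the sketch runs anywhere:

```bash
#!/bin/bash
# Fake library with an awkward file name to exercise the quoting.
mkdir -p music
touch music/one.mp3 "music/two tracks.mp3" music/three.mp3

count=0
while IFS= read -r -d '' f; do
    printf 'playing: %s\n' "$f"    # real use: ffplay -autoexit -- "$f"
    count=$((count + 1))
done < <(find music -type f -name '*.mp3' -print0 | shuf -z)

echo "$count"    # 3
```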
275,318 | I am on Linux Mint 17.3. I saw this in the syslog. Processing triggers for initramfs-tools (0.103ubuntu4.3) ...Apr 9 12:01:47 vb-nb-mint updates: update-initramfs: Generating /boot/initrd.img-3.19.0-32-genericApr 9 12:01:51 vb-nb-mint updates: Warning: No support for locale: en_US.utf8 I have just noticed this warning. Shouldn't it be en_US.UTF8? Just an idea; otherwise I don't know what this is about. localeLANG=en_US.UTF-8LANGUAGE=en_US:enLC_CTYPE="en_US.UTF-8"LC_NUMERIC=en_US.UTF-8LC_TIME=en_US.UTF-8LC_COLLATE="en_US.UTF-8"LC_MONETARY=en_US.UTF-8LC_MESSAGES="en_US.UTF-8"LC_PAPER=en_US.UTF-8LC_NAME=en_US.UTF-8LC_ADDRESS=en_US.UTF-8LC_TELEPHONE=en_US.UTF-8LC_MEASUREMENT=en_US.UTF-8LC_IDENTIFICATION=en_US.UTF-8LC_ALL= | Have a look at /usr/lib/locale/ . If your output looks like this, read on: ls /usr/lib/locale/C.UTF-8 locale-archive The warning isn't critical, as far as I can tell, but you may try: sudo locale-gen --purge --no-archive This command deletes the archive file and replaces it with the .utf8 files. Afterwards you may look at /usr/lib/locale/ again; it should look somewhat similar to this: ls /usr/lib/locale/C.UTF-8 de_LI.utf8 en_CA.utf8 en_IN en_US.utf8de_AT.utf8 de_LU.utf8 en_DK.utf8 en_NG en_ZA.utf8de_BE.utf8 en_AG en_GB.utf8 en_NZ.utf8 en_ZMde_CH.utf8 en_AU.utf8 en_HK.utf8 en_PH.utf8 en_ZW.utf8de_DE.utf8 en_BW.utf8 en_IE.utf8 en_SG.utf8 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
275,329 | I'm still very new to scripting in bash, and just trying a few what I thought would be basic things. I want to run DDNS that updates from the my server running Ubuntu 14.04. Borrowing some code from dnsimple, this is what I have so far: #!/bin/bashLOGIN="email"TOKEN="token"DOMAIN_ID="domain"RECORD_ID="record"IP=`curl -s http://icanhazip.com/`OUTPUT=`curl -H "Accept: application/json" \ -H "Content-Type: application/json" \ -H "X-DNSimple-Domain-Token: $TOKEN" \ -X "PUT" \ -i "https://api.dnsimple.com/v1/domains/$DOMAIN_ID/records/$RECORD_ID" \ -d "{\"record\":{\"content\":\"$IP\"}}"`if ! echo "$OUTPUT" | grep -q "(Status:\s200)"; thenecho "match"$(echo "$OUTPUT" | grep -oP '(?<="message":")(.[^"]*)' >> /home/ddns/ddns.log)$(echo "$OUTPUT"| grep -P '(Status:\s[0-9]{3}\s)' >> /home/ddns/ddns.log)fi The idea is that it runs every 5 minutes, which I have working using a cronjob. I then want to check the output of the curl to see if the status is "200" or other. If it is something else, then I want to save the output to a file. What I can't get working is the if statement. As I understand it, the -q on the grep command will provide an exit code for the if statement. However I can't seem to get it work. Where have I gone wrong? | You're almost there. Just omit the exclamation mark: OUTPUT='blah blah (Status: 200)'if echo "$OUTPUT" | grep -q "(Status:\s200)"; then echo "MATCH"fi Result: MATCH The if condition is fulfilled if grep returns with exit code 0 (which means a match). The ! exclamation mark will negate this. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/275329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165027/"
]
} |
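A sketch of the negated check the DDNS question above is actually after (log only non-200 responses). The response text and log name are stand-ins for the real curl output, and the pattern uses a literal space rather than the GNU-grep-only \s:

```bash
#!/bin/bash
: > ddns.log    # start with an empty log (hypothetical log file name)

check() {
    # grep -q exits 0 on a match; `!` inverts it, so the body runs
    # only when the 200 status is absent.
    if ! echo "$1" | grep -q "(Status: 200)"; then
        echo "$1" >> ddns.log    # only failures are recorded
    fi
}

check 'update rejected (Status: 404)'    # logged
check 'record updated (Status: 200)'     # not logged

wc -l < ddns.log    # 1
```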
275,428 | I have Xubuntu 14.04 installed on my server. network-manager is not installed. It seems to have lost its ability to resolve domains, and I'm not sure where to begin diagnosing the issue. ping 8.8.8.8 pings normally. ping google.com returns ping: unknown host google.com . I tried adding a DNS server to /etc/network/interfaces/ . Now it contains: # interfaces(5) file used by ifup(8) and ifdown(8)auto loiface lo inet loopbackauto eth0iface eth0 inet staticaddress 192.168.0.100gateway 192.168.0.1netmask 255.255.255.0dns-nameservers 8.8.8.8 8.8.4.4 This didn't fix the problem, so I tried running: hesse@galois:~$ sudo service networking restartstop: Job failed while stoppingstart: Job is already running: networking I tried stop then start and reload but they didn't seem to do anything. How do I diagnose the problem? Note: It is a lot of work for me to restart the machine (I need to connect a keyboard and monitor to it), so please suggest solutions that don't require restarting if possible. /etc/resolv.conf : # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN | Your /etc/resolv.conf is empty; fix that before chasing any other error. It should list at least one resolver, and if this is the only problem, resolution should start working again. Try vi /etc/resolv.conf Go into edit mode and add the following lines: nameserver 8.8.8.8nameserver 8.8.4.4 Then check whether ping google.com works. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111284/"
]
} |
275,429 | I have been trying to create a bootable debian (jessie/8.4) image for the past 2 days, and as far as I can tell I have the procedure right, but I can not get the filesystem right. I am relatively sure that I am doing something wrong here, missing something with mounting or /etc/fstab ( there isn't one in my image ). I was hoping someone with some experience would be able to help me out/show me what I am missing. Here are the errors I see as I'm booting into qemu-system-x86: As text and then as the actual screenshots: Errors: fsck: error 2 (No such file or directory) while executing fsck.ext2 for /dev/sda1fsck exited with status code 8[FAILED] Failed to start Load/Save Random SeedSee `systemctl status systemd-random-seed.service` for details.[FAILED] Failed to start Various fixups to make systemd work better on Debian.See `systemctl status debian-fixup.service` for details....[FAILED] Failed to start Update UTMP about System Boot/Shutdown.See `systemctl status systemd-update-utmp.service` for details.[DEPEND] Dependency failed for Update UTMP about System Runlevel Changes. 
Here are the instructions I've written up for myself / steps I've taken: cd ~mkdir debootstrapcd debootstrap/# get newestwget http://ftp.debian.org/debian/pool/main/d/debootstrap/debootstrap_1.0.80_all.debar -x debootstrap_1.0.80_all.debzcat /root/debootstrap/data.tar.gz | tar xvapt-get install parted# 1.5Gbytesdd if=/dev/zero of=1445.img bs=1024 count=1 seek=1536kparted -s 1445.img -- mklabel msdos mkpart primary 1m 1.5g toggle 1 bootlosetup --show -f 1445.img# prints out `/dev/loopX`, enter this on the next linpartprobe /dev/loop0# only have to make the filesytem once --> if you are troubleshooting steps, do not redo this linemkfs -t ext2 /dev/loop0p1mount /dev/loop0p1 /mntdebootstrap --verbose --components=main,contrib,non-free \--include=firmware-realtek,linux-image-amd64,grub-pc,ssh,vim \--exclude=nano \--arch amd64 jessie /mnt http://ftp.us.debian.org/debian source for information on using --components Ensure that the kernel is installed, it should appear in /boot within the chroot, that is /mnt/boot with the following files: initrd.img-3.16.0-4-amd64 vmlinuz-3.16.0-4-amd64 config-3.16.0-4-amd64 System.map-3.16.0-4-amd64 install grub grub-install --boot-directory=/mnt/boot --modules=part_msdos /dev/loop0 Set up APT copy over the apt sources cp /etc/apt/sources.list /mnt/etc/apt/sources.list ensure the cdrom source is commented out add the line: deb http://ftp.debian.org/debian stable-backports main contrib non-free Setup a chroot mount --bind /dev/pts /mnt/dev/ptsmount --bind /proc /mnt/procmount --bind /sys /mnt/sysmount --bind /dev /mnt/dev# if you want your pushprofilesettingscp ~/.bashrc /mnt/root/cp ~/.vimrc /mnt/root/# chroot -- enter the system as if it were thy ownchroot /mnt /bin/bashexport HOME=/rootexport LC_ALL=Cexport LANG=C.UTF-8export TERM=xterm-256color mount from man mount : --bind Remount a subtree somewhere else (its contents are available in both places). 
-t <type> Mount of filesystem type , with this, mount will attempt to auto determine setup serial/console access edit /etc/default/grub : Set GRUB_CMDLINE_LINUX="" to: GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8" Uncomment GRUB_TERMINAL=console Beneath, add the line: GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1" Make the grub config - This MUST be done in a non- systemd-nspawn shell (that means chroot ) grub-mkconfig -o /boot/grub/grub.cfg Exit chroot exit Clean up for chroot'ed umount /mnt/sysumount /mnt/devumount /mnt/dev/ptsumount /mnt/proc Can check for additional mounts with: mount | grep /mnt and then unmount them with umount Enter systemd-nspawn systemd-nspawn -D /mnt# not you are in a special container Set the password for root with passwd In /etc/ssh/sshd_config comment out PermitRootLogin without-password to read #PermitRootLogin without-password and insert PermitRootLogin yes beneath it Now enable ssh on startup systemctl enable ssh clean up # this is needed to clean up both chroot and systemd-nspawn -D /mnt# once this is run you can not do systemd-nspawn either so wait until you are entirely doneexitumount /mntlosetup -d /dev/loop0 Check for additional mounts with: mount | grep /mnt If ANYTHING is returned, unmount them with umount Recover (only necessary in ERROR) If you broke something, or need to retry, RE-MOUNT / SETUP CHROOT on existing .img : losetup --show -f 1445.img# prints out `/dev/loopX`, enter this on the next linpartprobe /dev/loop0mount /dev/loop0p1 /mnt testing img qemu-system-x86_64 -hda 1445.img -m 1024 -vnc :0 | Kept at it and figured it out, relatively straight forward from here, but not just a matter of setting up /etc/fstab , here is the rest: not necessary but a good idea to clean things up apt-get autoclean set up /etc/fstab - check with mount to ensure you are on the right filesystem type echo "/dev/sda1 / ext4 defaults,errors=remount-ro 0 1" > /etc/fstab this will rebuild the 
initramfs and allow it to boot clean update-initramfs -u -k all Do that and the machine boots clean, tested in QEMU and then I am running it right now on hardware. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133405/"
]
} |
275,463 | When tried the command cat < 1.pdf it printed a very large output, which was totally incomprehensible to me. The content of 1.pdf was abc . The output was like this: ÀýÓëöûcÎ=ÉÐÎTaüÍ8]ö¹mg:=Rú*@H1S¢▒ùá½~Ì8u_4,¬7ïyt#¯ÚZ|åôÛ~«Æ fM²JKÁNÿ6 ì©ìÞ¾▒bT¦åÊmBíöÖ¡÷ÄïÝM{Í1¹@;ÄqÄú t]È7DJ Êûc0£jÜÖã\0O8À±(2)èJR'Ø÷=~ÝÆÂµ¡´ oÇKÈ]¹ÞÜY)ÚwÒ?[4ò©Ió¦>G)î¾J&d}ýíÜÅÓò~Ø0 $´Në¿´Èc®pVqí+ëCppG¾ùóßeõõ6GÌ,öfú8Ô7»S[¢S50cq/_9¹jó¿·Ü%×tQSßî▒LðbkÂÒxâ£Ö▒üVAûÇamÏ·Â׫H´+ÆWíç´upèó`I]± ÎëÚwiòtçúwAhO¼²´'Æ©ëÀ0lô?¿ÌIò▒ìXË<»ÅUepçæå¥SïÒFҽϷº®Ën.Z×´\£ÁEH@®2ÊçC¢n½¡hÑâ>º´¢YÚXEfg sôë¥*|zº7>ù!I©Åÿ«; ;&==)dS/),÷È´:ÞõH:CÉÑÀiTÌw!u@Âp2÷AÒfµòÜtFIZ^iÿà£ùÖ5ÐsDiërÿ$0b6Ëü~xÏ·._ÏÒõÜr²`wYù;¤²å»äE3óù²ëvÇ»Ó'ãµ~?ÿîMZÍPkh{aÙ1y&tüÙòÕMoó¬²<ñ/ÇÖa?üʯuÝÓjû,¨Üå@/GMa-èGkD}¤ð©fZbYÑlt/ ±Øj¦èRhCå1âÆñ±S@ÖòÁ~e}>NÀ^²Jà-Û[Mø¡FËB7ÉVy0|ôÉÏjx[ÙÁnneê)wã+ök'R6"dÞqît¿ý,ߢ]MöV>»Ñ@ÞwM0®èçã^F`çFÕ²æL((¬±S¢ÅïÂy§púÓË5y1pÆ{uxëÈOþ'¾7+Öº!íuV-R²f*`æ\ías\Øl^÷ ÿ`r1|yÅ-YØ,º·¢▒ÀPæá¸EW0d¤q]&ÿdV6ß.cùÂ~´óðCß▒(¨îMëb#òEnÑ»PÅV½!ÀÈѵ c´èjFÇé¨J$ǵÀcu?4·[ö&å:1&OÓö(øyKxòëÑq¸çÎÇÈI#5¨çû,'µÐûfG¸Í§³UÚëÎCDøõe²Ñú$Á½é½Ocø»Éßs! 
ÀõE²©)8½îv¿<Üî|è¶»B▒ÿYw¹·ÌÞÆ¶âôIÇ.>¾H¡n¬Éüׯ*m«¶£L£#7È?¾sÊNoXµ·àMÚ?ó´ZìâþÌçùä½ÿ$qÀÊcOºùdewænår▒ÖB½dfÕ;t4Êe3#ÄúÀ£çP=¨QÌ▒ÕþºÑ\U¼Fµ»â¯/!NZ=>½éú©,EÉ|ªQafu,5Ý%Xw%seàØÇÇTª BZëCaßî;zÃ"Bma¤ y=ÞwÁű~ÿõåEyV/Ò%q¥Ì^Ç 2U¸âQ³1y(¾&¨òYùÆ«}üx#Á®úÅÿÆðö.i8 ïþ¨è|Âý6\ U+ᬮ[®eVéüvíÜ{ÈL+]¬)ùxþecäæº°ÿoö?,Ä:¯Oò9T:1G4qÞ.ÌtÉÑëEæáHÔ׬¡ª çc^nÍPÑU7/ÄñcªXâ§nc]¾¨XPayÚGºxª.wÈç¤}¬ÓÏÇ\rf`¤ñ@zJnî´a'¾¨sNÔAëG½PL6ºIQkíJÍçØ¼ÔKýF¾)$\&§^» Eý¨_{tÂp¥ñT`mùPvcìÃç1ÿûKáz¹â®ò÷pר?äIIö 6²¬QªMÚIµÈTã+¤i1âN¾8ɽNww²Îf¹¿kVr²ù½Ä¼Ìå±"ªúº+äÿ¥óv¡t5!(«:Ö+Ovl<¦aö6Kì»â2óÎ娨|üËàÇÒ.j§·¸[ãæ¿ï`¡÷¥¾©,ÝßiÝPMåoÑéïToãw¿dyçëÀã·ó6ês\ÔR;ÕXÚ»ûÿõå▒öÁ▒¡\Ðs·~=ðÈTDÝCCijÚ`¹ÎÔ¬\·ðñ_ÿü§¯$Âõj®Û¢_]Lù¦8áÌæ²»BJÖÛn¼ûXÏjY8Ò6éØí©YóZtÛt´ÌníUè¨PGØÊzý+ÚT¦M1¥e¬åxendstreamýC~¢6A¬»hå?5µÎÍbKÏÔlwæ l▒_%L;8ê8jßQüg-í× Jâ`d¬*»ö</nä"nAíÀ ÿ]©äXĦMYS▒endobjÎ{°m-°õ1Hgîºû:h*µVØK°F8ñGÔÎl~V3ÄÞ!bÊcÞDGë¯×Yl(.ãâÝå`£=cü§ýÔb£ÄèMu Íëve«XîÝ£#"VØgáKÔ?öþ§®êϺݡ[3uש²Nµq÷Ú▒ßób¸l6=?'«ì>BÔ?t_Ñ gÁ£õ=q@ÜÕÅûªE3¶L+ÕÅ©Cå}b-7Q,ì·Túlñ¨þ¦:=`î¹aÐçeÆãÜw°¥èsE▒ªpÇ !}¡1{¹_ZlÈë¡Á;u§·+ú,fo ä-AÏ[HM¥×▒ÌÝåìtò*9¼Â^ѧ▒aÛ`B>/Cö0Þ÷ðiNËþÊ âÄCH´/9fVÎÉó6!vóÑ@ ðÉ!w±y;¯m$i¾äµH+·]YA|åÀD!j{øEÙ^äFÖÑ4▒ääû5þµ)Ãå*y´¹Q« 7í?NýÍ'^õ(*C4f;3ûûn³i|nIï0uo>#n³yµ¹5§*É»&Gtê;c.9 0 objéðÜ}zÔ22T`¦E'ýX®WÈô»&Â>9=ay$àÊGWdwÂ!f·¹eMvÖ=EÞߢ¯ò^¢n`ZÜöQ!Yß§µã gÚEbØù»ÑñÓ 1ªAäØÿPâ'4RÅU]xý'¬¡Â>¹æîtê3Yêy.·¬4ÖçæÍÕOß®×ñh¶ap(<</Type/Font/Subtype/Type1/BaseFont/NimbusRomNo9L-Regu 9îî~ýÚK°ÓÑ*ÈTt÷ ØL/ToUnicode 8 0 R} Åta°Àj) _ Kû'Üd§éËpôKÜ~¯/FirstChar 0 /LastChar 255ºP!y%µRÕÖ×bðó°~®_ñA=ùjÒÜW!þy0Æ¢]ìMºõ$ÊÍD96)éàjM[îÍÙù»@y»;«!BÌaÓ;²À ÏÞî¨ZÚ8Ýà ìÏ?å²@ÙÏû¬W$O9²ößÄé«¶Âv(r·?,½ø?u«¬§ýéøZÍñÉÆSêÒfæÿ ÕÀb8ÇxØÝ¯¹ÅAýöµiº\ÉI$▒À}0@bâÚÕq9s'XÝ/Widths[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0®ã¥Vø![250 333 408 500 500 833 778 333 333 333 500 564 250 333 250 278Õ¶~~Yö*Ó}+«▒rl¥z«° :¬Î>2y®GmÀúÀ500 500 500 500 500 500 500 500 500 500 278 278 564 564 564 444921 722 667 667 722 611 556 722 722 333 389 722 611 889 722 722556 722 667 556 611 722 722 944 722 722 611 333 278 333 469 500e$<Ìßf¼p騸ag#au.ÁÄè6Ý▒333 444 500 444 500 444 333 500 500 278 278 500 278 778 500 
500500 500 333 389 278 500 500 722 500 500 444 480 200 480 541 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0/NimbusRomNo9L-Regu0 333 500 500 167 500 500 500 500 180 444 500 333 333 556 5560 500 500 500 250 0 453 350 333 444 444 500 1000 1000 0 4440 333 333 333 333 333 333 333 333 0 333 333 0 333 333 3331000 0 0 0 0 0 0 0 0 0 0 0 0 0 0 00 889 0 276 0 0 0 0 611 722 889 310 0 0 0 00 667 0 0 0 278 0 0 278 500 722 500 0 0 0 0 Why can't `cat' read content of pdf files? | It is because pdf is not plain text. cat can only print the file as-is. To see the contents of a pdf file using the command line, you can use pdftotext . pdftotext pdffile - | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165108/"
]
} |
275,474 | Situation: When I turn on my Linux Mint 20/19/18/17 Cinnamon the NumLock is Off in the Login window. Objective: Turn on NumLock automatically at startup in the Login window. | For all versions of Linux Mint You need to install a program needed for this purpose - numlockx ; man page : sudo apt-get install numlockx Choose if you wish to achieve the goal through CLI or GUI below. Linux Mint 20.x / 19.x (LightDM) GUI ; probably most convenient under normal operation: Once numlockx is installed, the following menu item in Login Window -> Settings called: Activate numlock becomes available; as you can see: This will add the line: activate-numlock=true to the following file: /etc/lightdm/slick-greeter.conf Linux Mint 18.x / 17.x (MDM) GUI ; probably most convenient under normal operation: Once numlockx is installed, the following menu item in Login Window -> Options called: Enable NumLock becomes available; as you can see: As pointed out in the other answer , this will add the following line to /etc/mdm/mdm.conf : EnableNumLock=true CLI ; suitable if you are setting other computers up through SSH, for instance: Open a text editor you are skilled in with this file, e.g. nano if unsure: sudoedit /etc/mdm/Init/Default Add these lines at the beginning of the file: if [ -x /usr/bin/numlockx ]; then /usr/bin/numlockx onfi As pointed out by Gilles , don't put exec in front of the command. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
275,490 | Background : I'm currently writing an archiving script, which creates gzipped tarballs from some folders and their contents. It should be able to synchronize the gzipped archives with the sources without uncompressing the archives, or compressing the sources. For this, the sought solution is synchronizing the output of ls -l with the output of tar -ztvf . Both of the commands return similar output, however they are slightly different. Most of those differences can be sorted out with regular expressions, or cut . One thing which I could not resolve easily is listing the files' path relative to the queried directory in maximum depth. To overcome this problem I used find to find every file, and fed them into ls with the command: find Webcam -exec ls -lR --time-style="+%Y-%m-%d %H:%M" {} \; | cut -f1,3- -d" " | sed "s/ /\//2" | sed "s/ \+/ /g" where most of the pipeline serve formatting purpose, find Webcam -exec ls -lR {} \; is the problematic part, and Webcam is the test folder. 
The output of this command is the following: -rw-r--r-- debian/debian 162406 2014-04-12 13:42 2014-04-12-134210.jpg-rw-r--r-- debian/debian 116247 2014-08-09 16:38 2014-08-09-163849.jpg-rw-r--r-- debian/debian 96597 2015-03-15 19:39 2015-03-15-193905.jpg-rw-r--r-- debian/debian 100795 2015-04-29 20:23 2015-04-29-202242.jpg-rw-r--r-- debian/debian 97120 2015-08-02 13:42 2015-08-02-134230.jpg-rw-r--r-- debian/debian 123835 2015-08-27 23:03 2015-08-27-230306.jpg-rw-r--r-- debian/debian 97120 2015-08-02 13:42 Webcam/2015-08-02-134230.jpg-rw-r--r-- debian/debian 100795 2015-04-29 20:23 Webcam/2015-04-29-202242.jpg-rw-r--r-- debian/debian 116247 2014-08-09 16:38 Webcam/2014-08-09-163849.jpg-rw-r--r-- debian/debian 96597 2015-03-15 19:39 Webcam/2015-03-15-193905.jpg-rw-r--r-- debian/debian 162406 2014-04-12 13:42 Webcam/2014-04-12-134210.jpg-rw-r--r-- debian/debian 123835 2015-08-27 23:03 Webcam/2015-08-27-230306.jpg Now the output resembles to the output of tar -ztvf : -rw-r--r-- debian/debian 162406 2014-04-12 13:42 Webcam/2014-04-12-134210.jpg-rw-r--r-- debian/debian 116247 2014-08-09 16:38 Webcam/2014-08-09-163849.jpg-rw-r--r-- debian/debian 96597 2015-03-15 19:39 Webcam/2015-03-15-193905.jpg-rw-r--r-- debian/debian 100795 2015-04-29 20:23 Webcam/2015-04-29-202242.jpg-rw-r--r-- debian/debian 97120 2015-08-02 13:42 Webcam/2015-08-02-134230.jpg-rw-r--r-- debian/debian 123835 2015-08-27 23:03 Webcam/2015-08-27-230306.jpg with the obvious flaw of ls listing every found item twice, one time with the required path, and one time without it. How can I "fix" ls to list every item twice? Additional insight about the nature of this error (e.g. what's happening under the hood) is more than welcome, while more practical ways to solve the whole archiving problem are also welcome as side notes. However, now I consider this as a challenge, which I would like to solve, so the main focus should be kept on limiting the output of ls . 
| The problem is that find finds the Webcam directory, too, and runs ls Webcam which lists all the files there. To only list files, not directories, tell find -type f | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84984/"
]
} |
275,505 | I would like to do something like that: sudo -p password rm /a/file Can I put the password and the user on the same line? My Sudo version is 1.8.9p5 | sudo does not have an option to specify the password, but you can still do the authentication on the command line, like this: echo password | sudo -u root --stdin or just echo password | sudo -S referring to the sudo manual page : -S , --stdin Write the prompt to the standard error and read the password from the standard input instead of using the terminal device. The password must be followed by a newline character. and -u user, --user= user Run the command as a user other than the default target user (usually root). The user may be either a user name or a numeric user ID (UID) prefixed with the ‘#’ character (e.g. #0 for UID 0). When running commands as a UID, many shells require that the ‘#’ be escaped with a backslash (‘\’). Some security policies may restrict UIDs to those listed in the password database. The sudoers policy allows UIDs that are not in the password database as long as the targetpw option is not set. Other security policies may not support this. But sudo has been under development a long time. Check the version: -V , --version Print the sudo version string as well as the version string of the security policy plugin and any I/O plugins. If the invoking user is already root the -V option will display the arguments passed to configure when sudo was built and plugins may display more verbose information such as default options. The support for long options was added in 2013 . Besides authenticating, you need something for sudo to do , e.g., a command. I often check that it is working by doing just sudo id which should show the root user, e.g., uid=0(root) gid=0(root) groups=0(root) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165144/"
]
} |
275,516 | Standard Unix utilities like grep and diff use some heuristic to classify files as "text" or "binary". (E.g. grep 's output may include lines like Binary file frobozz matches .) Is there a convenient test one can apply in a zsh script to perform a similar "text/binary" classification? (Other than something like grep '' somefile | grep -q Binary .) (I realize that any such test would necessarily be heuristic, and therefore imperfect.) | If you ask file for just the mime-type you'll get many different ones like text/x-shellscript , and application/x-executable etc, but I imagine if you just check for the "text" part you should get good results. Eg ( -b for no filename in output): file -b --mime-type filename | sed 's|/.*||' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/275516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
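The classifier one-liner from the answer above, wrapped in a small function and tried on a known text file and a known binary file. This assumes the file(1) utility is installed; the function name kind is just an illustration:

```bash
#!/bin/sh
# "text" for anything file(1) reports as text/*, otherwise the
# top-level mime type (application, image, ...).
kind() { file -b --mime-type "$1" | sed 's|/.*||'; }

printf 'hello, world\n' > sample.txt            # plain text
printf '\000\001\002\375\376\377' > sample.bin  # a few raw bytes

kind sample.txt    # text
kind sample.bin    # application (octet-stream), i.e. "binary"
```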
275,517 | If you want to execute one command and then another one after the first one finish, you can execute command1 & which prints the PID of the process executing command1 . You can then think of what you want to do after command1 has finished and execute: wait [PID printed by the previous command] && command2 However, this only works in the same terminal window and gets really, really messy if command1 prints output. If you open up a new terminal window and try to wait, you're shown something like this: $ wait 10668bash: wait: pid 10668 is not a child of this shell Is there a terminal emulator which supports waiting for programs without having to write the next command in the output of the currently executing command without throwing the output of the first command away (like piping it to /dev/null )? It doesn't have to work via wait or something similar. Right-clicking and choosing "execute after current command returned" would be perfectly fine. I don't mean simply concatenating commands but being able to run a command and then decide on what to run right after that one finished. | If you ask file for just the mime-type you'll get many different ones like text/x-shellscript , and application/x-executable etc, but I imagine if you just check for the "text" part you should get good results. Eg ( -b for no filename in output): file -b --mime-type filename | sed 's|/.*||' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/275517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147785/"
]
} |
275,526 | I'm trying to run a C-program in the background. I also want to save the stdout-outputs of the program. But if I try the following, nohup only redirects stderr-outputs to the logfile.txt: nohup ./GetTempValues 11 4 > logfile.txt 2>&1 & Whereby 11 and 4 are the parameters I need to pass to my program and the last & is for running the program in the background (GetTempValues is my program). | As Guido points out above, nohup already redirects standard error for you unless you redirect it to a file. If you're using Linux, it can be instructive to run a simple command under nohup and look at the calls to dup2(2) immediately before execve(2) . I doubt you're seeing what you think you're seeing. Let's think about what happens when you say nohup foo 2>&1 The shell redirects stderr to stdout. It then invokes nohup , which inherits that situation (stderr and stdout using the same file descriptor). nohup opens nohup.out (file descriptor 3), dups it to file descriptor 1, and closes 3. nohup notices stderr is not file, and dups file descriptor 1 to 2, too. Thus file descriptors 1 and 2 both refer to nohup.out . nohup calls exec with any arguments provided on the command line (in this case, foo ). The new process inherits the file descriptors that nohup set up for it. From the command line you cannot create a case in which, as you say, nohup only redirects stderr-outputs . nohup always writes stdout and stderr to a file. stdout goes either to one you specify via redirection, or to nohup.out ; stderr follows stdout unless you explicitly redirect it to another file. The one peculiar aspect of using 2>&1 with nohup is that GNU's version produces a pointless message on stderr, nohup: ignoring input and appending output to ‘nohup.out’ . (What other utility writes a message to standard error that amounts to saying, acting per documentation on instructions ?) Normally that noise is written to the terminal; under redirection it winds up as the first line of the output file. 
| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275526",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164885/"
]
} |
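The descriptor plumbing described in the answer above can be sketched with an ordinary shell function standing in for the program; the function name, messages, and log path below are invented for the illustration.

```shell
# A stand-in "program" that writes one line to stdout and one to stderr,
# redirected the same way as: nohup ./GetTempValues 11 4 > logfile.txt 2>&1 &
tmpdir=$(mktemp -d)
log="$tmpdir/logfile.txt"

prog() {
    echo "reading sensor"        # goes to fd 1 (stdout)
    echo "sensor warning" >&2    # goes to fd 2 (stderr)
}

# `> "$log"` points fd 1 at the file, then `2>&1` dups fd 1 onto fd 2,
# so both streams land in the same file:
prog > "$log" 2>&1

captured=$(cat "$log")
rm -rf "$tmpdir"
```

Reversing the order (`2>&1 > "$log"`) would duplicate the terminal onto fd 2 first, so only stdout would reach the file.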
275,545 | I'm attempting to grow /dev/sda5 , which is an 'lvm2 pv' partition, but it's contained in /dev/sda2 , which is an extended partition. There is plenty of "unallocated" space immediately after /dev/sda2 , since I've just cloned my previous disk to a larger one. My plan is to resize the extended partition, and then resize the contained PV to match. Then I'll extend the VG and the filesystem. So I've booted using a Linux Mint 17.1 USB stick and run gparted. It won't let me resize the extended partition, claiming that it's "Busy (At least one logical partition is mounted)". However, running mount (or cat /proc/mounts ) doesn't show anything other than the Live USB image mounted. What's the problem? | Because the partition contains an LVM2 volume group, it's treated as busy (even if it doesn't appear mounted). You need to deactivate the VG: sudo vgscan # to discover the name of the volume group "mint-vg"sudo vgchange -a n mint-vg Then, in gparted, select GParted / Refresh Devices . This should remove the lock icon from the partitions. Aside: rather than a lock icon, my copy of gparted displays a telephone, which is ... confusing. At this point, you should be able to resize the extended partition, /dev/sda2 as normal to use the unallocated space. Apply the change. Then resize the 'lvm2 pv' partition, /dev/sda5 . Apply the change. Then resize the PV: sudo pvresize /dev/sda5 Check the new size: sudo pvdisplay /dev/sda5 Reactivate the volume group: sudo vgchange -a y mint-vg Then extend the logical volume into the new space: sudo lvextend /dev/mint-vg/root /dev/sda5 I forgot to specify -r to resize the filesystem, so I have to do that as well... sudo e2fsck -f /dev/mint-vg/rootsudo resize2fs /dev/mint-vg/root | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46851/"
]
} |
275,637 | I noticed recently that POSIX specifications for find do not include the -maxdepth primary. For those unfamiliar with it, the purpose of the -maxdepth primary is to restrict how many levels deep find will descend. -maxdepth 0 results in only command line arguments being processed; -maxdepth 1 would only handle results directly within the command line arguments, etc. How can I get the equivalent behavior to the non-POSIX -maxdepth primary using only POSIX-specified options and tools? (Note: Of course I can get the equivalent of -maxdepth 0 by just using -prune as the first operand, but that doesn't extend to other depths.) | @meuh's approach is inefficient as his -maxdepth 1 approach still lets find read the content of directories at level 1 to later ignore them otherwise. It will also not work properly with some find implementations (including GNU find ) if some directory names contain sequences of bytes that don't form valid characters in the user's locale (like for file names in a different character encoding). find . \( -name . -o -prune \) -extra-conditions-and-actions is the more canonical way to implement GNU's -maxdepth 1 . Generally though, it's depth 1 you want ( -mindepth 1 -maxdepth 1 ) as you don't want to consider . (depth 0), and then it's even simpler: find . ! -name . -prune -extra-conditions-and-actions For -maxdepth 2 , that becomes: find . \( ! -path './*/*' -o -prune \) -extra-conditions-and-actions And that's where you run in the invalid character issues. For instance, if you have a directory called Stéphane but that é is encoded in the iso8859-1 (aka latin1) charset (0xe9 byte) as was most common in Western Europe and the America up until the mid 2000s, then that 0xe9 byte is not a valid character in UTF-8. So, in UTF-8 locales, the * wildcard (with some find implementations) will not match Stéphane as * is 0 or more characters and 0xe9 is not a character. $ locale charmapUTF-8$ find . 
-maxdepth 2../St?phane./St?phane/Chazelas./Stéphane./Stéphane/Chazelas./John./John/Smith$ find . \( ! -path './*/*' -o -prune \)../St?phane./St?phane/Chazelas./St?phane/Chazelas/age./St?phane/Chazelas/gender./St?phane/Chazelas/address./Stéphane./Stéphane/Chazelas./John./John/Smith My find (when the output goes to a terminal) displays that invalid 0xe9 byte as ? above. You can see that St<0xe9>phane/Chazelas was not prune d. You can work around it by doing: LC_ALL=C find . \( ! -path './*/*' -o -prune \) -extra-conditions-and-actions But note that that affects all the locale settings of find and any application it runs (like via the -exec predicates). $ LC_ALL=C find . \( ! -path './*/*' -o -prune \)../St?phane./St?phane/Chazelas./St??phane./St??phane/Chazelas./John./John/Smith Now, I really get a -maxdepth 2 but note how the é in the second Stéphane properly encoded in UTF-8 is displayed as ?? as the 0xc3 0xa9 bytes (considered as two individual undefined characters in the C locale) of the UTF-8 encoding of é are not printable characters in the C locale. And if I had added a -name '????????' , I would have gotten the wrong Stéphane (the one encoded in iso8859-1). To apply to arbitrary paths instead of . , you'd do: find some/dir/. ! -name . -prune ... for -mindepth 1 -maxdepth 1 or: find some/dir/. \( ! -path '*/./*/*' -o -prune \) ... for -maxdepth 2 . I would still do a: (cd -P -- "$dir" && find . ...) First because that makes the paths shorter which makes it less likely to run into path too long or arg list too long issues but also to work around the fact that find can't support arbitrary path arguments (except with -f with FreeBSD find ) as it will choke on values of $dir like ! or -print ... The -o in combination with negation is a common trick to run two independent sets of -condition / -action in find . If you want to run -action1 on files meeting -condition1 and independently -action2 on files meeting -condition2 , you cannot do: find . 
-condition1 -action1 -condition2 -action2 As -action2 would only be run for files that meet both conditions. Nor: find . -contition1 -action1 -o -condition2 -action2 As -action2 would not be run for files that meet both conditions. find . \( ! -condition1 -o -action1 \) -condition2 -action2 works as \( ! -condition1 -o -action1 \) would resolve to true for every file. That assumes -action1 is an action (like -prune , -exec ... {} + ) that always returns true . For actions like -exec ... \; that may return false , you may want to add another -o -something where -something is harmless but returns true like -true in GNU find or -links +0 or ! -name '' or -name '*' (though note the issue about invalid characters above). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275637",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
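The `! -name . -prune` idiom from the answer can be self-checked against a throwaway directory tree (all the names below are invented):

```shell
# `find . ! -name . -prune` should list exactly the depth-1 entries,
# i.e. the POSIX equivalent of GNU find's -mindepth 1 -maxdepth 1.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/a/deep" "$tmpdir/b"
touch "$tmpdir/top.txt" "$tmpdir/a/deep/buried.txt"

depth1=$(cd "$tmpdir" && find . ! -name . -prune | sort)
rm -rf "$tmpdir"
```

`.` itself fails `! -name .` so it is not printed; every depth-1 entry is printed and then pruned, so `buried.txt` never appears.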
275,649 | I am looking into the code displayed below and it checks the input if the row/column arguments start with either -r or -c . What does ${1:0:2} mean in this context? rowArgName="-r"colArgName="-c"if [ "${1:0:2}" != $rowArgName ] && [ "${1:0:2}" != $colArgName ]then echo $correctCmdMsg >&2 exit 1fi | It's a Substring Expansion (subclass of Parameter Expansion) pattern of the shell. The format is: ${parameter:offset:length} and indexing starts at 0. Say you have a variable foo , then ${foo:0:2} yields the first two characters (from position 0, the next 2). Example: $ foo=spamegg$ echo "${foo:0:2}"sp In your case, the first number, 1 , refers to variable name $1 , which is the first argument passed via command line (in the main program) or the first argument passed to the function. So in your case, "${1:0:2}" will: start extracting the substring starting from index 0, i.e. the first character, and continue up to the next two characters, so after the operation you will get the first two characters (indexed at 0 and 1) of the input string. The [ "${1:0:2}" != $rowArgName ] and [ "${1:0:2}" != $colArgName ] are checking if the output substring is equal to some other strings. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165213/"
]
} |
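Since `${parameter:offset:length}` is a bash-ism, the sketch below runs it under `bash -c` explicitly; the strings are illustrative only.

```shell
# First two characters of a variable, as in the answer's example:
first_two=$(bash -c 'foo=spamegg; printf %s "${foo:0:2}"')

# The same expansion on a positional parameter, mirroring ${1:0:2}
# ("_" fills $0; "-row" becomes $1):
arg_prefix=$(bash -c 'printf %s "${1:0:2}"' _ "-row")
```

`first_two` is `sp` and `arg_prefix` is `-r`, which is exactly what the `[ "${1:0:2}" != $rowArgName ]` test in the question compares against.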
275,678 | How to temporarily stop synchronizing of dirty data with disk, instead just keeping it in memory. Clarification: I want to sync it later. | If you want to prevent disk writes as much as possible , you can do this with Laptop Mode . One of the features of laptop mode is to allow a disk to spin down and to prevent the kernel from writing to it until memory gets full or until a timeout occurs (or until the disk needs to spin up in order to read data from it). See also the Arch Wiki . You'll presumably want to enable only disk spindown and not other features, and set LM_SECONDS_BEFORE_SYNC to a large value in the configuration . There's also noflushd , which does this specific job (and is a lot easier to configure than Laptop Mode since it doesn't do anything else). I've used it happily in the past, but it's been unmaintained for a while so I don't know if it still works on modern systems. Note that preventing disk writes isn't a defense against writing, it's a power saving (and noise reduction) measure. It's difficult to control exactly what might cause the disk to spin up, for example some data that needs to be read because it got wiped from the cache, and at that point the disk will be written to. If the reason you want to prevent disk writes is that you don't want the disk content to be modified under any circumstances , you need different tools. You can do that at the filesystem level with a union mount . Mount the disk read-only, create a directory on a tmpfs filesystem, and create a union mount of the two where the tmpfs directory is the read-write branch. See Mount a filesystem read-only, and redirect writes to RAM? for several examples of union mount software on Linux. This is how live installations of Linux with a persistence option work: the live installation is read-only, but the persistent data partition is union-mounted on top of it. 
You can also achieve a similar effect at the block device level, though I can't think of a compelling reason to prefer this solution. See GNU/Linux: overlay block device / stackable block device Alternatively, if the disk data is on an LVM volume or on a ZFS filesystem, you can make a snapshot to keep a copy of the data at a point in time while continuing to write to a logically separate device. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116512/"
]
} |
275,684 | I run the following series of commands to bring the cursor under the prompt on a shell terminal. $ NL=' # << press enter' # << press enter again$ PS1=${PS1}${NL} I have to do this every time I log in. How can I automate it? I tried adding these same statements in ~/.profile and restarted. No luck. EDIT: Here's how I did it. I added the following (my favorite bash prompt) line in ~/.bashrc (I created it). export PS1="===================\n\n\d \A \u@\H [\w]\n\\$ \[$(tput sgr0)\]" | Use NL=$'\n' . You also need to double-quote $NL when you use it (and probably $PS1 too, depending on what it contains...include it inside the double-quotes anyway). e.g. add to your ~/.bash_profile (or ~/.profile if you prefer): NL=$'\n'PS1="${PS1}${NL}" BTW, in the long run, I'll bet you get sick of how much valuable vertical terminal space is wasted by the extra newline. Screens tend to be much wider than they are tall (e.g. some common resolutions for a 16:9 aspect ratio are 1920x1080 or 2560x1440, while common resolutions for 16:10 are 1920x1200 or 2560x1600), so vertical screen space is rarer and more valuable. The more wasted space, the less useful info you can see on screen at once. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/156356/"
]
} |
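A small check of why `NL=$'\n'` works where a command substitution would not. ANSI-C `$'...'` quoting is a bash/ksh/zsh feature, so it is evaluated under `bash -c` here:

```shell
# $'\n' keeps its newline byte; $(printf '\n') loses it, because
# command substitution strips trailing newlines.
nl_len=$(bash -c 'NL=$'\''\n'\''; printf %s "$NL" | wc -c')
stripped_len=$(bash -c 'NL=$(printf "\n"); printf %s "$NL" | wc -c')
```

The first variable holds exactly one byte, the second is empty — which is why the answer recommends the `$'\n'` form for embedding a newline in `PS1`.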
275,707 | I've never encrypted files in my Linux distribution. Now I need to do that. I'm on Arch. I went to the documentation, it says that support of TrueCrypt is discontinued and after examining other libraries there I decided to use dm-crypt. But I can't figure out how to simply encrypt a file with it. It requires creating a partition or something like a container. I don't need that. How can I encrypt a file with dm-crypt? | dm-crypt is a transparent disk encryption subsystem. That being said, it's better suited to encrypt disks and partitions. It can encrypt files, but they have to be mapped as devices for this to work. If you want to encrypt only one file, GnuPG could be a better tool. Example: gpg -c filename See Also: nixCraft: Linux: HowTo Encrypt And Decrypt Files With A Password 7 Tools to Encrypt/Decrypt and Password Protect Files in Linux | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94302/"
]
} |
275,727 | I have to demonstrate a practical where I have to make my own module, add this module to a kernel's source and implement the system call. I'm using the 3.16 kernel on Ubuntu but it takes around 2 hours to install the kernel from source. Is it possible to remove some parts of the kernel (like unnecessary drivers, etc.) from the source to save time, as I'm not going to use this newly installed kernel for regular use? If yes, how? | As mentioned in comments, you should be building using something like make -j4 . Use a number equal to or slightly higher than the number of CPU cores you have. make localmodconfig The following instructions apply to building a kernel from upstream. Personally I find that simplest. I don't know how to obtain a tree with the ubuntu patches applied, ready to build like this. (1) Theoretically the way you build kernels in more reasonable timespans for testing is supposed to be cp /boot/config-`uname -r` .config you don't need to enable anything newer, so - only problem is this breaks if they renamed stuff: make oldnoconfig now disable all the modules not currently loaded. (Make sure you have all your usb devices you need plugged in...): make localmodconfig It worked for me recently, so might be useful. Not so well the previous time I tried it. I think I got it from about one hour down to ten minutes. Even after make localmodconfig it's still building crazy amounts of stuff I don't need. OTOH actually finding and disabling that stuff (e.g. in make xconfig ) takes a while too (and even longer if you mistakenly disable something you do need). I guess it's worth knowing it exists, it's just not guaranteed to make you happy. (2) I don't think it should take two hours to build every modification to your "module". (It actually needs to be a builtin if you're implementing a new system call). make will just recompile your modified files and integrate it into a kernel binary. 
So in case getting the Kconfig right is too much trouble, then maybe an initial two-hour build is not too bad. You might be having this problem if you are building with a distribution kernel source package. (You can switch to manual builds, or you might be able to trick the distro source package into using ccache ). Or, your modifications might be modifying a header file which is unfortunately included by many many source files. Even so, it might be useful to make custom Kconfigs, e.g. much smaller Kconfigs, if you want to port to different kernel versions, do git bisect , test different build options, etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152971/"
]
} |
275,728 | I would like for all my ls commands that display a date, such as ls -l , to print the date column in a format of my choosing. Currently, I manually set this every time with --time-style . Is there any way of permanently setting this to, say, long-iso (as opposed to issuing ls -l --time-style="long-iso" on every invocation)? | If you define an alias such as: alias ls='ls --time-style=long-iso' then all ls invocations which end up displaying dates will use that. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165270/"
]
} |
275,729 | Jimmij wrote : I would probably end up using temporary directory in this case: for file in [[:digit:]]*.png; do echo mv $file tmp/$(printf %04d $((10#${file%.png}+1))).pngdone The important part is 10#N which forces bash to interpret 000N as just N , otherwise leading zeros denotes octal numbers. Is 10#N part of arithmetic expansion, or something else? Is this mentioned in the Bash manual or POSIX specification? I don't find it. | 10#N or a general form [base#]n where 2 <= base <= 64 will interpret n as the number in that base. The bash manual, section Shell Arithmetic mentioned this. Note that this feature is not specified by POSIX, only available in bash , ksh and its derivatives, zsh with base between 2 and 36. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
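A few worked values for the `base#n` form described above (a bash/ksh feature, so evaluated under `bash -c`):

```shell
dec=$(bash -c 'echo $((10#0008 + 1))')   # leading zeros forced to decimal: 9
bin=$(bash -c 'echo $((2#1010))')        # binary 1010 -> 10
hexa=$(bash -c 'echo $((16#ff))')        # hexadecimal ff -> 255
```

Without the `10#` prefix, `$((0008 + 1))` is an error in bash, because a leading zero means octal and `8` is not a valid octal digit — which is exactly the trap the quoted command avoids.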
275,757 | I've gotten used to using grep for my command line searches and wanted to know how to successfully do a search using the result of another search. Here's my attempt, where I am looking for 'tool' within my result: grep tool | grep -rl embed= This returns some results and then the console hangs. Are there any simple/elegant solutions to this? | Pipelines run from left to right. More precisely, the processes run in parallel, but the data flows from left to right: the output of the command on the left becomes the input of the command on the right. Here the command on the left is grep tool . Since you're passing a single argument to grep , it's searching in its standard input. Since you haven't redirected the standard input, grep is reading from the terminal: it's waiting for you to type. To search in a file, use grep tool path/to/file | … To search in a directory recursively, use grep -r tool path/to/directory | … You can filter the results to list only the lines that contain embed= . Drop the -l and -r options, they make no sense when the input is coming from standard input. grep -r tool path/to/directory | grep 'embed=' This lists lines containing both tool and embed= (in either order). An alternative method with simpler plumbing would be to do a single search with an or pattern; this is always possible, but if the patterns can overlap (not the case here), the pattern can get very complicated. grep -E -r 'tool.*embed=|embed=.*tool' path/to/directory If you wanted to list files containing both tool and embed= , you'd need a different command structure, with the first grep listing file names ( -l ) and the second one receiving those file names as arguments , not as input. Converting standard input into command line arguments what the xargs command is for. grep -lZ -r tool path/to/directory | xargs -0 grep -l 'embed=' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146412/"
]
} |
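The two shapes from the answer — filtering matching *lines* with two greps versus listing *files* that contain both words — behave differently, which the throwaway files below illustrate (file names and contents invented):

```shell
tmpdir=$(mktemp -d)
printf 'tool embed=yes\nonly tool here\n' > "$tmpdir/both.txt"
printf 'embed=only\n'                     > "$tmpdir/embed.txt"

# Lines containing both words (grep -c counts the matching lines):
both_lines=$(grep -r tool "$tmpdir" | grep -c 'embed=')

# Files containing both words, possibly on different lines
# (-lZ / xargs -0 keeps unusual file names safe):
match=$(grep -rlZ tool "$tmpdir" | xargs -0 grep -l 'embed=')
match=$(basename "$match")
rm -rf "$tmpdir"
```

Only one line contains both words, but `both.txt` as a whole contains both, so the file-level pipeline finds it even though its second line mentions only `tool`.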
275,772 | Wouldn't it be nice if I could just get my system set up right after I install it, with all of the themes and programs and SDKs that I want, and then just save that snapshot? Like a backup, except I'd be able to boot that image and install it on whatever machine I'd like to? I'm pretty sure this is possible. In fact, I know that it is to some extent . https://help.ubuntu.com/community/LiveCDCustomizationFromScratch But is there a better way? Does someone know magic that I can use? Can't I just save my system state so that when I install some unsupported server software and I brick my system, I don't need to spend another hour setting it up? | I use both CloneZilla and Back In Time to do system and data back-ups respectively. The advantage of CloneZilla over built-in tools like dd is that it uses its own partclone which creates very small images (it recognizes sparse files, can use a number of compression utilities, ...) and falls back to ddrescue which allows reading of damaged hard drives! (Nice to have that very last back-up before the HDD finally gives up completely.) You should however have (at a minimum) a separate / and /home if you want to easily differentiate between your OS and user config files. Nothing is as flexible, robust and fast as CloneZilla to do full off-line image/disk back-ups (and I'm always reading back-up manuals if someone mentions its favourite back-up tool to see whether they've got something better then I have) If you put CloneZilla onto a 512 MByte bootable partition of an external USB HDD, you can just boot it on any machine and restore any backup you've made to the partition taking the rest of that same HDD. Warning: if you restore the same system back-up to multiple machines all these machines will be clones of one another with the same host name, time zone, IP (if static), ... so all these need to be personalised after "restore"... 
Alternatively , I have a bootable USB SLC stick that has a full install of Linux (in my case lubuntu) without any proprietary drivers and a leading FAT partition, no swap partition but a swap file inside the / partition that I can boot on any machine (so far) and that I use to disinfect Windows machines (or just work on someone else's machine without touching their data if they're paranoid). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/275772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165306/"
]
} |
275,794 | I read that it is bad to write things like for line in $(command) , the correct way seem to be instead: command | while IFS= read -r line; do echo $line; done This works great. But what if what I want to iterate on is the contents of a variable , not the direct result of a command? For example, imagine that you create the following file quickfox : The quick brownfoxjumps\ over -thelazy ,dog. I would like to be able to do something like this: # This is just for the example,# I could of course stream the contents to `read`variable=$(cat quickfox);while IFS= read -r line < $variable; do echo $line; done; # this is incorrect | In modern shells like bash and zsh, you have a very useful `<<<' redirector that accepts a string as an input. So you would do while IFS= read -r line ; do echo $line; done <<< "$variable" Otherwise, you can always do echo "$variable" | while IFS= read -r line ; do echo $line; done | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/275794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45354/"
]
} |
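Counting the lines of the question's sample text both ways — the bash-only here-string form and the portable pipe form (whose loop body runs in a subshell):

```shell
variable=$(printf 'The quick brown\nfox\njumps\\ over -the\nlazy ,dog.')

# bash here-string (<<< is not POSIX, so run it under bash explicitly):
count_hs=$(bash -c 'n=0; while IFS= read -r line; do n=$((n+1)); done <<< "$1"; echo "$n"' _ "$variable")

# POSIX pipe:
count_pipe=$(printf '%s\n' "$variable" | { n=0; while IFS= read -r line; do n=$((n+1)); done; echo "$n"; })
```

Both loops see four lines, and `read -r` keeps the backslash in `jumps\ over -the` intact instead of treating it as an escape.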
275,824 | Is there a generic way of running a bash script and seeing the commands that result, but not actually run the commands - i.e. a "dry run"/simulator of sorts? I have a database install script (actually "make install" after running ./configure and make) that I wish to run, but it's installing all sorts of stuff that I don't want. So I'd like a way to see exactly what it's going to do before I run it for real - maybe even run the commands by hand instead. Is there any utility that can perform such a task (or anything related/similar)? | GNU make has an option to do a dry-run: ‘-n’ ‘--just-print’ ‘--dry-run’ ‘--recon’ “No-op”. Causes make to print the recipes that are needed to make the targets up to date, but not actually execute them. Note that some recipes are still executed, even with this flag (see How the MAKE Variable Works). Also any recipes needed to update included makefiles are still executed. So for your situation, just run make -n install to see the commands that make would execute. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/275824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99494/"
]
} |
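A throwaway one-rule Makefile shows the dry run in action (this assumes `make` is installed; the target name is invented):

```shell
tmpdir=$(mktemp -d)
printf 'stamp:\n\ttouch stamp\n' > "$tmpdir/Makefile"   # recipe line needs a real tab (\t)

dry=$(cd "$tmpdir" && make -n stamp)                    # prints the recipe...
if [ -e "$tmpdir/stamp" ]; then created=yes; else created=no; fi
rm -rf "$tmpdir"
```

`make -n` echoes `touch stamp` but never creates the `stamp` file, which is the behavior the manual excerpt above describes.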
275,827 | Is there a terminal command that merges all open terminal windows into one window with tabs? Been searching all over the place, but have yet to find any solutions. | GNU make has an option to do a dry-run: ‘-n’ ‘--just-print’ ‘--dry-run’ ‘--recon’ “No-op”. Causes make to print the recipes that are needed to make the targets up to date, but not actually execute them. Note that some recipes are still executed, even with this flag (see How the MAKE Variable Works). Also any recipes needed to update included makefiles are still executed. So for your situation, just run make -n install to see the commands that make would execute. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/275827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165339/"
]
} |
275,883 | I have a symbolic link for a directory, e.g ln -s /tmp /xxx Now when I type /xx and press tab key, bash would complete the line to /xxx If I press it again it become /xxx/ Now, how can I ask bash to complete /xx to /xxx/ automatically (provided that there's only one match) | Add the following line to your ~/.inputrc file: set mark-symlinked-directories on See "Readline Init File Syntax" in the Bash Reference Manual for more on this topic. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/275883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
275,907 | My computer says: $ uptime 10:20:35 up 1:46, 3 users, load average: 0,03, 0,10, 0,13 And if I check last I see: reboot system boot 3.19.0-51-generi Tue Apr 12 08:34 - 10:20 (01:45) And then I check: $ ls -l /var/log/boot.log -rw-r--r-- 1 root root 4734 Apr 12 08:34 boot.log Then I see in /var/log/syslog the first line of today being: Apr 12 08:34:39 PC... rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="820" x-info="http://www.rsyslog.com"] start So all seems to converge in 8:34 being the time when my machine has booted. However, I wonder: what is the exact time uptime uses? Is uptime a process that launches and checks some file or is it something on the hardware? I'm running Ubuntu 14.04. | On my system it gets the uptime from /proc/uptime : $ strace -eopen uptimeopen("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3open("/lib/libproc-3.2.8.so", O_RDONLY|O_CLOEXEC) = 3open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3open("/proc/version", O_RDONLY) = 3open("/sys/devices/system/cpu/online", O_RDONLY|O_CLOEXEC) = 3open("/etc/localtime", O_RDONLY|O_CLOEXEC) = 3open("/proc/uptime", O_RDONLY) = 3open("/var/run/utmp", O_RDONLY|O_CLOEXEC) = 4open("/proc/loadavg", O_RDONLY) = 4 10:52:38 up 3 days, 23:38, 4 users, load average: 0.00, 0.02, 0.05 From the proc manpage : /proc/uptime This file contains two numbers: the uptime of the system (seconds), and the amount of time spent in idle process (seconds). The proc filesystem contains a set of pseudo files. Those are not real files, they just look like files, but they contain values that are provided directly by the kernel. Every time you read a file, such as /proc/uptime , its contents are regenerated on the fly. The proc filesystem is an interface to the kernel. 
In the linux kernel source code of the file fs/proc/uptime.c at line 49 , you see a function call: proc_create("uptime", 0, NULL, &uptime_proc_fops); This creates a proc filesystem entry called uptime (the procfs is usually mounted under /proc ), and associates a function to it, which defines valid file operations on that pseudo file and the functions associated to them. In case of uptime it's just read() and open() operations. However, if you trace the functions back you will end up here , where the uptime is calculated. Internally, there is a timer-interupt which updates periodically the systems uptime (besides other values). The interval, in which the timer-interupt ticks, is defined by the preprocessor-macro HZ , whose exact value is defined in the kernel config file and applied at compilation time. The idle time and the number of CPU cycles, combined with the frequency HZ (cycles per second) can be calculated in a number (of seconds) since the last boot. To address your question: When does “uptime” start counting from? Since the uptime is a kernel internal value, which ticks up every cycle, it starts counting when the kernel has initialized. That is, when the first cycle has ended. Even before anything is mounted, directly after the bootloader gives control to the kernel image. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/275907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40596/"
]
} |
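The same pseudo-file that `uptime` opens can be read directly from a script (Linux-only, since it depends on procfs being mounted):

```shell
# /proc/uptime holds two space-separated numbers, "uptime idle",
# both in seconds with a fractional part.
read -r up idle < /proc/uptime
up_seconds=${up%.*}    # strip the fractional part
```

Because the file's contents are regenerated on every open, two successive reads will generally return different values.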
275,915 | When working with remote file systems such as sshfs or smbfs , it might happen that the file systems become stale due to network problems. To check if the mount is stale or not, I usually use the command ls to see if I can list the contents of the remote mounts. When these remote mounts are stale, the ls command just waits for a long time until outputting, after some minutes, something along the lines of: ls: cannot access '/mnt/remote': Input/output error Instead of waiting for this error, is there a way to stop the ls command from within the same bash session ? The regular Control+C does not seem to do the job. Closing the bash shell works, but this is undesirable. Any alternatives? | No, since ls (or any other file-operating process) is in the process state "uninterruptible sleep", there is nothing that can interrupt it, even SIGKILL can't. Maybe you can lower the timeout values when mounting remote filesystems. sshfs has ServerAliveInterval and ServerAliveCountMax . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73475/"
]
} |
275,937 | I'm active Apple user, I have MacBook at home, but Linux PC on the work. One of most annoying thing for me is sync my personal data between home, work and mobile devices (like documents etc). At the moment my main cloud storage is iCloud (it is integrate to OS X). Today I tried to install iCloud for Windows via Wine, but it didn't work. Is there any possible ways to configure Wine or other software? I know about icloud.com, I need automatic sync as Dropbox. | Aside from the iCloud services with more standard protocols (Mail, Contacts, Calendar) it doesn't look like there's a way to provide "in sync" access to iCloud Drive, aside from visiting iCloud Drive in a browser… Q: How do I use icloud with Linux? (Apple Discussions) Also, not that this helps much for Linux (Ubuntu), but Apple does provide a Windows (7+) client: iCloud for Windows | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165414/"
]
} |
275,942 | Both of the below commands gives the same output,however which one is the correct method to use it? grep ^'\<that\>' file.txt Searches for the line which has that as the first word grep '^\<that\>' file.txt Same output as the previous one. Whats the difference in placing ^ anchor before the quotes or inside the quotes?Which one is the appropriate one? | Aside from the iCloud services with more standard protocols (Mail, Contacts, Calendar) it doesn't look like there's a way to provide "in sync" access to iCloud Drive, aside from visiting iCloud Drive in a browser… Q: How do I use icloud with Linux? (Apple Discussions) Also, not that this helps much for Linux (Ubuntu), but Apple does provide a Windows (7+) client: iCloud for Windows | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/275942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154292/"
]
} |
276,005 | I use MATE on Fedora. At some point, the behavior of scrollbars on many applications has changed. When I click below a scrollbar, now the scrollbar jumps to where I clicked. Previously, it used to page down by one page (if I clicked anywhere below the current location of the scrollbar). I preferred the old behavior. When on a very long page, the new behavior tends to make the scrollbar almost unusable: I can't control where I click precisely enough to control where the page jumps to. Is there a way to regain the previous behavior? In other words, is there a way to make clicking on a scrollbar, below the current location of the scroll, to cause the window to go down by one page, rather than jumping to where I clicked? This difference is most noticeable in Firefox, but is not limited solely to Firefox; it affects other applications, too. | I had the same issue on Firefox 48, and this answer worked for me: Create ~/.config/gtk-3.0/settings.ini and add [Settings]gtk-primary-button-warps-slider = false I'm using XFCE, but Firefox is reading that setting for some reason. It also worked with other Gnome 3 applications, such as gnome-todo . After creating that file, I only had to restart Firefox and its behavior was modified (no need to reboot). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
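The fix above is just a two-line INI file; here is a sketch of creating it from a script. The temporary directory stands in for $HOME so the example does not touch a real configuration (adjust the path for real use):

```shell
home=$(mktemp -d)            # stand-in for "$HOME" in this sketch
mkdir -p "$home/.config/gtk-3.0"
cat > "$home/.config/gtk-3.0/settings.ini" <<'EOF'
[Settings]
gtk-primary-button-warps-slider = false
EOF
setting=$(grep '^gtk-primary-button-warps-slider' "$home/.config/gtk-3.0/settings.ini")
```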
276,037 | A short while ago I asked this question - Does "apt-get -s upgrade" or some other apt command have an option to list the repositories the packages will be downloaded from? , about how to list the repositories packages would be upgraded from.I have now learned another command, apt-cache madison which will list the repos a package will be installed from. Why a name like madison which is in no way related to the task at hand? | The madison command was added in apt 0.5.20 . It produces output that's similar to a then-existing tool called madison which was used by Debian server administrators. Several of these tools had names which were common female forenames, I don't know if there's a specific history behind that. The madison tool no longer exists but there's a partial reimplementation called madison-lite (querying a local package archive, like the original), as well as a script called rmadison in devscripts which queries remote servers. apt-cache madison is not emphasized because most of what it displays is also available through apt-cache showpkg and apt-cache policy . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/276037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
276,065 | I am encountering a problem in which mounting a remote CIFS server without an fstab entry works, but mounting through fstab does not. The following command works: $ sudo mount -t cifs //w.x.y.z/Home$ /mnt/dir -o domain=A,username=B,password='C',sec=ntlmssp,file_mode=0700,dir_mode=0700 However, if I instead add the following line to /etc/fstab and try to mount by the mount command (e.g., mount -a or mount /mnt/dir ), I receive the error listed below: $ tail -n 1 /etc/fstab//w.x.y.z/Home$ /mnt/dir cifs domain=A,username=B,password='C',sec=ntlmssp,file_mode=0700,dir_mode=0700 error: $ sudo mount /mnt/csifmount error(13): Permission deniedRefer to the mount.cifs(8) manual page (e.g. man mount.cifs) Explicitly setting dump and fsck pass order to 0 does not help. Both commands seem to do the same thing | When you type the mount command, the part password='C' is first handled by the shell and becomes password=C before it gets to the mount command. This is not done with fstab entries, so you must remove the single quotes. If your password contains special characters you can replace them by their octal code, in particular \040 for space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14960/"
]
} |
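Since fstab fields cannot contain spaces, and the answer mentions the \040 escape, a small hypothetical helper (the function name is mine, not from the original answer) can rewrite spaces in a password or path into octal form for /etc/fstab:

```shell
# Replace each space with the octal escape \040 understood by mount/fstab.
fstab_escape() { printf '%s' "$1" | sed 's/ /\\040/g'; }

escaped=$(fstab_escape 'pass with spaces')
```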
276,097 | I have file: xxx.lst with values: 111222333 I just need to make one line with: 111 222 333 into a variable or standard output. | paste is probably the best tool for this job: $ paste -sd ' ' file111 222 333 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165435/"
]
} |
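The answer can be reproduced end to end: build the three-line file and join it with paste (the -s and -d options used here are POSIX):

```shell
f=$(mktemp)
printf '111\n222\n333\n' > "$f"
joined=$(paste -sd ' ' "$f")   # serialize all lines, space-delimited
```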
276,142 | I am trying to create a file in a directory but I am getting: `touch`: cannot touch ‘test’: Permission denied Here are my commands: [user@xxx api]$ ls -ltotal 184...drwxrwxr-x 2 root root 4096 2016-04-12 14:38 public ..[user@xxx api]$ cd ./public[user@xxx public]$ touch testtouch: cannot touch ‘test’: Permission denied | You can't edit the contents of the public directory if you don't have write and execute access. You indicate you are attempting to create a new file. If the test file doesn't already exist in public , touch will attempt to create a new file. It cannot do this without the write and execute permissions over the parent directory. Execute is required to traverse the directory; write is required to add the inode entry for the new file. Apparently, you don't have one or both of these permissions. If the test file does already exist in public , touch will, by default, update the modification time of the file. Only write access to the file is required for this, as the modification date/time is stored in the file's inode. If the file already exists, you will need to inspect the file's permissions using a command like ls -l public/test to determine if you have write access. The permissions bitmask on the directory, rwxrwxr-x , means: the root user, i.e. the owner of the directory, has write privileges to the directory as indicated by the first rwx block. This user can also read the directory (the r bit) and traverse it to access its contents (the x bit). members of the root group, i.e. the group on the directory, who are not themselves the root user, also have similar privileges to read, write and traverse the directory as indicated by the second rwx block All other users only have read and execute rights, as indicated by the last r-x block. As noted, for directories, execute permissions allow you to traverse that directory and access its contents. See this question for more clarity on this. How do I get permissions? 
You will need to talk to your system administrator (which might be you!) to do one of the following: Make you the owner of the public/ directory using a command like chown user public/ . This will be suitable if you are the only user who will need to have edit rights. Create a new group with a suitable name , perhaps publiceditors , and set this as the group on the public/ directory using a command like chgrp publiceditors public/ . Ensure you and any other users who require the ability to modify the directory are listed as members of the group. This approach works where multiple users need edit capability. Make your user account a member of the root group (not something I would recommend). Provide you with access to log in or masquerade as root , such as with sudo or su with the root password Change the rights on the directory to grant all users write permissions , using a command like chmod o+w public . Be aware that this gives everyone on the box the ability to edit and delete files in the public directory.* You may not want this! * In the absence of other access control enforcement, such as mandatory access control in the kernel. What do read , write and execute permissions mean in the context of a directory? Assuming you're on a Linux box, on a directory, a read permission bit allows you to read the directory listing. The write permission bit allows you to update the directory listed, which is required for creating a file*, editing the name of a file, unlinking (deleting) a file. The execute bit allows you to traverse the directory, access its files etc. More information on Linux directory permissions . * Actually, you're linking a file into the directory. Most times you will do this at the point of file creation, but there are more complex examples. For example, making a hard link to a file which originally existed elsewhere in the file system will require write access to the target directory of the link, despite the fact you're not creating a new file. 
Why write access to the directory? You need to be able to write to the directory to add a reference to the relevant inode for the file you are adding. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135235/"
]
} |
276,168 | I'm reading Wikipedia about X11 and it says that: In its standard distribution it is a complete, albeit simple, display and interface solution which delivers a standard toolkit and protocol stack for building graphical user interfaces on most Unix-like operating systems... But later it says that: X primarily defines protocol and graphics primitives - it deliberately contains no specification for application user-interface design, such as button, menu, or window title-bar styles. So, does X11 provide widgets like a button or a window panel/frame, etc or not? What is a graphic primitive? What does X11 provide exactly? It is also stated that: X does not mandate the user interface; individual client programs handle this. Programs may use X's graphical abilities with no user interface. What does this mean? | Like many words, “X11” can have multiple meanings. “X11” is, strictly speaking, a communication protocol. In the sentences “X primarily defines protocol and graphics primitives …” and “X does not mandate the user interface …”, that's what X refers to. X is a family of protocols, X11 is the 11th version and the only one that's been in use in the last 25 years or so. The first sentence in your question refers to a software distribution which is the reference implementation of the X11 protocol. The full name of this software distribution is “the X Window System”. This distribution includes programs that act as servers in the X11 protocol, programs that act as clients in the X11 protocol, code libraries that contain code that makes use of the X11 protocol, associated documentation, resources such as fonts and keyboard layouts that can be used with the aforementioned programs and libraries, etc. Historically , this software distribution was made by MIT; today it is maintained by the X.Org Foundation . The X11 protocol allows applications to create objects such as windows and use basic drawing primitives (e.g. fill a rectangle, display some text). 
Widgets like buttons, menus, etc. are made by client libraries. The X Window System includes a basic library (the Athena widget set ) but most applications use fancier libraries such as GTK+ , Qt , Motif , etc. Some X11 programs don't have a graphical user interface at all, for example command line tools such as xset , xsel and xdotool , key binding programs such as xbindkeys , etc. Most X11 programs do of course have a GUI. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159975/"
]
} |
276,173 | Command on AIX is: [root@hx042:/home/user1]$ lqueryvg -Atp hdiskpower130516-1396 lqueryvg: The physical volume hdiskpower13, was not found in the system database. Max LVs: 256 PP Size: 30 Free PPs: 0 LV count: 3 PV count: 3 Total VGDAs: 3 Conc Allowed: 0 MAX PPs per PV: 1016 MAX PVs: 32 Quorum (disk): 1 Quorum (dd): ??????? Auto Varyon ?: 1 Conc Autovaryon 0 Varied on Conc: 0Logical: 00f62b5c00004c000000014de7f073b1.1 prekod 1 00f62b5c00004c000000014de7f073b1.2 prekre 1 00f62b5c00004c000000014de7f073b1.3 prekcf 1Physical: 00f62b5ceb80c074 1 0 00f62b5ceb76311b 1 0 00f62b5ceb790075 1 0 Total PPs: 309 LTG size: 128 HOT SPARE: 0 AUTO SYNC: 0 VG PERMISSION: 0 SNAPSHOT VG: 0 IS_PRIMARY VG: 0 PSNFSTPP: 4352 VARYON MODE: ??????? VG Type: 0 Max PPs: 32512Mirror Pool Str n Sys Mgt Mode: ??????? VG Reserved: ??????? PV RESTRICTION: ??????? Infinite Retry: 2 Varyon State: 0 Disk Block Size 512 I need only these values out: prekodprekreprekcf I tried: [root@hx042:/home/user1]$ lqueryvg -Atp hdiskpower13|sed -n -e '/Logical/,/Physical/ p' 0516-1396 lqueryvg: The physical volume hdiskpower13, was not found in thesystem database.Logical: 00f62b5c00004c000000014de7f073b1.1 prekod 1 00f62b5c00004c000000014de7f073b1.2 prekre 1 00f62b5c00004c000000014de7f073b1.3 prekcf 1 Physical: 00f62b5ceb80c074 1 0 and now I'm stuck because Logical is on the same line as the first value I need, and there is also this unavoidable error message, which is not useful at this point and which I also don't need. | There are two separate issues here. First, the 0516-1396 message is written to standard error, not standard output, so it never reaches your pipeline's text; just discard it with 2>/dev/null . Second, rather than slicing the Logical/Physical block with sed and then fighting the Logical: prefix, let awk print the second-to-last field of every line from Logical: up to (but not including) Physical: ; that also handles the value that sits on the Logical: line itself: lqueryvg -Atp hdiskpower13 2>/dev/null | awk '/Physical:/{exit} f{print $(NF-1)} /Logical:/{f=1; print $(NF-1)}' which prints prekod prekre prekcf one per line. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165435/"
]
} |
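The extraction asked about above can be rehearsed offline by inlining the sample output. Note that the 0516-1396 diagnostic arrives on stderr, hence 2>/dev/null in real use; the awk program (an illustrative sketch, with field positions assumed from the sample shown) prints the second-to-last field between Logical: and Physical::

```shell
sample='Logical:        00f62b5c00004c000000014de7f073b1.1   prekod 1
                00f62b5c00004c000000014de7f073b1.2   prekre 1
                00f62b5c00004c000000014de7f073b1.3   prekcf 1
Physical:       00f62b5ceb80c074                2   0'

# Real invocation would be:  lqueryvg -Atp hdiskpower13 2>/dev/null | awk ...
names=$(printf '%s\n' "$sample" |
  awk '/Physical:/ { exit }             # stop before the Physical section
       f           { print $(NF-1) }    # continuation lines of Logical
       /Logical:/  { f = 1; print $(NF-1) }')
```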
276,237 | I have an application written in C++ on Linux (Ubuntu 12.14), which decodes audio and finds the peak. This calculation is running on a background thread. From the main UI thread, I am calling the background thread several times. When this background thread runs, I see about 100% CPU utilization; that is, only the first core's usage goes 100%, and the remaining 3 cores are idle (quad core processor). So is it good behavior for the application to utilize 100% CPU? I read some post which states, "Usually it's a good thing for a process to use 100% of the CPU. It means it finishes sooner." Another statement I've read is, "75% to 100% cpu usage is not too bad if you're getting this under full load or when there's a major application running; however if this is a reading when idle or when you are not using the PC/laptop then this is a worry." | It depends on whether your application is a computational one (like this) or interactive. For a computational application, full utilisation of the CPU(s) is your goal, as that means that the result is ready sooner. Anything that causes that utilisation to go down is an opportunity for improvement (e.g. waiting on I/O). For an interactive application, any time used in CPU is time that's not spent ready to respond to user input. You would like your usage to be low. Some applications, such as multimedia editors, are both computational and interactive. The good ones divide the work into different threads, so that they can be responsive to interaction, yet achieve high throughput. This appears to be what you're doing. One thing you might want to consider is using more threads for your workload (assuming it is divisible) so that you are keeping more cores busy with your computation. If some of them are idle, that's a wasted resource! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108802/"
]
} |
276,238 | I'm running Debian Jessie. I've upgraded the system with: sudo apt-get updatesudo apt-get upgrade Then I try to update flash-plugin with: sudo update-flashplugin-nonfree --install But the command runs without output and the plugin is not updated. sudo update-flashplugin-nonfree --status gives: Flash Player version installed on this system : 11.2.202.577Flash Player version available on upstream site: 11.2.202.616flash-mozilla.so - auto mode link currently points to /usr/lib/flashplugin-nonfree/libflashplayer.so/usr/lib/flashplugin-nonfree/libflashplayer.so - priority 50Current 'best' version is '/usr/lib/flashplugin-nonfree/libflashplayer.so'. If I reinstall flashplugin-nonfree with: sudo apt-get install --reinstall flashplugin-nonfree the same problem occurs. What is the problem? == Edit (June 7, 2016 )== This is a recurring problem, running: sudo update-flashplugin-nonfree --installsudo update-flashplugin-nonfree --status gives Flash Player version installed on this system : 11.2.202.616Flash Player version available on upstream site: 11.2.202.621 This problem is already reported . I know that it's never safe to run flash player but debian distribution should help to mitigate this risk for those who need to rely on flash player. | The update signatures aren't available yet ; flashplugin-nonfree checks these to make sure the files being installed are OK ( i.e. you're not downloading corrupted files). A bug already exists for this , you need to wait for the maintainer to react (or install the plugin manually). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74926/"
]
} |
276,253 | Will grep [0-9] work in the same way as grep [:digit:] ? | No, [0-9] is not the same as [:digit:] . [0-9] matches the numerals 0 to 9. [:digit:] matches 0 to 9, and numerals in non-western languages as well (e.g. Eastern Arabic). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165647/"
]
} |
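One practical footnote worth testing: inside grep the class has to be written with doubled brackets, [[:digit:]], since a bare [:digit:] is only valid inside a bracket expression (GNU grep typically rejects the bare form with an explanatory error). In the common C/POSIX locale both patterns below count the same lines:

```shell
f=$(mktemp)
printf 'abc\n123\nx9y\n' > "$f"
n1=$(grep -c '[0-9]' "$f")        # lines containing an ASCII digit
n2=$(grep -c '[[:digit:]]' "$f")  # same count in the C/POSIX locale
```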
276,263 | How do I write a bash script that goes through each directory inside a parent_directory and executes a command on specific file. The directory structure is as follows: Parent_dir/ dir1/ acc.bam dir2/ acc.bam dir3/ acc.bam... around 30 directories This is the command I want to use : java8 -jar /picard.jar CollectRnaSeqMetrics REF_FLAT=/refFlathuman.refflat STRAND_SPECIFICITY=NONE I=acc.bam O=output | The usual idiom is for d in Parent_dir/*/do (cd "$d" && $command)done The for loop executes once for each directory directly within Parent_dir . For each of those directories, a sub-shell is spawned; in the sub-shell, we attempt to change into that directory (which might fail, e.g. if we have insufficient permission), and if we succeed, then execute the command. Whether or not we succeeded, the cd has no effect on the parent shell, so we don't need to worry about being in the wrong place there. If you want to make it more robust, you might (cd "$d" && test -r acc.bam && $command) to ensure that acc.bam exists and is readable in that directory. You might also add a test -w . to avoid trying to run the command in directories that are not writable. P.S. None of the above is specific to Bash; you can use /bin/sh for it quite portably. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163629/"
]
} |
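A self-contained run of the idiom from the answer, with mktemp standing in for Parent_dir and echo standing in for the java command (both substitutions are mine):

```shell
parent=$(mktemp -d)
mkdir -p "$parent/dir1" "$parent/dir2" "$parent/dir3"
touch "$parent/dir1/acc.bam" "$parent/dir2/acc.bam"   # dir3 lacks the file

command='echo processing'
visited=$(for d in "$parent"/*/; do
  # Sub-shell: cd has no effect on the parent; skip dirs without acc.bam.
  (cd "$d" && test -r acc.bam && $command "$(basename "$d")")
done)
```

Only the two directories containing acc.bam are processed.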
276,279 | I have a Linux Red Hat machine, version 6.5. I rebooted the machine and am working in single-user mode; I then set up the network on eth0 with a default gateway address in /etc/sysconfig/network-scripts/ifcfg-eth0 , but for some reason the gateway address does not appear in netstat -rn after service network restart . My question: can we set a default gateway address and start the network while in single-user mode? | Single-user mode by definition does not implement networking at startup. To put it in sysV runlevel terms, you want runlevel 2 (local multi-user with networking). You can switch to this with telinit 2 . The standard runlevel definitions are: 0 - Halt the system 1 - Single-user mode 2 - Multi-user with networking, but no network services (e. g. NFS) 3 - Multi-user with networking and services 4 - Undefined 5 - Multi-user with networking, services, and GUI (e. g. Xorg) 6 - Reboot the system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153544/"
]
} |
276,306 | I understand that with Unix file permissions, there's "user", "group", and "world" octets . For the sake of this discussion, let's assume that setuid/sticky bits don't exist. Consider the following example: $ echo "Hello World" > strange$ chmod 604 strange$ ls -l strange-rw----r-- 1 mrllama foo 12 Apr 13 15:59 strange Let's assume that there's another user, john , who is a member of my group, foo . What permissions does John have regarding this file? Does the system go with the most specific permission match (i.e. John isn't owner, but he's in the foo group, so use the permissions for the "group" octet)? ...or does it go by the most permissive of the octets that apply to him (i.e. John meets the criteria for "group" and "world", so it goes with the more permissive of the two)? Bonus questions: What if the permissions were instead 642 ? Can John only read, only write, or both? Are there any reasons to have strange permissions like 604 ? | When determining access permissions using Unix-style permissions, the current user is compared with the file's owner, then the group, and the permissions applied are those of the first component which matches. Thus the file's owner has the owner's permissions (and only those), members of the file's group have the group's permissions (and only those), everyone else has the "other users'" permissions. Thus: John has no permissions for this file. The most specific permission match wins, not the most permissive (access rights aren't cumulative). With permissions 642 , John could read the file. There are reasons to give permissions such as 604 : this allows the group to be excluded, which can be handy in some situations — I've seen it on academic systems with a students group, where staff could create files accessible to anyone but students. root has access to everything, regardless of the permissions defined on the file. For more complex access control you should look into SELinux and POSIX ACLs . 
(SELinux in particular can even limit what root has access to.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165677/"
]
} |
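The 604 example can be checked mechanically: create a file, apply the mode, and read the symbolic form back (GNU coreutils stat assumed):

```shell
f=$(mktemp)
chmod 604 "$f"
symbolic=$(stat -c '%A' "$f")   # GNU stat; BSD/macOS would use stat -f '%Sp'
```

The owner gets rw-, the group gets nothing at all, and everyone else gets r--.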
276,326 | I was using terminator for my terminal emulator, but I'm seeing huge performance issues with it. ie, launching a new terminal session takes 20-30 seconds on my netbook. gnome-terminal is better, but still takes 10-15 seconds to launch. xterm however takes only a second or two. I'm fine using it, but I absolutely loathe the way it handles copy/paste. How do I configure xterm to not copy something when highlighted via mouse (since I use a clipboard manager and I don't want my recently used entries to get nuked every time I select something) and use shift+ctrl+c/shift+ctrl+v for copy/paste? OS is Debian 8, window manager is fluxbox. Thanks! | I was able to use CTRL SHIFT C and CTRL SHIFT V for Copy-and-Paste with XTerm by adding XTerm*vt100.translations: #override \ Shift Ctrl <Key> C: copy-selection(CLIPBOARD) \n\ Shift Ctrl <Key> V: insert-selection(CLIPBOARD) to my ~/.Xresources and reloading it with xrdb -merge ~/.Xresources . Documentation of this feature can be found in xterm(1) , “Custom Key Bindings” . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65344/"
]
} |
276,392 | I configured and compiled a Linux kernel with the nouveau driver built into the kernel, i.e. with <*> as opposed to <M> when doing make menuconfig inside the Linux kernel source directory. Now, I intend to use another driver rather than nouveau . If nouveau were a module, I would add a line like blacklist nouveau inside /etc/modprobe.d/blacklist.conf What should I do now? | You can also temporarily blacklist them on the grub command line (linux line) when you boot with the syntax module_to_blacklist.blacklist=yes OR modprobe.blacklist=module_to_blacklist You need to modify grub.cfg to make the changes permanent. Mind you, this solution will not work for a few modules | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158683/"
]
} |
276,417 | For example, I have a output: Hello, this is the output. (let's say that for example hello is colored red, and the is colored green, and output is colored purple). Now, let's say that this is the output of a command named x . If I use this command, the output becomes white: x | grep hello I've read that one could use grep --color=always . However, this changes the color to highlight the result I searched for instead of keeping the original line colors. I want to keep the original line colors. How do I use grep while keeping them? | You could do this, x | grep --color=never hello To quickly test it, you can do, ls -l /etc/ --color=always | grep --color=never . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165621/"
]
} |
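To see that --color=never passes pre-existing escape sequences through untouched, feed grep a line that already carries ANSI color codes (GNU grep assumed):

```shell
colored=$(printf '\033[31mhello\033[0m world')
# grep still matches the plain bytes "hello" and prints the line unchanged.
kept=$(printf '%s\n' "$colored" | grep --color=never hello)
```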
276,425 | I've installed Kali Linux (normally I use ArchLinux, but I need Kali for work) and every time I want to upgrade, a few packages can't upgrade and are kept back. To force them to update I have to manually select them and perform an installation. To update, I use the following command: apt-get update && apt-get upgrade Even from a clean installation the problem persists. Is there someone with the same problem? How can I solve it permanently? | The apt-get upgrade command you have used will only upgrade packages that need no new packages as dependencies. You can use apt-get dist-upgrade to include new packages in the set of candidates. Be aware though that using dist-upgrade will also delete packages that have been obsoleted by other, possibly newer, packages. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165770/"
]
} |
276,471 | Does something like this exist in Unix? $ echo "this should show in red" | red$ echo "this should show in green" | green$ echo "this should show in blue" | blue Here I don't mean for literal color code text to come up (to be pasted in a file, for example). I just mean for the text to actually show up in the terminal as that color. Is this possible? | You'd use tput for that: tput setaf 1echo This is redtput sgr0echo This is back to normal This can be used to build a pipe: red() { tput setaf 1; cat; tput sgr0; }echo This is red | red The basic colours are respectively black (0), red (1), green, yellow, blue, magenta, cyan and white (7). You'll find all the details in the terminfo(5) manpage . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86396/"
]
} |
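When no terminfo database is available, the same red filter can be written with raw escape sequences; this is a variant of the answer's tput version, relying on \033[31m being the code tput setaf 1 emits on typical terminals:

```shell
# Raw-escape variant of: red() { tput setaf 1; cat; tput sgr0; }
red() { printf '\033[31m'; cat; printf '\033[0m'; }

out=$(echo 'This is red' | red)
```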
276,474 | Is it possible to assign the setuid bit to sudo in order to execute any command as a normal user? Let's suppose that we have the user test and then: test@test$ apt-get update But I don't want to use sudo nor modify the sudoers file, is this possible using only the setuid bit? | Short answer: You can't execute arbitrary admin commands without either sudo or being root. Long answer: You must either have NOPASSWD in /etc/sudoers , or log in as root. See https://askubuntu.com/questions/147241/execute-sudo-without-password . Run visudo then add a line username ALL=(ALL) NOPASSWD: ALL As requested, if you want to run, as root, a specific binary file, you might use chown root:wheel /usr/binarychmod u+s /usr/binary however, if the program you want to run as root without sudo is a shell (or a python, awk, perl), you can't. Beware of a pitfall: on my main Ubuntu, /usr/bin/shutdown is a link to /sbin/systemctl . I would need to copy the latter to the former before applying the chmod/chown above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165797/"
]
} |
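What chmod u+s actually does to the mode bits can be inspected without touching a system binary: on a throwaway file the octal mode gains a leading 4 (GNU stat assumed; the file is not root-owned here, so the bit is inert but visible):

```shell
f=$(mktemp)
chmod 0755 "$f"
chmod u+s "$f"                 # set the setuid bit on top of 0755
mode=$(stat -c '%a' "$f")      # GNU stat prints the octal mode
```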
276,477 | I'm learning the Unix shell command language looking for a repository that contains the source code for the Almquist shell ("ash" / "dash") but I could not find it. Can you help me find the source? I'm looking for the source code to a minimal shell and it seems that the Almquist shell is one. | The two widely used variants of ash nowadays are dash , which has a repository on kernel.org , and the Busybox ash , which has its own repository . The Almquist Shell variants page lists many variants and provides links to their source code, including the original post on Usenet . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9115/"
]
} |
276,484 | I have a shell script incrementing a variable like in the example below with set -e: $ var=0; echo $?0$ ((var++)); echo $?1$ ((var++)); echo $?0$ ((var++)); echo $?0$ echo $var; echo $?30$ bash --versionGNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)Copyright (C) 2011 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law. $ The script exits unexpectedly. The thing is that the exact same script doesn't exit the same way when running it locally on a MacBook. The bash shell on the MacBook behaves exactly the same way when running the example above. Does anyone have any clue what is going on here? | An arithmetic command is successful if the value of the arithmetic expression is nonzero. If the value of the expression is 0, the command fails with status 1. This allows arithmetic commands to be used in tests, as the boolean operators in shell arithmetic expressions return 0 for true and 1 for false (like in C). For example if ((x==3)); then … works because ((x==3)) returns 0 when $x is equal to 3 and 1 otherwise. The postfix increment operator returns the old value of the variable. So ((var++)) returns an error status if var was previously zero. set -e tells the shell to exit on the first command that fails. No surprises there. To avoid unwanted errors resulting from arithmetic expressions that may legitimately have the value 0, don't use arithmetic commands, use an ordinary assignment with an arithmetic expression. var=$((var+1)) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41793/"
]
} |
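Both the failure mode and the recommended fix can be demonstrated in throwaway bash -c shells; the first dies on ((var++)) because the expression evaluates to 0, the second survives because an assignment's exit status does not depend on the arithmetic value:

```shell
# Unsafe under set -e: ((var++)) returns status 1 when var was 0.
unsafe_rc=0
bash -c 'set -e; var=0; ((var++)); echo survived' 2>/dev/null || unsafe_rc=$?

# Safe: plain assignment with an arithmetic expansion.
safe_out=$(bash -c 'set -e; var=0; var=$((var+1)); echo "$var"')
```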
276,492 | I have n files with one word per line file 1 file 2 file 3 ...1_a 2_a 3_a1_b 2_b 3_b1_c 3_c I want to write a bash script that takes all these files and generates all the possible combinations of n words (one from each file). In my example I want this result: 1_a 2_a 3_a1_a 2_a 3_b1_a 2_a 3_c1_a 2_b 3_a1_a 2_b 3_b1_a 2_b 3_c1_b 2_a 3_a1_b 2_a 3_b1_b 2_a 3_c1_b 2_b 3_a1_b 2_b 3_b1_b 2_b 3_c1_c 2_a 3_a1_c 2_a 3_b1_c 2_a 3_c1_c 2_b 3_a1_c 2_b 3_b1_c 2_b 3_c I have tried to do this with paste and awk but I have failed. How can I do this? | You can use a recursive function that calls itself while there're files to process: #!/bin/bashprocess () { local prefix=$1 local file=$2 shift 2 while read line ; do if (($#)) ; then # There are still unprocessed files. process "$prefix $line" "$@" else # Reading the last file. printf '%s\n' "$prefix $line" fi done < "$file"}process '' "$@" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163621/"
]
} |
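The recursive function from the answer can be exercised on a throwaway two-file case (the test driver below is mine; with two files of two words each it should emit 2 x 2 = 4 combinations):

```shell
process() {
  local prefix=$1 file=$2
  shift 2
  while read -r line; do
    if [ "$#" -gt 0 ]; then
      process "$prefix $line" "$@"   # recurse into the remaining files
    else
      printf '%s\n' "$prefix $line"  # last file: emit the combination
    fi
  done < "$file"
}

d=$(mktemp -d)
printf '1_a\n1_b\n' > "$d/file1"
printf '2_a\n2_b\n' > "$d/file2"
combos=$(process '' "$d/file1" "$d/file2")
count=$(printf '%s\n' "$combos" | wc -l)
```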
276,498 | I have several accounts set up in Thunderbird. If I reply to a message, I would like Thunderbird to always use the same e-mail address as the one the original mail was addressed to. However, it doesn't seem to work that way. If I move an e-mail from a folder of account A to a folder of account B, and I then hit Reply , it will default to account B, even though the mail was originally addressed to account A. Any way to make "whoever is addressed" always supersede the folder-based account selection? I've seen various topics around the same issue, but none of this fits my requirement of wanting to respond from a different account's folder with the original e-mail address... | If you reply to an email the "original" email address will only be used by Thunderbird if there is an identity that matches that email address for that account . If such an identity is not available the default identity for that account is taken, if you hit Reply . What you need to do is add extra identities to each of the accounts for the alternate email addresses. However if you move a mail to [email protected] to the account for [email protected] and in the settings for that latter account add an identity with at "Your Name" specified with your name, and "Email Address" set to [email protected] , then Thunderbird will put [email protected] as the return address if you press Reply (even if this is not the default). You can add additional identities to an account by right clicking the account in Thunderbird's left pane, then clicking Settings , then on the Account Settings form that pops up click Manage Identities... (just above Ok in the lower right corner. Click Add , and fill out the first two fields as described, and check that the appropriate SMTP server is selected (if you sent as [email protected] you might want to use the smtp server associated with that account and not the one associated with the account you add this extra identity to). 
You need to do this on each account for all your email addresses (assuming you don't want to worry about where you move which mail) that you want to have their incoming email address as the reply-to address. If you have 4 email accounts set up in Thunderbird, this requires you to add the 3 "other" addresses as extra identities to each of the accounts, and for each select the appropriate SMTP server. The above will result in no difference for the recipient (in headers, route etc) of a reply whether you first move an email from one account to another, or directly reply from the account where you received the email. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16766/"
]
} |
276,540 | I need to split a .txt file into smaller ones containing 100 lines each, including the header. I don't know if this is relevant, but the original file is delimited like this:
COLUMN1 | COLUMN2 | COLUMN3
1 | 2 | 3
4 | 5 | 6
7 | 8 | 9
I need every file generated from this split to have the header line. Also, they need to be generated in/moved to another directory and follow a name pattern, like file_01.txt , file_02.txt , etc. | With gnu split you could save the header in a variable then split starting from the 2nd line, using the --filter option to write the header first and then the 99 lines for each piece, and also specify the output directory (e.g. path to/output dir/ ):
header=$(head -n 1 infile.txt)
export header
tail -n +2 infile.txt | split -l 99 -d --additional-suffix=.txt \
--filter='{ printf %s\\n "$header"; cat; } >path\ to/output\ dir/$FILE' - file_
This will create 100-line pieces as
path to/output dir/file_01.txt
path to/output dir/file_02.txt
path to/output dir/file_03.txt
.............................. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140841/"
]
} |
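The header-preserving split in the answer above relies on GNU split's --filter. A portable sketch of the same idea can be done with awk alone; the sample data, the piece size of 99 data lines, and the file_NN naming are illustrative assumptions:

```shell
set -eu
dir=$(mktemp -d)
infile="$dir/infile.txt"

# Build a sample input: one header line plus 250 data lines.
echo "COLUMN1 | COLUMN2 | COLUMN3" > "$infile"
i=1
while [ "$i" -le 250 ]; do echo "$i | $i | $i" >> "$infile"; i=$((i + 1)); done

mkdir "$dir/out"
# Remember the header, start a new piece every 99 data lines, and
# write the header at the top of each piece before its data.
awk -v dir="$dir/out" '
    NR == 1 { header = $0; next }
    (NR - 2) % 99 == 0 { n++; f = sprintf("%s/file_%02d.txt", dir, n); print header > f }
    { print > f }
' "$infile"

pieces=$(ls "$dir/out" | wc -l)
first_line=$(head -n 1 "$dir/out/file_01.txt")
last_lines=$(wc -l < "$dir/out/file_03.txt")
echo "pieces=$pieces last_piece_lines=$last_lines"
```

With 250 data lines this produces three pieces (99 + 99 + 52 data lines), each starting with the header.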
276,546 | I have an entry that may or may not appear in /etc/rsyslog.conf : # Added for Kiwi*.err;*.emerg;*.alert;*.warning;*.debug;*.notice;*.crit;*.info @10.19.24.50 Since some of these servers would have had this manually keyed in, I can't assume the whitespace is uniform (and it does vary on at least two servers I've found). I'm trying to write a puppet module for removing these lines. Relevant segment of that module: file_line {'remove_kiwi_comment': ensure => absent, path => $confFile, match => "^#.*Kiwi$", line => "# Added for Kiwi", match_for_absence => true, } file_line {'remove_kiwi_forward2': ensure => absent, match_for_absence => true, path => $confFile, match => '^.*50$', line => '*.err;*.emerg;*.alert;*.warning;*.debug;*.notice;*.crit;*.info @10.19.24.50', notify => Service[$serviceName], } The above succeeds in removing the comment from one of the DEV servers but the actual redirect doesn't appear to be removed. I've played around with the regexp in match=> to no avail and I'm not sure what else I can try to get it to delete the line. If I add enough spaces, it will remove it, but I don't want my module to assume any amount of whitespace, just that there is some amount of whitespace present in order to get rsyslog to load. Stdlib module version is 4.11, master is 3.3, client node for this server is 3.6 | With gnu split you could save the header in a variable then split starting from the 2nd line, using the --filter option to write the header first and then the 99 lines for each piece and also specify the output directory (e.g. path to/output dir/ ): header=$(head -n 1 infile.txt)export headertail -n +2 infile.txt | split -l 99 -d --additional-suffix=.txt \--filter='{ printf %s\\n "$header"; cat; } >path\ to/output\ dir/$FILE' - file_ this will create 100-lines pieces as path to/output dir/file_01.txtpath to/output dir/file_02.txtpath to/output dir/file_03.txt.............................. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34520/"
]
} |
276,581 | For a command (builtin or external), what is the difference between running it directly in a bash shell process and running it with bash -c from that bash shell? What are their advantages and disadvantages compared to each other? For example, in a bash shell, run `date` directly, and run `bash -c date`. Also consider a builtin command instead of an external one. | The -c option allows programs to run commands. It’s a lot easier to fork and do execl("/bin/sh", "sh", "-c", "date | od -cb && ps > ps.out", NULL); than it is to fork, create a pipe, fork again, call execl in each child, call wait , check the exit status, fork again, call close(1) , open the file, ensure that it is open on file descriptor 1, and do another execl . I believe that this was the reason why the option was created in the first place. The system() library function runs a command by the above method. It provides a way to take an arbitrarily complex command and make it look like a simple command. This is useful with programs that run a user-specified command, such as find … -exec or xargs . But you already knew that; it was part of the answer to your question, How to specify a compound command as an argument to another command? It can come in handy if you’re running an interactive shell other than bash. Conversely, if you are running bash, you can use this syntax
$ ash -c " command "
     ︙
$ csh -c " command "
     ︙
$ dash -c " command "
     ︙
$ zsh -c " command "
     ︙
to run one command in another shell, as all of those shells also recognize the -c option. Of course you could achieve the same result with
$ ash
ash$ command
     ︙
ash$ exit
$ csh
csh$ command
     ︙
csh$ exit
$ dash
dash$ command
     ︙
dash$ exit
$ zsh
zsh$ command
     ︙
zsh$ exit
I used ash$ , etc., to illustrate the prompts from the different shells; you probably wouldn’t actually get those.
It can come in handy if you want to run one command in a “fresh” bash shell; for example,
$ ls -lA
total 0
-rw-r--r-- 1 gman gman 0 Apr 14 20:16 .file1
-rw-r--r-- 1 gman gman 0 Apr 14 20:16 file2
$ echo *
file2
$ shopt -s dotglob
$ echo *
.file1 file2
$ bash -c "echo *"
file2
or
$ type shift
shift is a shell builtin
$ alias shift=date
$ type shift
shift is aliased to ‘date’
$ bash -c "type shift"
shift is a shell builtin
The above is a misleading over-simplification. When bash is run with -c , it is considered a non-interactive shell, and it does not read ~/.bashrc , unless -i is specified. So,
$ type cp
cp is aliased to ‘cp -i’ # Defined in ~/.bashrc
$ cp .file1 file2
cp: overwrite ‘file2’? n
$ bash -c "cp .file1 file2" # Existing file is overwritten without confirmation!
$ bash -c -i "cp .file1 file2"
cp: overwrite ‘file2’? n
You could use -ci , -i -c or -ic instead of -c -i . This probably applies to some extent to the other shells mentioned in paragraph 3, so the long form (i.e., the second form, which is actually exactly the same amount of typing) might be safer, especially if you have initialization/configuration files set up for those shells. As Wildcard explained , since you’re running a new process tree (a new shell process and, potentially, its child process(es)), changes to the environment made in the subshell cannot affect the parent shell (current directory, values of environment variables, function definitions, etc.) Therefore, it’s hard to imagine a shell builtin command that would be useful when run by sh -c . fg , bg , and jobs cannot affect or access background jobs started by the parent shell, nor can wait wait for them. sh -c "exec some_program " is essentially equivalent to just running some_program the normal way, directly from the interactive shell. sh -c exit is a big waste of time. ulimit and umask could change the system settings for the child process, and then exit without exercising them.
Just about the only builtin command that would be functional in a sh -c context is kill . Of course, the commands that only produce output ( echo , printf , pwd and type ) are unaffected, and, if you write a file, that will persist. Of course you can use a builtin in conjunction with an external command; e.g.,
sh -c "cd some_directory ; some_program "
but you can achieve essentially the same effect with a normal subshell:
(cd some_directory ; some_program )
which is more efficient. The same (both parts) can be said for something like sh -c "umask 77; some_program " or ulimit (or shopt ). And since you can put an arbitrarily complex command after -c —up to the complexity of a full-blown shell script —you might have occasion to use any of the repertoire of builtins; e.g., source , read , export , times , set and unset , etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
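A small check of the isolation described in the answer above (the marker variable, directories, and positional parameters are illustrative): environment changes made inside bash -c happen in a child process and are gone when it exits, while extra arguments after the command string become $0, $1, ... inside it.

```shell
set -eu
export MARKER=parent
bash -c 'MARKER=child; cd /'   # the child mutates only its own copies
after=$MARKER
before_dir=$PWD
bash -c 'cd /tmp'              # the parent's working directory is untouched
after_dir=$PWD
# Arguments after the command string are $0, $1, ... in the child.
args=$(bash -c 'echo "$0 $1"' first second)
echo "MARKER=$after args=$args"
```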
276,614 | I am trying to write a bash function that behaves similarly to the where builtin in tcsh . In tcsh , where lists all the builtins, aliases, and the absolute paths to executables on the PATH with a given name, even if they are shadowed, e.g.
tcsh> where tcsh
/usr/bin/tcsh
/bin/tcsh
As part of this I want to loop over everything in the $PATH and see if an executable file with the appropriate name exists. The following bash snippet is intended to loop over a colon-delimited list of paths and print each component followed by a newline; however, it just seems to print the entire contents of $PATH all on one line:
#!/bin/bash
while IFS=':' read -r line; do
    printf "%s\n" "$line"
done <<< "$PATH"
As it stands now, bash where and ./where just print
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
So, how do I set up my while loop so that the value of the loop variable is each segment of the colon-separated list of paths in turn? | read uses IFS to separate the words in the line it reads; it doesn't tell read to read until the first occurrence of any of the characters in it.
IFS=: read -r a b
would read one line, put the part before the first : in $a , and the rest in $b .
IFS=: read -r a
would put the whole line (the rest) in $a (except if that line contains only one : and it's the last character on the line). If you wanted to read until the first : , you'd use read -d: instead ( ksh93 , zsh or bash only).
printf %s "$PATH" | while IFS= read -rd: dir || [ -n "$dir" ]; do
    ...
done
(we're not using <<< as that adds an extra newline character). Or you could use standard word splitting:
IFS=:; set -o noglob
for dir in $PATH""; do
    ...
done
Now beware of a few caveats:
An empty $PATH component means the current directory.
An empty $PATH means the current directory (that is, $PATH contains one component which is the current directory, so the while read -d: loop would be wrong in that case).
//file is not necessarily the same as /file on some systems, so if $PATH contains / , you need to be careful with things like $dir/$file . An unset $PATH means a default search path is to be used; it's not the same as a set but empty $PATH . Now, if it's only the equivalent of tcsh / zsh 's where command, you could use bash 's type -a . More reading:
What's a safe and portable way to split a string in shell programming?
Understanding "IFS= read -r line"
Why not use "which"? What to use then? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86874/"
]
} |
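The read -d: loop from the answer above can be wrapped in a function; the sample path list is illustrative, and read -d is bash/ksh93/zsh only. Note the `|| [ -n "$dir" ]` guard, which keeps the final component that has no trailing delimiter:

```shell
# Print each component of a colon-separated list on its own line.
split_path() {
    printf %s "$1" | while IFS= read -r -d ':' dir || [ -n "$dir" ]; do
        printf '%s\n' "$dir"
    done
}
out=$(split_path '/usr/local/bin:/usr/bin:/bin')
count=$(split_path '/usr/local/bin:/usr/bin:/bin' | wc -l)
echo "$out"
```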
276,624 | A huge application needs, at one specific time, to perform a small number of writes to a file which requires root permissions. It is not really a file but a hardware interface which is exposed to Linux as a file. To avoid giving root privileges to the whole application, I wrote a bash script which does the critical tasks. For example, the following script will enable port 17 of the hardware interface as output:
echo "17" > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio17/direction
However, as suid is disabled for bash scripts on my system, I wonder what is the best way to achieve this.
Use some workaround presented here
Call the script with sudo from the main application, and edit the sudoers list accordingly, to avoid requiring a password when calling the script. I'm a little bit uncomfortable giving sudo privileges to echo .
Just write a C program, with fprintf , and set it to suid root. Hardcode the strings and filenames and make sure only root can edit it. Or read the strings from a text file, similarly making sure that no one can edit the file.
Some other solution which didn't occur to me and is safer or simpler than the ones presented above? | You don't need to give sudo access to echo . In fact, that's pointless because, e.g. with sudo echo foo > bar , the redirection is done as the original user, not as root.
Call the small script with sudo , allowing NOPASSWD: access to ONLY that script (and any other similar scripts) by the user(s) who need access to it. This is always the best/safest way to use sudo .
Isolate the small number of commands that need root privileges into their own separate script(s) and allow the un-trusted or partially-trusted user to only run that script as root.
The small sudo -able script(s) should either not take args (or input) from the user (i.e. any other programs it calls should have hard-coded options and args) or it should very carefully validate any arguments/input that it has to accept from the user.
Be paranoid in the validation - rather than look for 'known bad' things to exclude, allow only 'known good' things and abort on any mismatch or error or anything even remotely suspicious. The validation should occur as early in the script as possible (preferably before it does anything else as root). I really should have mentioned this when I first wrote this answer, but if your script is a shell script it MUST properly quote all variables. Be especially careful to quote variables containing input supplied by the user in any way, but don't assume some variables are safe, QUOTE THEM ALL . That includes environment variables potentially controlled by the user (e.g. "$PATH" , "$HOME" , "$USER" etc. And definitely including "$QUERY_STRING" and "$HTTP_USER_AGENT" etc. in a CGI script). In fact, just quote them all. If you have to construct a command line with multiple arguments, use an array to build the args list and quote that - "${myarray[@]}" . Have I said "quote them all" often enough yet? Remember it. Do it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46448/"
]
} |
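A sketch of the "allow only known good" validation recommended above, for a helper like the GPIO script in the question. The function name, the accepted pin range, and the messages are made up for illustration; a real helper would write to /sys/class/gpio as root after the check passes:

```shell
set -eu
# Accept only a bare pin number 0-19; reject anything else before
# doing any privileged work. Note every expansion is quoted.
gpio_export() {
    case "$1" in
        [0-9]|1[0-9])
            printf 'would export pin %s\n' "$1" ;;
        *)
            printf 'rejected\n'
            return 1 ;;
    esac
}
ok=$(gpio_export 17)
bad=$(gpio_export '17; rm -rf /' || true)   # injection attempt is refused
echo "$ok / $bad"
```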
276,643 | I find the man command disappears on my RHEL7:
# man ls
bash: man: command not found...
# which man
/usr/bin/which: no man in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/go/bin:/root/gocode/bin:/root/bin:/opt/linuxki)
But man and man-pages are all on my box:
# yum install man
Loaded plugins: auto-update-debuginfo, langpacks, product-id, search-disabled-repos, subscription-manager
Package man-db-2.6.3-9.el7.x86_64 already installed and latest version
Nothing to do
# yum install man-pages
Loaded plugins: auto-update-debuginfo, langpacks, product-id, search-disabled-repos, subscription-manager
Package man-pages-3.53-5.el7.noarch already installed and latest version
Nothing to do
Where did man go? Update 1 : I try to reinstall man , but it prompts the following errors:
......
Running transaction
  Installing : man-db-2.6.3-9.el7.x86_64 1/1
Error unpacking rpm package man-db-2.6.3-9.el7.x86_64
error: unpacking of archive failed on file /usr/bin/man: cpio: rename
  Verifying : man-db-2.6.3-9.el7.x86_64 1/1
Failed: man-db.x86_64 0:2.6.3-9.el7
Update 2
# ls -lt /usr/bin/man
total 4
drwxr-xr-x. 2 nan nan 81 Mar 24 22:30 man1
drwxr-xr-x. 2 nan nan 4096 Mar 24 22:30 man7
# stat /usr/bin/man
  File: ‘/usr/bin/man’
  Size: 28 Blocks: 0 IO Block: 4096 directory
Device: fd00h/64768d Inode: 67811254 Links: 4
Access: (0755/drwxr-xr-x) Uid: ( 1000/ nan) Gid: ( 1000/ nan)
Context: unconfined_u:object_r:bin_t:s0
Access: 2016-04-15 17:47:56.613595324 +0800
Modify: 2016-03-24 22:30:30.000000000 +0800
Change: 2016-04-08 11:08:45.605815500 +0800
 Birth: - | I doubt we'll ever be able to tell you where it went, but you should just be able to reinstall it using yum .
yum reinstall man
yum doesn't check to see if files exist when you run yum install , it just checks a database of which packages have been installed. If someone deletes all the files outside of the package manager, it won't know (you can get it to check, but it doesn't by default).
Using yum reinstall tells it to do the install even though it thinks the package is already there. Depending on what has been deleted or removed, you may need to yum reinstall ... other things like man-pages . Updated in light of new information: For some reason, your /usr/bin/man is a directory, rather than a single file, and judging by the content it looks like someone has done something weird like mv /usr/share/man /usr/bin or something odd. You're not going to be able to simply undo this - you need to investigate, see what's been broken or moved, and correct it. You might just be able to remove /usr/bin/man and its contents and then re-install man and man-pages using yum but without more investigation it's not going to be clear. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85056/"
]
} |
276,651 | I have a shell ( php ) script that gets in touch with the target file this way: it inspects whether the file and directory are writable with php 's is_writable() (I don't think that this is the problem), then does an in-place file edit with a sed command:
grep -q "$search" "$passwd_file" && { sed -i "s|$search|$replace|" "$passwd_file"; printf "Password changed!\n"; } || printf "Password not changed!\n"
As a result I get (everything else correct but) a file which was myuser:www-data changed to myuser:myuser . Does sed change file group ownership as it seems, and how do I avoid it, if possible? | There is a little problem with sed 's inplace editing mode -i . sed creates a temporary file in the same directory called sedy08qMA , where y08qMA is a randomly generated string. That file is filled with the modified contents of the original file. After the operation, sed removes the original file and renames the temporary file with the original filename. So it's not a true inplace edit . It creates a new file with the permissions of the calling user and a new inode number. That behavior is mostly not bad, but for instance, hard links get broken. However, if you want true inplace editing, you should use ed . It reads commands from stdin and edits the file directly, without a temporary file (it's done over ed 's memory buffer). A common practice is to use printf to generate the command list:
printf "%s\n" '1,$s/search/replace/g' wq | ed -s file
The printf command produces output as follows:
1,$s/search/replace/g
wq
Those two lines are ed commands. The first one searches for the string search and replaces it with replace . The second one writes ( w ) the changes to the file and quits ( q ). -s suppresses diagnostic output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68350/"
]
} |
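The "not a true inplace edit" claim above is easy to observe: after sed -i the file's inode number changes, because a temporary file was renamed over the original. This sketch assumes GNU sed and GNU coreutils' stat -c; the file name and contents are throwaway:

```shell
set -eu
dir=$(mktemp -d)
printf 'search me\n' > "$dir/file"
before=$(stat -c %i "$dir/file")    # inode of the original file
sed -i 's/search/replace/' "$dir/file"
after=$(stat -c %i "$dir/file")     # inode of the renamed temp file
content=$(cat "$dir/file")
if [ "$before" = "$after" ]; then same=yes; else same=no; fi
echo "inode unchanged: $same content: $content"
rm -r "$dir"
```

This is exactly why hard links and group ownership by another group do not survive sed -i.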
276,659 | I'd like to parse a file containing 5 digit numbers separated by comma or dash, lines like:
12345,23456,34567-45678,12345-23456,34567
My goal is to find lines which have incorrect formatting, e.g. lines which contain numbers which are not composed of 5 digits or are separated by other characters than comma or dash. I tried to egrep the file with:
cat file.txt | egrep -v [-,]*[0-9]{5}[,-]*
but if I have a 6 digit number, it is matched and the line is not displayed; and if I have a 4 digit number, it is not matched but other numbers from that same line are matched and the line is not displayed. To specify the lines' content:
a number must be of 5 digits
ranges are defined with dash, like 12345-12389
a line can contain anything from a single number to several numbers and ranges in any order
Any suggestions please? | grep -vxE '([0-9]{5}[,-])*[0-9]{5}'
would report the incorrect lines. Or if you also want to forbid 12345-12345-12345 :
num='[0-9]{5}'
num_or_range="$num(-$num)?"
grep -vxE "($num_or_range,)*$num_or_range" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137578/"
]
} |
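The stricter pattern from the answer above can be checked against a few sample lines; the sample file contents are made up here, with the last three lines deliberately malformed:

```shell
set -eu
dir=$(mktemp -d)
cat > "$dir/file.txt" <<'EOF'
12345,23456,34567-45678
12345-23456,34567
123456
1234,12345
12345-6789
EOF
num='[0-9]{5}'
num_or_range="$num(-$num)?"
# -x anchors the whole line, -v keeps only lines that do NOT match.
bad=$(grep -vxE "($num_or_range,)*$num_or_range" "$dir/file.txt")
echo "$bad"
rm -r "$dir"
```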
276,666 | Is it possible to list available packages in a specific repository, one of the repositories configured in apt , on Debian Jessie using the command line? For example, deb http://ftp.de.debian.org/debian jessie main non-free from /etc/apt/sources.list . | Given that the repository you're interested in is in your apt sources, you can find the information on the packages available there in the files apt downloads; for the line deb http://ftp.de.debian.org/debian jessie main non-free these would be respectively /var/lib/apt/lists/ftp.de.debian.org_debian_dists_jessie_main_binary-amd64_Packages /var/lib/apt/lists/ftp.de.debian.org_debian_dists_jessie_non-free_binary-amd64_Packages (assuming you're on amd64 ). You can ensure those files are up-to-date first by running apt-get update | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
276,731 | This is such a simple question that I'm sure it's been asked somewhere, but I can't find it. My shell, which I have not intentionally set up to do so, seems to eat any words involving question marks:
$ bash --version
GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin13.4.0)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ echo a ? = =?
a =
$ not-a-command? echo a
a
$ (?) echo a
a
In case it matters, note that any word containing a question mark seems to disappear completely silently—to the extent that the shell never even notices that the invocation starts with a word that doesn't specify a valid executable—even if the question mark is not at the beginning. | You probably have the nullglob shell option enabled; this causes any word containing a globbing character ( * or ? ) to be removed if the glob doesn't match anything. Thus, if you're in a folder which doesn't contain any file, folder etc. with a single-character name, ? won't expand to anything and will instead be removed; likewise, not-a-command? is unlikely to match anything and will instead be removed. To check whether this is the case, run
shopt nullglob
To de-activate the option, run
shopt -u nullglob | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138566/"
]
} |
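The word-dropping effect is easy to reproduce in a throwaway directory where nothing can match the glob (the directory and the surrounding words are arbitrary):

```shell
set -eu
dir=$(mktemp -d)
cd "$dir"
# In an empty directory, ? matches nothing: with nullglob the word is
# removed entirely; without it, the literal ? is kept.
with_nullglob=$(bash -c 'shopt -s nullglob; echo start ? end')
without_nullglob=$(bash -c 'shopt -u nullglob; echo start ? end')
echo "[$with_nullglob] [$without_nullglob]"
```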
276,741 | I have a text file of this type, and I would look for any lines containing the string Validating Classification and then obtain uniquely the reported errors. I do not know the types of possible errors. Input file:
201600415 10:40 Error Validating Classification: error1
201600415 10:41 Error Validating Classification: error1
201600415 10:42 Error Validating Classification: error2
201600415 10:43 Error Validating Classification: error3
201600415 10:44 Error Validating Classification: error3
Output file:
201600415 10:40 Error Validating Classification: error1
201600415 10:42 Error Validating Classification: error2
201600415 10:43 Error Validating Classification: error3
Can I achieve this using grep, pipes and other commands? | You will need to discard the timestamps, but 'grep' and 'sort --unique' together can do it for you.
grep --only-matching 'Validating Classification.*' | sort --unique
So grep -o will only show the parts of the line that match your regex (which is why you need to include the .* to include everything after the "Validating Classification" match). Then once you have just the list of errors, you can use sort -u to get just the unique list of errors. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165975/"
]
} |
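Running the answer's pipeline over the sample log from the question (embedded here as a string) yields one line per distinct error:

```shell
set -eu
log='201600415 10:40 Error Validating Classification: error1
201600415 10:41 Error Validating Classification: error1
201600415 10:42 Error Validating Classification: error2
201600415 10:43 Error Validating Classification: error3
201600415 10:44 Error Validating Classification: error3'
# Keep only the matched part of each line, then deduplicate.
unique=$(printf '%s\n' "$log" | grep -o 'Validating Classification.*' | sort -u)
echo "$unique"
```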
276,751 | In a shebang, is a space or more allowed between #! and the interpreter? For example, #! /bin/bash . It seems work, but some said that it is incorrect. | Yes, this is allowed. The Wikipedia article about the shebang includes a 1980 email from Dennis Ritchie, when he was introducing kernel support for the shebang (as part of a wider package called interpreter directives ) into Version 8 Unix (emphasis mine): The system has been changed so that if a file being executed begins with the magic characters #! , the rest of the line is understood to be the name of an interpreter for the executed file. […] To take advantage of this wonderful opportunity, put #! /bin/sh at the left margin of the first line of your shell scripts. Blanks after ! are OK. So spaces after the shebang have been around for quite a while, and indeed, Dennis Ritchie’s example is using them. Note that early versions of Unix had a limit of 16 characters in this interpreter line, so you couldn’t have an arbitrary amount of whitespace there. This restriction no longer applies in modern kernels. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
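A quick check of the behavior described above on a Linux box (the script path and message are arbitrary): a shebang with blanks between #! and the interpreter still executes.

```shell
set -eu
dir=$(mktemp -d)
# Note the two spaces between #! and /bin/sh in the shebang line.
cat > "$dir/hello.sh" <<'EOF'
#!  /bin/sh
echo hello from a blank-padded shebang
EOF
chmod +x "$dir/hello.sh"
out=$("$dir/hello.sh")
echo "$out"
rm -r "$dir"
```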
276,797 | I was ssh ing into a certain computing system that I use. I was trying to follow some Linux instructions that involved sudo , and tried a few times to enter the password unsuccessfully, before realizing that I was getting it incorrect because I was typing in my ssh terminal. I then received a very accusatory email about how my trying to sudo constituted a threat to the system. The tone was sustained even after I explained that it was an accident. My concrete question is: what is the threat model whereby trying a couple times to sudo without permission is considered a serious violation of system security? Note: I understand in principle why the rule is there. My guess is that you don't want people writing automated scripts trying to crack the password - or I guess try to input a password one obtained via social engineering. But a couple unsuccessful guesses for some silly command...what's the threat model? | First off, sudo , all by itself, doesn't send any emails or create warning messages, other than logging your unsuccessful attempt to the log. People who observe these logs and correlate events (most probably using a scripted log watcher) see that some user id, which happened to be yours this time, is trying to gain root access where he/she is not permitted. As a result, the automated process fires an email to the offender. Even though you think you might be responding to a human being, 9 out of 10 times your response goes into a mailbox which is either not observed or checked very seldom. If you think you have received a response to your explanation from an actual human, who keeps accusing you after you made clear that this was a mistake and you were not on the right server, he is either too bored and looking for something to do or has strict orders to scare people off. Other than brute force cracking attempts, there is no other threat vector in the wild at this time attacking sudo protected servers that I know of.
Also, consider asking questions of this nature in the Information Security section of Stack Exchange. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276797",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165879/"
]
} |
276,834 | I can do this by calling the external utility sed (for a known non-empty $myvar ) like so:
if [ "$(printf %s "$myvar" | sed -n '$=')" -eq 1 ]; then
    echo "Your variable has only one line, proceeding"
else
    echo "Error condition, variable must have only one line"
fi
Is there a way to do this using only bash builtins? Even better, is there a way to do this in a POSIX-specified manner, without calling external utilities? (This question is one of curiosity and finding better ways to do things; the above code does function. I'm wondering if there is a cleaner/faster way.) | The POSIX way:
NL='
'
case $myvar in
  *"$NL"*) echo more than one line ;;
  *) echo one line ;;
esac
This also works in pre-POSIX Bourne-like shells, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
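The case pattern above can be wrapped in a small POSIX function (the names here are illustrative); the only trick is getting a literal newline into NL:

```shell
NL='
'
# Succeeds when the string contains no newline at all.
is_single_line() {
    case "$1" in
        *"$NL"*) return 1 ;;
        *)       return 0 ;;
    esac
}
one='just one line'
two='first line
second line'
if is_single_line "$one"; then r1=single; else r1=multi; fi
if is_single_line "$two"; then r2=single; else r2=multi; fi
echo "$r1 $r2"
```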
276,851 | I would like to print the character at a given position using only the command line. E.g.:
<command> 5
would output a if the 5th char of that file was a . Since I am dealing with big files, ideally this would be able to handle big files. | With sed :
$ echo 12345 | sed 's/.\{4\}\(.\).*/\1/;q'
5
$ echo 1234ắ | sed 's/.\{4\}\(.\).*/\1/;q'
ắ
Note that sed will fail to produce output if your input contains invalid multi-byte characters in the current locale. You can use LC_ALL=C if you work with single byte characters only. With an ASCII file, you can also use dd :
$ echo 12345 | dd bs=1 skip=4 count=1 2>/dev/null
5 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/276851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165589/"
]
} |
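The dd variant can be wrapped as a function that seeks straight to the byte, which is what makes it cheap on big files (1-based position, single-byte characters assumed; the sample file is made up):

```shell
set -eu
dir=$(mktemp -d)
printf 'abcde' > "$dir/big.txt"
# Print the single byte at (1-based) position $2 of file $1.
char_at() { dd bs=1 skip=$(($2 - 1)) count=1 2>/dev/null < "$1"; }
fifth=$(char_at "$dir/big.txt" 5)
first=$(char_at "$dir/big.txt" 1)
echo "$first ... $fifth"
rm -r "$dir"
```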
276,951 | In Gnome 3.18, it was possible to change the titlebar height of all windows by changing the css in ~/.config/gtk-3.0/gtk.css as per Reduce title bar height in gnome 3 / gtk+ 3 . .header-bar.default-decoration { padding-top: 0px; padding-bottom: 0px; }.header-bar.default-decoration .button.titlebutton { padding-top: 0px; padding-bottom: 0px;}/* No line below the title bar */.ssd .titlebar { border-width: 0; box-shadow: none;} In Gnome 3.20, this appears to no longer apply to windows with a headerbar/CSD (gnome-specific buttons in the title bar), such as Nautilus (Files), Settings, Photos, Contacts, etc. The tweak still reduces the titlebar height for other applications, such as gnome-terminal and gVim. How do I reduce the height of the titlebar in gnome-programs such as Nautilus in Gnome 3.20? Update I have also tried what is suggested in this reddit thread . I tried both window.ssd and .ssd only, no dice. This works, see the answer I posted for more details window.ssd headerbar.titlebar { padding-top: 1px; padding-bottom: 1px; min-height: 0;}window.ssd headerbar.titlebar button.titlebutton { padding-top: 1px; padding-bottom: 1px; min-height: 0;} and /* shrink headebars */headerbar { min-height: 38px; padding-left: 2px; /* same as childrens vertical margins for nicer proportions */ padding-right: 2px;}headerbar entry,headerbar spinbutton,headerbar button,headerbar separator { margin-top: 2px; /* same as headerbar side padding for nicer proportions */ margin-bottom: 2px;}/* shrink ssd titlebars */.default-decoration { min-height: 0; /* let the entry and button drive the titlebar size */ padding: 2px}.default-decoration .titlebutton { min-height: 26px; /* tweak these two props to reduce button size */ min-width: 26px;} | Note: If you are on PopOS, there is an option to "Remove Window Titles" in the top bar menu that also controls tiling. This is what I use currently myself and it works great for only removing the superfluous non-CSD titlebars. 
Headerbar/CSD Actually, a section of the code that I found via reddit and posted above, namely headerbar entry,headerbar spinbutton,headerbar button,headerbar separator { margin-top: 2px; /* same as headerbar side padding for nicer proportions */ margin-bottom: 2px;} DOES modify the headerbars/CSDs. However the effect is not immediate. Even if you reload gnome, you might need to close all windows, wait a while, or log out and log back in again to see the effect. I am still not seeing any difference in the header bar when modifying the following. headerbar { min-height: 38px; padding-left: 2px; /* same as children's vertical margins for nicer proportions */ padding-right: 2px;} Standard titlebar The two sections for the normal window titlebars work as expected. .default-decoration { min-height: 0; /* let the entry and button drive the titlebar size */ padding: 2px}.default-decoration .titlebutton { min-height: 26px; /* tweak these two props to reduce button size */ min-width: 26px;} Titlebar border You can use the following to remove the titlebar border if you are running the default adwaita theme.From https://bbs.archlinux.org/viewtopic.php?id=211102 window.ssd headerbar.titlebar { border: none; background-image: linear-gradient(to bottom, shade(@theme_bg_color, 1.05), shade(@theme_bg_color, 0.99)); box-shadow: inset 0 1px shade(@theme_bg_color, 1.4);} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/276951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65850/"
]
} |
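The CSS fragments above live in ~/.config/gtk-3.0/gtk.css. A small, hedged helper that appends the ssd-titlebar rules from the answer; it writes to a scratch file here so it is safe to run as-is — point the variable at your real gtk.css to use it, then restart the application (or log out and back in) for the change to show:

```shell
# Scratch target so this demo cannot clobber a real config;
# for real use: GTK_CSS="$HOME/.config/gtk-3.0/gtk.css"
GTK_CSS=$(mktemp -d)/gtk.css

cat >> "$GTK_CSS" <<'EOF'
/* shrink ssd titlebars */
.default-decoration {
  min-height: 0;  /* let the entry and button drive the titlebar size */
  padding: 2px;
}
.default-decoration .titlebutton {
  min-height: 26px;  /* tweak these two props to reduce button size */
  min-width: 26px;
}
EOF

grep -c 'default-decoration' "$GTK_CSS"   # prints 2
```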
276,959 | I would like to be able to extract a tar file, such that all extracted files are placed under a certain prefix directory. Any attempt by the tar files to write to outside directories should cause the extraction to fail. As you might imagine, this is so that I can securely extract an untrusted tar file. How can I do this with GNU tar ? I came up with: tar --exclude='/*' --exclude='*/../*' --exclude='../*' -xvf untrusted_file.tar but I am not sure that this is paranoid enough. | You don't need the paranoia at all. GNU tar — and in fact any well-written tar program produced in the past 30 years or so — will refuse to extract files in the tarball that begin with a slash or that contain .. elements, by default. You have to go out of your way to force modern tar programs to extract such potentially-malicious tarballs: both GNU and BSD tar need the -P option to make them disable this protection. See the section Absolute File Names in the GNU tar manual. The -P flag isn't specified by POSIX,¹ though, so other tar programs may have different ways of coping with this. For example, the Schily Tools' star program uses -/ and -.. to disable these protections. The only thing you might consider adding to a naïve tar command is a -C flag to force it to extract things in a safe temporary directory, so you don't have to cd there first. Asides : Technically, tar isn't specified by POSIX any more at all. They tried to tell the Unix computing world that we should be using pax now instead of tar and cpio , but the computing world largely ignored them. It's relevant here to note that the POSIX specification for pax doesn't say how it should handle leading slashes or embedded .. elements. There's a nonstandard --insecure flag for BSD pax to suppress protections against embedded .. path elements, but there is apparently no default protection against leading slashes; the BSD pax man page indirectly recommends writing -s substitution rules to deal with the absolute path risk. 
That's the sort of thing that happens when a de facto standard remains in active use while the de jure standard is largely ignored. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/276959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55436/"
]
} |
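A runnable sketch of the recommendation above — extract into a dedicated directory with -C and rely on tar's default refusal of absolute paths and .. members (the tarball here is a stand-in built on the spot):

```shell
# Build a throwaway tarball to play the role of the untrusted input.
workdir=$(mktemp -d)
mkdir -p "$workdir/payload/sub"
echo 'hello' > "$workdir/payload/sub/file.txt"
tar -cf "$workdir/untrusted.tar" -C "$workdir" payload

# Extract into its own directory; modern GNU/BSD tar already strips
# leading slashes and rejects ".." members unless -P is given.
extract_dir=$(mktemp -d)
tar -xf "$workdir/untrusted.tar" -C "$extract_dir"

cat "$extract_dir/payload/sub/file.txt"   # prints: hello
```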
277,002 | Is there a way to run ls or find to get the list of files within a directory and then run stat to get all of the specific information (i.e. File Group, File Name, File Owner, File Size (displayed in K, M, etc.) & Permissions)? I was trying something along the lines of: find .content/media -type f | stat and ls -l .content/media | stat Answer: find ./content/"subdirectory name"/ -type f -exec stat -c '%n : %U : %A : %G : %s' {} + | Use stat on the -exec action of find : find .content/media/ -type f -exec stat -c '%n : %U : %G : %s' {} + Change the format sequences of stat to meet your need. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/277002",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165655/"
]
} |
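The accepted command, runnable against a scratch tree in place of ./content/media (the %A sequence adds the permission string the question asked about; the owner and group fields will vary with who runs it). Note that -c is GNU stat syntax; BSD/macOS stat uses -f instead:

```shell
# Scratch tree standing in for ./content/media
dir=$(mktemp -d)
mkdir -p "$dir/content/media"
printf 'abc' > "$dir/content/media/clip.mp4"

# One line per file: name : owner : permissions : group : size-in-bytes
find "$dir/content/media" -type f -exec stat -c '%n : %U : %A : %G : %s' {} +
```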
277,005 | I'm struggling to get my wireless card detected as wlan0 on a new Debian install (3.16.0-4-amd64). The wireless PCI device is visible as follows. $ lspci -knn | grep Net -A204:00.0 Network controller [0280]: Broadcom Corporation BCM43602 802.11ac Wireless LAN SoC [14e4:43ba] (rev 01) Subsystem: Apple Inc. Device [106b:0152]05:00.0 Multimedia controller [0480]: Broadcom Corporation 720p FaceTime HD Camera [14e4:1570]--0b:00.0 Ethernet controller [0200]: Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet PCIe [14e4:1682] Subsystem: Apple Inc. Device [106b:00f6] Kernel driver in use: tg3 With the help of this tip online , I placed a download of brcmfmac43602 in /lib/firmware/brcm . I've since restarted and attempted to add the module with modprobe brcmfmac . Still, I'm unable to see the network interface, as displayed below. $ ip link show1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:002: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 98:5a:eb:c6:cf:4d brd ff:ff:ff:ff:ff:ff Any pointers on what I may have overlooked would be greatly appreciated. | Use stat on the -exec action of find : find .content/media/ -type f -exec stat -c '%n : %U : %G : %s' {} + Change the format sequences of stat to meet your need. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/277005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166151/"
]
} |
277,089 | I need to grep all strings that start with "[" and finish with a certain string, e.g. "apal". So all chars in between these two would be shown as well. Given an input such as: [44060]apal223reaea[55000]opoer4nr4on[95749]assad fdfdf Bhassrj sdaapald33qdq3d3da3ded[66000]dsfsldfsfldkj[77000]porpo4o4o3j3mlkfxxxx[101335]KaMMMM MMM lapa[131322]sadasds ddd apaladsdas[138133]sadasdadasddsss KMMapaldsadsadwe[150000]idhoqijdoiwjodwiejdw The output would be something like [44060]apal[95749]assad fdfdf Bhassrj sdaapal[101335]KaMMMM MMM lapal[131322]sadasds ddd apal[138133]sadasdadasddsss KMMapal | Use: grep -o '\[.*apal' file.txt Replace file.txt with the actual filename. On the other hand, if you want to match [ at the start of the line: grep -o '^\[.*apal' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/277089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165589/"
]
} |
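The accepted pattern checked against a few of the sample lines from the question (each bracketed record on its own line, as in the original input):

```shell
sample=$(mktemp)
printf '%s\n' \
  '[44060]apal223reaea' \
  '[55000]opoer4nr4on' \
  '[95749]assad fdfdf Bhassrj sdaapald33qdq3d3da3ded' \
  > "$sample"

# -o prints only the matched part: from "[" up to (and including) "apal"
grep -o '\[.*apal' "$sample"
# prints:
# [44060]apal
# [95749]assad fdfdf Bhassrj sdaapal
```

The second sample line has no match, so grep simply skips it.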
277,137 | Do we need to deploy Varnish if our web application is using memcached, or do we need memcached if we are using Varnish to cache web content? Can someone recommend some scenarios where we should use one or the other, or maybe both? | An HTTP proxy server and memcached are different technologies that solve different problems and apply at different layers of your software stack. Both can be useful. An HTTP proxy server sitting in front of your application can respond to requests from its cache, saving your application from having to handle some of the request load. This only works if your application outputs content that is cacheable and if end users request the content more than once. In order for the content to be cacheable, your application needs to set the appropriate HTTP headers to let proxy servers (and browsers) know what is cacheable and for how long. In the case of requests that make it all the way to your application (they miss the HTTP proxy cache or there is no HTTP proxy), your application has to compute the content it needs to send back. If this computation is expensive but parts of the data can be cached from previous requests, memcached is a good way for your application to stash away the results of [parts of] those computations so they can be reused later. Your application needs to be written specifically to do this, and to connect to memcached instances to get and set this data. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/277137",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148778/"
]
} |
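The memcached role described above is the cache-aside pattern: check the cache, compute on a miss, store and reuse. A file-backed sketch of that logic in shell (the cache directory stands in for memcached's get/set; function names are illustrative):

```shell
cache_dir=$(mktemp -d)

expensive_compute() {
  # Placeholder for the costly work; log each call so hits vs misses are visible.
  echo "computed $1" >> "$cache_dir/.log"
  printf 'value-for-%s\n' "$1"
}

cached_compute() {
  key=$1
  cache_file="$cache_dir/$key"
  if [ ! -f "$cache_file" ]; then      # miss: compute once, then store
    expensive_compute "$key" > "$cache_file"
  fi
  cat "$cache_file"                    # hit (or the freshly stored result)
}

cached_compute page1     # prints: value-for-page1 (computed)
cached_compute page1     # prints: value-for-page1 (served from the cache)
```

The log ends up with a single "computed page1" line: the second call never reaches expensive_compute.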
277,152 | I have an array that contains details about each NIC. Each array index consists of three space-separated values. I would like to have a nice table as output. Is there a way to assign formatting patterns to each value in an index? So far this is what I use: NIC_details[0]='wlp2s0 192.168.1.221 xx:xx:xx:xx:xx:xx'NIC_details[1]='wwan0 none xx:xx:xx:xx:xx:xx'NIC_details[2]='virbr0 192.168.122.1 00:00:00:00:00:00'printf "%-20s\n" "${NIC_details[@]}" Is there a way to access the values in an array index by their position $1, $2, and $3, knowing that they are space-separated, and then apply a formatting pattern for each $X position? I have also tried echo "$i" | column -t but it does not bring the desired result. | Here's a slight modification to what you were already doing: set -f # to prevent filename expansionfor i in ${!NIC_details[@]}do printf "%-20s" ${NIC_details[i]} printf "\n"done I added a for loop to go over all of the array indexes -- that's the ${!NIC... syntax. Since it's a numeric array, you could alternatively loop directly over the indexes. Note that I intentionally left ${NIC_details[i]} unquoted so that the shell would split the value on the spaces that you've used to separate the values. printf thus sees the 3 values and so repeats the %-20s formatting for each of them. I then add a newline after each NIC to make a nice table. The set -f at the top is important if we leave variables unquoted; otherwise the shell will step in and attempt to glob any wildcards it sees in unquoted variables. Sample output: wlp2s0 192.168.1.221 xx:xx:xx:xx:xx:xxwwan0 none xx:xx:xx:xx:xx:xxvirbr0 192.168.122.1 00:00:00:00:00:00 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/277152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144310/"
]
} |
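A runnable version of the loop (requires bash for the array syntax; it iterates over the indexes of NIC_details, the array that actually holds the data):

```shell
set -f    # prevent glob expansion of the intentionally unquoted values

NIC_details[0]='wlp2s0 192.168.1.221 xx:xx:xx:xx:xx:xx'
NIC_details[1]='wwan0 none xx:xx:xx:xx:xx:xx'
NIC_details[2]='virbr0 192.168.122.1 00:00:00:00:00:00'

for i in "${!NIC_details[@]}"; do
  printf '%-20s' ${NIC_details[i]}   # unquoted on purpose: splits into 3 fields
  printf '\n'
done

set +f    # restore globbing
```

Each row comes out as three left-justified 20-character columns, giving the aligned table the question asked for.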
277,203 | One program created lots of nested sub-folders. I tried to use the command rm -fr * to remove them all, but it's very slow. Is there any faster way to delete them all? | The fastest way to remove them from that directory is to move them out of there; after that, just remove them in the background: mkdir ../.tmp_to_removemv -- * ../.tmp_to_removerm -rf ../.tmp_to_remove & This assumes that your current directory is not the toplevel of some mounted partition (i.e. that ../.tmp_to_remove is on the same filesystem). The -- after mv (as edited in by Stéphane) is necessary if you have any file/directory names starting with a - . The above removes the files from your current directory in a fraction of a second, as it doesn't have to recursively handle the subdirectories. The actual removal of the tree from the filesystem takes longer, but since it is out of the way, its actual efficiency shouldn't matter that much. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/277203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166284/"
]
} |
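The move-then-delete trick end-to-end on a scratch tree (the rm is run in the foreground here so the result can be checked; append & as in the answer to push it into the background):

```shell
# Scratch directory full of nested sub-folders.
top=$(mktemp -d)
cd "$top"
mkdir -p a/b/c d/e/f
touch a/b/c/file1 d/e/f/file2

# Step 1: move everything out of the way -- near-instant, since only
# the top-level entries are renamed (same filesystem required).
mkdir "../.tmp_to_remove_$$"       # PID suffix to avoid collisions
mv -- * "../.tmp_to_remove_$$"

# Step 2: the slow recursive delete, now out of the way.
rm -rf "../.tmp_to_remove_$$"

ls -A "$top"    # prints nothing: the directory was emptied in step 1
```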
277,208 | I want to change the wallpaper of xmonad from the command line. How can I do this? | You can use feh : feh --bg-scale /path/to/wallpaper You can also set up bash scripts to change the wallpaper to a random one every X minutes (I use it with feh), get a random wallpaper on each boot, etc. The Arch wiki explains it nicely. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/277208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
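A hedged sketch of the random-wallpaper script idea the answer mentions (the directory and filenames are placeholders; the feh call is guarded so the selection logic also runs without a display):

```shell
wall_dir=$(mktemp -d)    # stand-in for your wallpaper directory
touch "$wall_dir/a.jpg" "$wall_dir/b.png" "$wall_dir/c.jpg"

# Pick one file at random.
pick=$(find "$wall_dir" -type f | shuf -n 1)

# Only call feh when it exists and a display is available.
if [ -n "$DISPLAY" ] && command -v feh >/dev/null 2>&1; then
  feh --bg-scale "$pick"
fi
printf 'selected: %s\n' "$pick"
```

Point wall_dir at the real directory and run this from cron or a sleep loop to rotate the wallpaper every X minutes, as the answer suggests.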