source_id | question | response | metadata
---|---|---|---|
189,777 | I need to be able to execute an executable shell script ( sh ) with a double click. I set the executable flag on the permissions for the file, and yet when I double click on it, it opens in a text editor. I do not have any options in my UI under the files properties menu to use a custom command or anything. It only lists various applications which are installed. I just want it to execute, nothing more. How can I accomplish this? | To run your script by double clicking on its icon, you will need to create a .desktop file for it: [Desktop Entry]Name=My scriptComment=Test hello world scriptExec=/home/user/yourscript.shIcon=/home/user/youricon.pngTerminal=falseType=Application Save the above as a file on your Desktop with a .desktop extension. Change /home/user/yourscript.sh and /home/user/youricon.png to the paths of your script and whichever icon you want it to have respectively and then you'll be able to launch by double clicking it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105070/"
]
} |
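A minimal sketch of the .desktop launcher described in the answer above, written out with one key per line; the file name and the script/icon paths are placeholders to adjust for your own setup:

```bash
# write a launcher onto the Desktop; point Exec and Icon at your own script and icon
cat > ~/Desktop/myscript.desktop <<'EOF'
[Desktop Entry]
Name=My script
Comment=Test hello world script
Exec=/home/user/yourscript.sh
Icon=/home/user/youricon.png
Terminal=false
Type=Application
EOF
chmod +x ~/Desktop/myscript.desktop   # some desktops also require the launcher itself to be executable
```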
189,784 | I have a PHP code that generates the file name on which wget will append its logs. I generated 2000+ files, but the problem is I am having trouble working with them because I had a mistake of putting PHP_EOL as part of its name, that code will add LF/line feed/%0A at its name Example of such file name (when accessed via browser, when put on /var/www/html) http://xxxx/wget_01_a%0a.txt notice the %0a before the extension name I messed up, and I wish there's a rename batch that will search through all files and if it found line feed it would rename it without the line feed so it would just be http://xxxx/wget_01_a.txt I am not pretty sure how to handle this because seems like when I ls on putty all special character not limited to that unwanted char becomes ? , what I only wish to target is that line feed. | Using the utility rename from util-linux, which CentOS 6 provides, and assuming bash: rename $'\n' '' wget_* This asks to delete newline characters from the names of listed files. I recommend trying it out on a small subset to ensure it does what you want it to (note that rename on CentOS 7 supports a -v switch to show you what changes it is making). If instead you were on a distribution that provides the Perl-based rename : rename -n 's/\n//g' wget_* And then run without -n to actually perform the renaming. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79493/"
]
} |
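If neither rename variant is available, a plain-bash loop can strip the stray line feeds just as well; this is only a sketch and, as the answer advises, worth trying on a copy of a few files first:

```bash
# remove every literal newline character from the names of the affected files
for f in wget_*; do
  new=${f//$'\n'/}                  # delete each LF in the name
  [ "$f" = "$new" ] || mv -- "$f" "$new"
done
```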
189,787 | What is the difference between echo and echo -e ? And which quotes ("" or '') should be used with the echo command? i.e: echo "Print statement" or echo 'Print statement' ? Also, what are the available options that can be used along with echo ? | echo by itself displays a line of text. It will take any thing within the following "..." two quotation marks, literally, and just print out as it is. However with echo -e you're making echo to enable interpret backslash escapes. So with this in mind here are some examples INPUT: echo "abc\n def \nghi" OUTPUT:abc\n def \nghiINPUT: echo -e "abc\n def \nghi"OUTPUT:abc def ghi Note: \n is new line, ie a carriage return. If you want to know what other sequences are recognized by echo -e type in man echo to your terminal. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/189787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99113/"
]
} |
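The differences are easiest to see side by side; a quick demonstration, assuming bash's builtin echo:

```bash
echo "abc\n def \nghi"      # prints the backslash sequences literally: abc\n def \nghi
echo -e "abc\n def \nghi"   # interprets \n, so the output spans three lines

var=world
echo "Hello $var"           # double quotes still expand variables: Hello world
echo 'Hello $var'           # single quotes print everything literally: Hello $var
```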
189,790 | I have a Lenovo Yoga 3 that apparently has a new Broadcom Bluetooth device. The bluetooth is detected at boot and when I try to pair a something in gnome, I can see a list of devices but none of them pair. How can I get this device to work? lsusbBus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 001 Device 004: ID 048d:8386 Integrated Technology Express, Inc.Bus 001 Device 003: ID 5986:0535 Acer, IncBus 001 Device 002: ID 0489:e07a Foxconn / Hon HaiBus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub usb-devicesT: Bus=01 Lev=01 Prnt=01 Port=03 Cnt=02 Dev#= 2 Spd=12 MxCh= 0D: Ver= 2.00 Cls=ff(vend.) Sub=01 Prot=01 MxPS=64 #Cfgs= 1P: Vendor=0489 ProdID=e07a Rev=01.12S: Manufacturer=Broadcom CorpS: Product=BCM20702A0S: SerialNumber=38B1DBE337E4C: #Ifs= 4 Cfg#= 1 Atr=e0 MxPwr=0mAI: If#= 0 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=01 Prot=01 Driver=btusbI: If#= 1 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=01 Prot=01 Driver=btusbI: If#= 2 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=ff Driver=(none)I: If#= 3 Alt= 0 #EPs= 0 Cls=fe(app. ) Sub=01 Prot=01 Driver=(none) | As of version 3.19, this device is supported in the Linux kernel, but you need to manually provide the device's firmware to the kernel. Finding the Firmware: You can find the firmware in the device's Windows driver, which you can download from Lenovo (or your computer manufacturer's website). Many drivers can just be unzipped, but for this particular computer, the driver is an .exe file and must be extracted with wine . wine 4ab802rf.exe Follow the "installation" instructions. The wizard will extract the .exe file and on the last step will ask to install it. Uncheck "Install Broadcom Bluetooth Driver now": The driver file has been extracted to ~/.wine/driver_c/drivers/Broadcom Bluetooth Driver/ Identifying the right file In my case, there are 20 - 30 firmware files in the extracted package. Which one corresponds to your device is revealed in one of the driver's inf files. Find your device ID from the output of lsusb or if that's unclear, usb-devices . In this case, it's e07a . Then grep the inf files to find out which one talks about that device: grep -c E07A -r --include \*.infWin32/LD/bcbtumsLD-win7x86.inf:0Win32/bcmhidnossr.inf:0Win32/btwl2cap.inf:0Win32/btwavdt.inf:0Win32/btwrchid.inf:0Win32/bcbtums-win8x86-brcm.inf:17Win32/btwaudio.inf:0Win64/LD/bcbtumsLD-win7x64.inf:0Win64/bcmhidnossr.inf:0Win64/btwl2cap.inf:0Win64/btwavdt.inf:0Win64/btwrchid.inf:0Win64/bcbtums-win8x64-brcm.inf:17Win64/btwaudio.inf:0Autorun.inf:0 So in this driver, you can look in either Win32/bcbtums-win8x86-brcm.inf or Win64/bcbtums-win8x64-brcm.inf . Look through the file and find the hex file that is mentioned near E07A : ;;;;;;;;;;;;;RAMUSBE07A;;;;;;;;;;;;;;;;;[RAMUSBE07A.CopyList]bcbtums.sysbtwampfl.sysBCM20702A1_001.002.014.1443.1496.hex So the fimware is in the same directory and named BCM20702A1_001.002.014.1443.1496.hex . Converting and Placing the Firmware Download and compile the hex2hcd tool . git clone https://github.com/jessesung/hex2hcd.gitcd hex2hcdmake Convert the firmware to hcd : hex2hcd BCM20702A1_001.002.014.1443.1496.hex firmware.hcd Rename and move the firmware to the system's firmware subdirectory: su -c 'mv firmware.hcd /lib/firmware/brcm/BCM20702A0-0489-e07a.hcd' The name of this file is critical. The two sets of four characters, in this case 0489-e07a , should match your device's Vendor ID and Product ID. Loading the Firmware The easiest way to load the firmware is to power off your computer and turn it on again. 
Note that the computer should be turned off; a simple reboot may not be sufficient to reload this firmware. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34796/"
]
} |
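The steps above condensed into one short sequence; run it from the directory the Windows driver was extracted into, and note that the .hex and .hcd names are specific to this Broadcom device (vendor 0489, product e07a):

```bash
# find which .inf file mentions the device ID
grep -ci e07a -r --include '*.inf' .
# convert the firmware that .inf names, then install it as <chip>-<vendor>-<product>.hcd
hex2hcd BCM20702A1_001.002.014.1443.1496.hex firmware.hcd
sudo mv firmware.hcd /lib/firmware/brcm/BCM20702A0-0489-e07a.hcd
```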
189,805 | tmux has its control mode , activated with -CC , to allow the terminal emulator to control tmux. iTerm2 uses this to great effect for allowing tmux windows to appear as separate iTerm windows. Are there other terminal emulators that support control mode? I am particularly interested in terminal emulators / SSH clients for Windows that might offer such functionality, but Linux/Unix ones are interesting as well. | Edit : see the other answer, give a try to EternalTerminal : It has tmux -CC support and can be installed on linux. It may not be exactly what you need though. What you are asking does not seem to be developed on linux yet with the mainstream terminal emulators : Terminator has an open issue, somebody seems to be working on it actively : https://bugs.launchpad.net/terminator/+bug/1301605 Gnome-terminal doesn't seem to have any support. There is an open bug in upstream bugzilla: GNOME/vte#2177 - tmux integration Konsole also has an open bug: bug 372496 - Support tmux control mode Eterm : Nothing . Rxvt : Nothing . List not exhaustive; comments welcome. Another idea would be to run your dear Iterm2 mac binary in your Linux environment. But the solutions currently available will not support it : Darling ? It uses a Wine-like approach but "At this point, does not yet run OS X application with a GUI" http://www.darlinghq.org/ Maloader ? "Running all Mac binaries isn't my goal. Only command line tools such as compiler tool chain can be executed by this loader" https://github.com/shinh/maloader#readme | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/189805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2538/"
]
} |
189,878 | I have been using a rsync script to synchronize data at one host with the data at another host. The data has numerous small-sized files that contribute to almost 1.2TB. In order to sync those files, I have been using rsync command as follows: rsync -avzm --stats --human-readable --include-from proj.lst /data/projects REMOTEHOST:/data/ The contents of proj.lst are as follows: + proj1+ proj1/*+ proj1/*/*+ proj1/*/*/*.tar+ proj1/*/*/*.pdf+ proj2+ proj2/*+ proj2/*/*+ proj2/*/*/*.tar+ proj2/*/*/*.pdf.........- * As a test, I picked up two of those projects (8.5GB of data) and I executed the command above. Being a sequential process, it tool 14 minutes 58 seconds to complete. So, for 1.2TB of data it would take several hours. If I would could multiple rsync processes in parallel (using & , xargs or parallel ), it would save my time. I tried with below command with parallel (after cd ing to source directory) and it took 12 minutes 37 seconds to execute: parallel --will-cite -j 5 rsync -avzm --stats --human-readable {} REMOTEHOST:/data/ ::: . This should have taken 5 times less time, but it didn't. I think, I'm going wrong somewhere. How can I run multiple rsync processes in order to reduce the execution time? | Following steps did the job for me: Run the rsync --dry-run first in order to get the list of files those would be affected. $ rsync -avzm --stats --safe-links --ignore-existing --dry-run \ --human-readable /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log I fed the output of cat transfer.log to parallel in order to run 5 rsync s in parallel, as follows: $ cat /tmp/transfer.log | \ parallel --will-cite -j 5 rsync -avzm --relative \ --stats --safe-links --ignore-existing \ --human-readable {} REMOTE-HOST:/data/ > result.log Here, --relative option ( link ) ensured that the directory structure for the affected files, at the source and destination, remains the same (inside /data/ directory), so the command must be run in the source folder (in example, /data/projects ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/189878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48188/"
]
} |
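The two commands from the answer, reflowed for readability; REMOTE-HOST and the /data paths are the example values from the question:

```bash
# 1. dry run: collect the list of files that would be transferred
rsync -avzm --stats --safe-links --ignore-existing --dry-run \
      --human-readable /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log
# 2. from the source folder, feed that list to parallel (five rsyncs at a time);
#    --relative keeps the directory layout identical on the destination
cd /data/projects
cat /tmp/transfer.log | parallel --will-cite -j 5 \
      rsync -avzm --relative --stats --safe-links --ignore-existing \
      --human-readable {} REMOTE-HOST:/data/ > result.log
```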
189,880 | I'd like to know the difference between Linux "hardware time" and "system time". What time does the computer use from the point of view of a process querying it ?Is it hardware or system time? | The system time is maintained by the operating system, it is the one the processes will get when querying the date/time. Being stored in RAM, reading it is a fast operation. The hardware time is maintained by a real clock powered by a battery. That means this clock persist a reboot. However, reading it implies performing a I/O operation which is more resource intensive than reading the system clock. For that reason, the hardware clock is seldom used, mainly at boot time to set the system clock initial value, and then optionally to adjust/synchronize it either manually or through NTP. Note that the hardware clock might be set to either the local time or UTC time while the system clock is always set on Unix/Linux systems to UTC time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106040/"
]
} |
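To see the two clocks the answer describes on a typical Linux box (reading the RTC needs root):

```bash
date                   # system time, kept by the kernel in RAM
sudo hwclock --show    # hardware (RTC) time, read from the battery-backed clock
timedatectl            # on systemd systems: shows both, plus whether the RTC is UTC or local
```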
189,905 | I know linux has 3 built-in tables and each of them has its own chains as follow: FILTER : PREROUTING, FORWARD, POSTROUTING NAT : PREROUTING, INPUT, OUTPUT, POSTROUTING MANGLE : PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING But I can't understand how they are traversed, in which order, if there is.For example, how are they traversed when: I send a packet to a pc in my same local network when I send a packet to a pc in a different network when a gateway receives a packet and it has to forward it when I receive a packet destinated to me any other case (if any) | Wikipedia has a great diagram to show the processing order. For more details you can also look at the iptables documentation, specifically the traversing of tables and chains chapter . Which also includes a flow diagram . The order changes dependent on how netfilter is being used (as a bridge or network filter and whether it has interaction with the application layer). Generally (though there are more devil in the details in the chapter linked above) the chains are processed as: See the INPUT chain as "traffic inbound from outside to this host". See the FORWARD chain as "traffic that uses this host as a router" (source and destination are not this host). see the OUTPUT chain as "traffic that this host wants to send out". PREROUTING / POSTROUTING has different uses for each of the table types (for example for the nat tables, PREROUTING is for inbound (routed/forwarded) SNAT traffic and POSTROUTING is for outbound (routed/forwarded) DNAT traffic. Look at the docs for more specifics. The various tables are: Mangle is to change packets (Type Of Service, Time To Live etc) on traversal. Nat is to put in NAT rules. Raw is to be used for marking and connection tracking. Filter is for filtering packets. So for your five scenarios: If the sending host your host with iptables, OUTPUT The same as above The FORWARD chain (provided the gateway is the host with iptables) If "me" is the host with iptables, INPUT Look at the chain rules above (which is the general rule of thumb) and the flow diagram (and this also varies on what you are trying to achieve with IPTables) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/189905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66485/"
]
} |
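One practical way to follow the explanation above is to list each table separately and watch the packet counters move; a small sketch:

```bash
# list the chains of each table with per-rule packet/byte counters
sudo iptables -t filter -L -n -v
sudo iptables -t nat    -L -n -v
sudo iptables -t mangle -L -n -v
# watch the INPUT counters grow while traffic destined to this host arrives
sudo watch -n1 'iptables -t filter -L INPUT -n -v'
```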
189,911 | A. { echo "Hello World"; } >outputfile B. ( echo "Hello World" ) >outputfile C. ./anothershell.sh D. /bin/echo "Hello World" Which is right? And what kind of command can run in the same process of the current shell? | Only A will run within the process of the current shell. B will run in a subshell because you asked for a subshell by using paranehteses. C and D will both run outside of the current shell process because they are invocations of external commands. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105630/"
]
} |
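A quick way to verify this, assuming bash (BASHPID reports the PID of the shell actually executing the command):

```bash
echo "current shell:           $BASHPID"
{ echo "A, braces, same process: $BASHPID"; }   # same PID -> runs in the current shell
( echo "B, parens, subshell:     $BASHPID" )    # different PID -> a forked subshell
# C (./anothershell.sh) and D (/bin/echo ...) each start an entirely separate process
```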
189,928 | I've got bunch of files with ill-formed numbering: prefix-#.ext | for files with number 1-9prefix-##.ext | for files with number 10-99prefix-###.ext | for files with number 100-999 Due to further processing I need all of their names to be in format: prefix-###.ext . Is there any easy way to do that? | On Debian, Ubuntu and derivatives, you can use the rename Perl script : rename 's/(?<=-)([0-9]+)/sprintf "%03d", $1/e' prefix-*.ext Some systems may have this command installed as prename or perl-rename . Note that this is not the rename utility from the util-linux suite which does not provide an easy way to do this. In zsh, you can use zmv to rename and the l parameter expansion flag to pad with zeroes. autoload -U zmvzmv '(prefix-)(*)(.ext)' '$1${(l:3::0:)2}$3' You can also do this with a plain shell loop. Shells don't have nice string manipulation constructs; one way to pad with zeroes is to add 1000 and strip off the leading 1 . for x in prefix-*.ext; do n=${x%.ext}; n=${x##*-}; n=$((n+1000)) mv "$x" "${x%-*.ext}${n#1}${x##*-}"done Another way is to call the printf utility. for x in prefix-*.ext; do n=${x%.ext}; n=${x##*-} mv "$x" "${x%-*.ext}$(printf %03d "$n")${x##*-}"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/189928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13428/"
]
} |
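A compact, self-contained variant of the plain-shell approach, assuming bash and the prefix-N.ext naming from the question; printf does the zero-padding:

```bash
for x in prefix-*.ext; do
  n=${x%.ext}                                   # drop the extension
  n=${n##*-}                                    # keep only the number
  printf -v new 'prefix-%03d.ext' "$((10#$n))"  # force base 10 and pad to three digits
  [ "$x" = "$new" ] || mv -- "$x" "$new"
done
```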
190,027 | I am logging on to a node which I know has near 100% cpu usage (20 single process jobs using nearly 100% cpu each). When I use top interactively, the first iteration, it gives me about 20% ni and the next iteration and all after it correctly gives 95+% ni usage. I am wanting to pipe the output of top via top -bn1 > outfile , but the first iteration of top -bn1 (the only iteration) gives the incorrect CPU usage. If I use top -bn2 , then the second iteration gives the correct usage, but that is too much output. How can I get top -bn1 to correctly give me the CPU usage? I am writing this script for usage statistics, so if necessary, I can go back and run an analysis on the processes independently to generate my own CPU usage, but it would be nice if top would give me the correct usage right off the bat. EDIT: mpdstat -P ALL gives me the same, incorrect, initial usage statistics. It would be nice to get that figured out as well too. I can use mpstat -P ALL 1 1 , but this gives the output twice. | Just drop the output of the first iteration. top -bn2 | awk '/^top -/ { p=!p } { if (!p) print }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32234/"
]
} |
190,065 | I have a client that updates/writes into myfile.csv arbitrarily. I've written the following code, I left the MySQL snippet out: while truedo awk_variables_value_array=`tail -n2 myfile.csv | awk -F, '$7 == "status" {print $4, $5, $10 }'` var1=${awk_variables_value_array[0]} var2=${awk_variables_value_array[1]} var3=${awk_variables_value_array[2]} if[ "var3" -gt "0" ] --MYSQL SNIPPET IS-- fidone Q: tail -n2 reads the last 2 lines, how can I change it is so it's the second last line the file: where n is the last line of the file line 1line 2line 3line 4....line n-2line n-1line n current output yields: line n-1line n I would like it so that the output is: line n-1 | How about tail -n2 myfile.csv | head -n1 | awk .... ? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102813/"
]
} |
190,088 | I am on Debian Wheezy (stable) and routinely update my system via the basic: sudo apt-get updatesudo apt-get dist-upgrade In the future, once Jessie becomes the stable release, I wonder whether Wheezy will automagically become Jessie if I keep doing dist-upgrade , or not. Will there be ANY manual steps needed, to make sure to always be on the current Stable release (years into the future, e.g. Sid after Jessie), or am I set to go as I am? E.g. do I have to modify my sources.list file in some sort of way to ensure Jessie will simply saunter in without any manual steps down the track, or will some 'big' update ('dist-upgrade') do it all for me and change all instances of wheezy to jessie when it knows to do so? (Every single line in my sources.list has wheezy in it. Perhaps I need only remove wheezy from them?) I am a bit of a newcomer (from OS X, and before that Windows), so am not sure how 'release upgrades' on the same channel can be done automatically on Debian - where, OS X simply offers, via its automatic updates, full upgrades to its next (stable/ready) release with no manual checking required or complicated steps apart from normal system update checking. | If the lines in your sources.list say "wheezy", you will stay with Wheezy even when Jessie is released. If you change those lines to say "stable" instead, apt will upgrade you to Jessie when it's released, because "stable" will become an alias for "jessie" instead of "wheezy". (And if you change those lines to say "jessie", you'll upgrade to Jessie now , even though it's still in testing and hasn't been released as "stable" yet.) Although it may be tempting to change your sources.list to say "stable" so that you upgrade to new stable releases automatically, I don't recommend it. The upgrade process may have special steps you'll want or need to do in addition to (and possibly before ) upgrading packages, so it's better to wait for Jessie to be released and then look at the release notes before making the switch. (In practice, just upgrading the packages is usually okay, but it's safer to wait and read the release notes first. Look before you leap.) BTW, Sid will never become a stable release. It's the permanent name of the "unstable" repository, and doesn't participate in the progression of names through the "testing" and "stable" aliases. After Jessie is released, some other Toy Story character will be chosen for the new "testing", and Sid will remain unstable as ever. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106556/"
]
} |
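Once Jessie is actually released and you have read the release notes, the codename switch itself is a one-liner; a sketch (back up sources.list first, and remember that files under /etc/apt/sources.list.d/ may need the same edit):

```bash
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i 's/\bwheezy\b/jessie/g' /etc/apt/sources.list
sudo apt-get update
sudo apt-get dist-upgrade
```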
190,140 | Background I'm setting up a new build, with all new hardware, tabula rosa . I want to have multiple Linux installations and common data partitions. From what I'e gathered so far, using new hardware and up-to-date kernels, I should be able to use rEFInd as a simple boot manager and use a fully modern boot process. I've read Rod's general instructioms , but I need some more specific advice. Question Since disk partition editors tend to "helpfully" hide the EFI partition, how can I set that up on a new unformatted disk? With gparted 0.16.1, I created a gpt type partition table. But, there's no indication that this is the case: the display looks no different than before or a legacy partion table in place. So did it do anything? The New partition command gives no options for the special EFI reserved partition, so did it do that automatically too? Constraints and Assumptions There is no existing OS, and no optical drives. Assume that any existing contents on the ssd should be blown away (junkware from the manufacturer or previous attempts to partition). I'm booting UBCD from a USB thumbdrive, so using gparted or other tools included in the Partion Magic image would be easiest. Once I have a proper GPT disk with the special EFI partition, I'm comfortable using gparted etc. for addional partions, as I've done as long as there have been PC's with HDD's. | Current util-linux versions of fdisk support GPT, the one I'm looking at here is fdisk from util-linux 2.24.2 (reported via fdisk -v ). Run fdisk /dev/whatever . Have a look at the options with m . Note these change depending on the state of the partition table. First check what state the disk is currently in with p . Note the Disklabel type ; if it is gpt you don't have to do anything, you can delete the existing partitions and start creating your own. If not, use the g option. This will eliminate any existing partitions because fdisk does not convert the MBR table. You can now start adding partitions with n . For the EFI partition, use t to set the type to 1 , then the table should read, e.g., Device Start End Size Type /dev/sdb1 256 122096640 465.8G EFI System Obviously that's a bit silly, but hopefully the point is clear. None of your changes take effect until you use w and exit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190140",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106567/"
]
} |
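If you prefer a non-interactive tool to the fdisk dialogue above, sgdisk (from the gdisk package) can create the GPT and the EFI System Partition in one go; a sketch with /dev/sdX as a placeholder for your disk, and destructive to anything already on it:

```bash
sudo sgdisk --zap-all /dev/sdX                               # wipe old MBR/GPT structures
sudo sgdisk --new=1:0:+512M --typecode=1:ef00 \
            --change-name=1:"EFI System Partition" /dev/sdX  # small ESP at the start of the disk
sudo mkfs.fat -F32 /dev/sdX1                                 # ESPs are FAT32
```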
190,163 | I'm trying to figure out more ways to see if a given host is up, solely using shell commands (primarily bash ). Ideally, it would be able to work with both hostnames and IP addresses. Right now the only native way I know of is ping, perhaps integrated into a script as described here. Any other ideas? | Ping is great to get a quick response about whether the host is connected to the network, but it often won't tell you whether the host is alive or not, or whether it's still operating as expected. This is because ping responses are usually handled by the kernel, so even if every application on the system has crashed (e.g. due to a disk failure or running out of memory), you'll often still get ping responses and may assume the machine is operating normally when the situation is quite the opposite. Checking services Usually you don't really care whether a host is still online or not, what you really care about is whether the machine is still performing some task. So if you can check the task directly then you'll know the host is both up and that the task is still running. For a remote host that runs a web server for example, you can do something like this: # Add the -f option to curl if server errors like HTTP 404 should fail tooif curl -I "http://$TARGET"; then echo "$TARGET alive and web site is up"else echo "$TARGET offline or web server problem"fi If it runs SSH and you have keys set up for passwordless login, then you have a few more options, for example: if ssh "$TARGET" true; then echo "$TARGET alive and accessible via SSH"else echo "$TARGET offline or not accepting SSH logins"fi This works by SSH'ing into the host and running the true command and then closing the connection. The ssh command will only return success if that command could be run successfully. Remote tests via SSH You can extend this to check for specific processes, such as ensuring that mysqld is running on the machine: if ssh "$TARGET" bash -c 'ps aux | grep -q mysqld'; then echo "$TARGET alive and running MySQL"else echo "$TARGET offline or MySQL crashed"fi Of course in this case you'd be better off running something like monit on the target to ensure the service is kept running, but it's useful in scripts where you only want to perform some task on machine A as long as machine B is ready for it. This could be something like checking that the target machine has a certain filesystem mounted before performing an rsync to it, so that you don't accidentally fill up its main disk if a secondary filesystem didn't mount for some reason. For example this will make sure that /mnt/raid is mounted on the target machine before continuing. if ssh "$TARGET" bash -c 'mount | grep -q /mnt/raid'; then echo "$TARGET alive and filesystem ready to receive data"else echo "$TARGET offline or filesystem not mounted"fi Services with no client Sometimes there is no easy way to connect to the service and you just want to see whether it accepts incoming TCP connections, but when you telnet to the target on the port in question it just sits there and doesn't disconnect you, which means doing that in a script would cause it to hang. While not quite so clean, you can still do this with the help of the timeout and netcat programs. 
For example this checks to see whether the machine accepts SMB/CIFS connections on TCP port 445, so you can see whether it is running Windows file sharing even if you don't have a password to log in, or the CIFS client tools aren't installed: # Wait 1 second to connect (-w 1) and if the total time (DNS lookups + connect# time) reaches 5 seconds, assume the connection was successful and the remote# host is waiting for us to send data. Connecting on TCP port 445.if echo 'x' | timeout --preserve-status 5 nc -w 1 "$TARGET" 445; then echo "$TARGET alive and CIFS service available"else echo "$TARGET offline or CIFS unavailable"fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67459/"
]
} |
190,184 | I followed an online article to remove my Apache from my system. I removed Apache through these commands sudo apt-get purge apache2 apache2-utilssudo rm -rf /etc/apache2-binsudo apt-get autoremove Then in the article they mentioned to remove the files and directories of the results of whereis apache2 . After running the command whereis apache2 I found /usr/sbin/apache2/usr/share/apache2/usr/lib/apache2/usr/share/man/man8/apache2.8.gz/etc/apache2 I removed the above directories and files through command sudo rm -rf file_or_directory_name . Then I tried sudo apt-get install apache2 I clicked 'y' when the system asked do you want to continue? . Then the error came: Setting up apache2 (2.4.7-1ubuntu4.4) ...cp: cannot stat ‘/usr/share/apache2/default-site/index.html’: No such file or directorydpkg: error processing package apache2 (--configure): subprocess installed post-installation script returned error exit status 1Errors were encountered while processing: apache2 E: Sub-process /usr/bin/dpkg returned an error code (1) I tried sudo apt-get install apache2 again after running the sudo apt-get update command, but still got the same error results. | To recover /usr/share/apache2/default-site/index.html you need to re-install apache2-data . Given your current situation, try sudo apt-get purge apache2-datasudo apt-get install apache2 Presumably your system ended up in that state because apt-get autoremove didn't uninstall apache2-data , but your rm -rf removed the files it contained. Then apt-get install apache2 would reckon that apache2-data was still installed and didn't need to be re-installed, but its files were gone... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106600/"
]
} |
190,188 | I'm looking to permanently disable IPv6 on a specific interface because it's broken and my question on Superuser to fix it is dead so how can I go about doing that? I've added net.ipv6.conf.eth0.disable_ipv6=1 to /etc/sysctl.conf but it doesn't work for some reason. At the moment I just use the sysctl command every time I turn on my PC to disable it. | First, edit /etc/default/grub and find the line: GRUB_CMDLINE_LINUX="" and change the line to say this instead (this will disable ipv6 completely): GRUB_CMDLINE_LINUX="ipv6.disable=1" alternatively, to leave the ipv6 stack functional but to disable assignment of ipv6 addresses you can use the following option instead: GRUB_CMDLINE_LINUX="ipv6.disable_ipv6=1" Finally, run: sudo update-grub and reboot to apply the changes. This will disable ipv6 at the kernel level so that it is never enabled from the get-go. Also, after making the following changes to /etc/sysctl.conf net.ipv6.conf.eth0.disable_ipv6 = 1 Run the following command to apply the changes: sudo sysctl -p Finally, if using the option to disable ipv6 in sysctl.conf, you need to also make sure ipv6 is commented out in /etc/hosts. See here https://wiki.archlinux.org/index.php/IPv6#Disable_functionality | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101019/"
]
} |
190,217 | I use following command to search for string ELF in plain text files under the current directory recursively: grep ELF -r . but it also searches in binary files (e.g. zip file, PDF file), and in code files such as HTML file and .js . How can I specify it to search only in plain text files that are not source code? | With GNU grep, pass --binary-files=without-match to ignore binary files. Source code files are text files, so they will be included in the results. If you want to ignore text files with certain extensions, you can use the --exclude option, e.g. grep -r --exclude='*.html' --exclude='*.js' … or you can instead include only explicitly-matching files, e.g. grep -r --include='*.txt' … If you want to ignore text files that are source code, you can use the file command to guess which files are source code. This uses heuristics so it may detect source code as non-source-code or vice versa. find -type f exec sh -c ' for x do case $(file <"$x") in *source*) :;; # looks like source code *text*) grep -H -e "$0" "$x";; # looks like text # else: looks like binary esac done' "REGEXP" {} + or find -type f exec sh -c ' for x do case $(file -i <"$x") in text/plain\;*) grep -H -e "$0" "$x";; # looks like text # else: looks like source code or binary esac done' "REGEXP" {} + Alternatively, you may use ack instead of grep. Ack integrates a file classification system based on file names. It's geared towards searching in source code by default, but you can tell it to search different types by passing the --type option. Search ALL files with ack may help. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
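A condensed variant of the find/file/grep idea above that keys on the MIME type rather than the free-text description; REGEXP is a placeholder for your search pattern:

```bash
find . -type f -exec sh -c '
  for f in "$@"; do
    case $(file -b --mime-type -- "$f") in
      text/plain) grep -H -e "$0" -- "$f" ;;   # plain text: search it
      *)          : ;;                         # source code, binaries, etc.: skip
    esac
  done
' "REGEXP" {} +
```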
190,230 | I have Kali Linux installed on my laptop as a dual boot with Windows 8.1 and the pci wireless adaptor inside of my laptop (Broadcom 802.11ac) apparently is not compatible with either as I've spend weeks trying to find any information on how to get it to work. So I decided it would just be easier to order a usb adaptor from amazon and I was wondering if anyone had experience with this and knew which would be a good adaptor that is relatively inexpensive that I can rely on.Thanks!-Cade | With GNU grep, pass --binary-files=without-match to ignore binary files. Source code files are text files, so they will be included in the results. If you want to ignore text files with certain extensions, you can use the --exclude option, e.g. grep -r --exclude='*.html' --exclude='*.js' … or you can instead include only explicitly-matching files, e.g. grep -r --include='*.txt' … If you want to ignore text files that are source code, you can use the file command to guess which files are source code. This uses heuristics so it may detect source code as non-source-code or vice versa. find -type f exec sh -c ' for x do case $(file <"$x") in *source*) :;; # looks like source code *text*) grep -H -e "$0" "$x";; # looks like text # else: looks like binary esac done' "REGEXP" {} + or find -type f exec sh -c ' for x do case $(file -i <"$x") in text/plain\;*) grep -H -e "$0" "$x";; # looks like text # else: looks like source code or binary esac done' "REGEXP" {} + Alternatively, you may use ack instead of grep. Ack integrates a file classification system based on file names. It's geared towards searching in source code by default, but you can tell it to search different types by passing the --type option. Search ALL files with ack may help. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88380/"
]
} |
190,241 | In another answer here on UNIX & Linux Stack Exchange Michael D Parker wrote , in response to someone saying that doing so was "safe", that: Usually you should NEVER edit the /etc/shadow file directly. So: Why should you never edit the /etc/shadow file directly? | There are several reasons not to edit /etc/passwd , /etc/shadow , /etc/group , /etc/gshadow or /etc/sudoers directly, but rather use vipw , vigr or visudo : If you make a syntax error, you may not be able to log in or become root anymore. Using the viXXX tools reduces this risk because the tool makes sanity checks before modifying the file. If the file is edited concurrently, whoever saves last will override the changes made by previous edits. This includes both an administrator editing the file and the file being modified because a user called passwd , chsh or chfn to change something about their account. If you use the appropriate tool, it will prevent concurrent modifications. This is mostly a concern on systems with multiple users, less so if you're the only user. On some systems (mostly or only *BSD), vipw updates multiple files (e.g. /etc/passwd and /etc/master.passwd ). This doesn't apply to Linux. vipw automatically creates a backup ( passwd- , shadow- , …), which is useful if you realize that you accidentally deleted a line. It's only useful if you realize before the next edit, so it doesn't replace version control and backups, but it can be very nice if you realize your mistake soon enough. visudo doesn't do this. You can edit the file directly. You'll just be taking an additional risk with no real advantage. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106635/"
]
} |
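The tools the answer mentions are invoked like this; each locks the file against concurrent edits, opens it in your $EDITOR, and (in visudo's case) checks the syntax before installing the result:

```bash
sudo vipw        # edit /etc/passwd
sudo vipw -s     # edit /etc/shadow
sudo vigr        # edit /etc/group
sudo vigr -s     # edit /etc/gshadow
sudo visudo      # edit /etc/sudoers
```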
190,248 | I'm running a shell script using terminal from userA . In the middle of this shell script, I switch to userB user using su , but this asks me for the password of userB and I have to manually enter the password through terminal. I'm asking if there is a way so that I can enter the password automatically without having me to stay beside the machine to manually enter the password? As there is a loop in this script and I don't want to keep staying all the time looking at the terminal to enter the password of userB when it asks me. Or if there is a way I can put the password in my shellscript so that the terminal won't wait for me to manually enter it? Could anyone please advise how this could be done? | sudo also allows NOPASSWD on specific entries in the /etc/sudoers configuration, if you can get to that. Like: userA ALL = (userB) NOPASSWD: ALL This will give userA full access to userB without password. Should probably only be used if userA can be trusted to lock his screen whenever leaving it … Alternatively, you can give userA access to only certain scripts. For instance: userA ALL = (userB) NOPASSWD: /home/userB/scripts-for-userA/ This lets userA run any command in the directory /home/userB/scripts-for-userA/ as userB. It's still only as secure as those commands are, though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85865/"
]
} |
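With a NOPASSWD rule like the ones above in place (added via visudo), userA's script can then run a command as userB without any prompt; a sketch, with the script path purely hypothetical:

```bash
# run one of userB's scripts as userB, non-interactively
sudo -u userB /home/userB/scripts-for-userA/nightly-task.sh
```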
190,250 | I can't explain the relations between updates for every release and the release itself. Let's make an example.Ubuntu has 14.04 LTS and 14.10.Both of them are still supported, so what is the difference if I apply updates?i.e., from https://packages.ubuntu.com I can clearly see that weechat package is updated to version 0.4 in 14.04, while it ships (or updates to) in 14.10. The question is: what is the purpose of OS updates in every release and how the updates are entering (or not) in a release? | sudo also allows NOPASSWD on specific entries in the /etc/sudoers configuration, if you can get to that. Like: userA ALL = (userB) NOPASSWD: ALL This will give userA full access to userB without password. Should probably only be used if userA can be trusted to lock his screen whenever leaving it … Alternatively, you can give userA access to only certain scripts. For instance: userA ALL = (userB) NOPASSWD: /home/userB/scripts-for-userA/ This lets userA run any command in the directory /home/userB/scripts-for-userA/ as userB. It's still only as secure as those commands are, though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190250",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
190,264 | I've set up a Soft Raid 1 using Debians built in RAID systems. I set up the raid because I had a space HDD when I set up the server and thought why not. The RAID is set up using what-ever Debian did when I installed the OS (sorry, not a linux techie). Now, how-ever I could really use the disk for a much more useful purpose. Is it easy to discontinue the raid without having to reinstall the OS, and how would I go about doing this? fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisk identifier: 0x000d9640 Device Boot Start End Blocks Id System/dev/sda1 2048 976771071 488384512 fd Linux raid autodetectDisk /dev/sdb: 500.1 GB, 500107862016 bytes255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisk identifier: 0x0009dd99 Device Boot Start End Blocks Id System/dev/sdb1 2048 950560767 475279360 83 Linux/dev/sdb2 950562814 976771071 13104129 5 ExtendedPartition 2 does not start on physical sector boundary./dev/sdb5 950562816 976771071 13104128 82 Linux swap / SolarisDisk /dev/sdc: 2000.4 GB, 2000398934016 bytes255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x6fa10d6b Device Boot Start End Blocks Id System/dev/sdc1 63 3907024064 1953512001 7 HPFS/NTFS/exFATDisk /dev/sdd: 7803 MB, 7803174912 bytes122 heads, 58 sectors/track, 2153 cylinders, total 15240576 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0xc3072e18 Device Boot Start End Blocks Id System/dev/sdd1 * 8064 15240575 7616256 b W95 FAT32 fstab content: # /etc/fstab: static file system information.## Use 'blkid' to print the universally unique identifier for a# device; this may be used with UUID= as a more robust way to name devices# that works even if disks are added and removed. See fstab(5).## <file system> <mount point> <type> <options> <dump> <pass># / was on /dev/sdb1 during installationUUID=cbc19adf-8ed0-4d20-a56e-13c1a74e9cf0 / ext4 errors=remount-ro 0 1# swap was on /dev/sdb5 during installationUUID=f6836768-e2b6-4ccf-9827-99f58999607e none swap sw 0 0/dev/sda1 /media/usb0 auto rw,user,noauto 0 0/dev/sdc1 /media/mns ntfs-3g defaults 0 2 | The easiest method, that requires no changes to your setup whatsoever, is probably to reduce the RAID to a single disk. That leaves you the option to add a disk and thus re-use the RAID at a later time. mdadm /dev/mdx --fail /dev/disky1mdadm /dev/mdx --remove /dev/disky1mdadm --grow /dev/mdx --raid-devices=1 --force The result would look something like this: mdx : active raid1 diskx1[3] 62519296 blocks super 1.2 [1/1] [U] Ta-daa a single disk "RAID1". If you want to get rid of the RAID layer altogether, it would involve mdadm --examine /dev/diskx1 (to find out the data offset), mdadm --zero-superblock (to get rid of the RAID metadata), and parted to move the partition by the data offset so it points to the filesystem, and then update bootloader and system configs to reflect the absence of RAID... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106658/"
]
} |
190,289 | I pressed something around my mouse pad (keys in the altgr region+mousepad - quite possibly multitouch) and suddenly the whole X11 display zoomed around 10%. That means I can see 90% of the 1920x1080 screen in a somewhat blurry version. When I move the cursor, the 90% follows the cursor, so by panning around I can see everything on the screen. Since it applies to everything my guess is that it is caused by xfwm or Xorg. If I suspend the machine, it seems to go away in the lock screen, but when the lock screen is unlocked, the blurriness and zoom re-appears. Taking a screenshot grabs what is displayed on my screen (i.e. the 90% but scaled to 1920x1080). I can see the usefulness of this in certain situations, but I would really like to exit it (other than rebooting). I use xfce on Linux Mint. | Alt + scrollwheel . So in my case, I had pressed Alt + two fingers on the mouse pad. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/190289",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
190,300 | I know deleting user with userdel username can cause information leakage and other security issues (as tutorial book says, administrator should delete user with -r option). But i tried it to see what happens. Now i have "unowned" directories left. I can delete them with cd /home; rm -r username . Is there any quick way of doing it? The book says: The root user can find "unowned" files and directories by running: find / -nouser -o -nogroup 2> /dev/null How does it work? | Okay, i solved it myself. With help of find / -nouser -o -nogroup 2> /dev/null you see all unlinked/unowned files on your system and you can delete every single file left on your system. If you didn't use -r option with userdel command, you can do the following to get rid of all old user's files. Delete removed user's home directory. cd /home; rm -r username Find remaining files: find / -nouser -o -nogroup 2> /dev/null . Delete every file in the output of previous command. Important edit : Instead of these 3 steps, use: find / -nouser -o -nogroup 2> /dev/null | xargs rm -fr It removes every single output of find command with force ( -f ) and recursive ( -r ) options of rm command. Quote from @Tim Pierce's answer on this question: xarg reads lines on standard input and turns them into command-line arguments, so you can effectively pipe data to the command line of another program. Edit #2 : According to @roaima, we need to use: find / \( -nouser -o -nogroup \) -print0 | xargs -0 rm -rf Good luck! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106608/"
]
} |
190,313 | I have a directory structure based on events. In these events, I have one type of file which does or does not contain a pattern for which I need to search. The positive results I would like to store in a separate file. My first approach was: find . /EVENT*/'filename' | grep 'searchtext' head -2 > error_file but that does not seem to work. I was told that it is not possible to combine find and grep in this way, so how do I need to do it? | Here is a general pattern: find /directory/containing/files -type f -exec grep -H 'pattern_to_search' {} + Here at first find will search all files in the directory containing necessary files, you can also use wildcards e.g. *.txt to look for only files ending with .txt . In that case the command would be: find /directory/containing/files -type f -name "*.txt" -exec grep -H 'pattern_to_search' {} + After finding the necessary files we are searching for the desired pattern in those files using -exec grep -H 'pattern_to_search' {} + ( -H will print the filename where the pattern is found). Here you can think of {} as containing the files to be searched and + is needed to be used at the end with -exec so that -exec will be forked only once i.e. grep will search as if the command is grep -H 'pattern_to_search' file_1.txt file_2.txt file_3.txt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106687/"
]
} |
190,317 | I would like to used the yes command so that GNU parted does not wait for user input : root@195-xxx-xxx-xxx:/proc# parted /dev/sda unit B resizepart 2 1166016512B Warning: Shrinking a partition can cause data loss, are you sure you want to continue?Yes/No? y Information: You may need to update /etc/fstab.root@195-xxx-xxx-xxx:/proc# echo $?0 However using yes does not work here : root@195-xxx-xxx-xxx:/proc# yes | parted /dev/sda unit B resizepart 2 166016512B Warning: Shrinking a partition can cause data loss, are you sure you want to continue?root@195-xxx-xxx-xxx:/proc# echo $?1 Edit: The --script option does not work as well : root@195-xxx-xxx-xxx:/proc# parted --script /dev/sda unit B resizepart 2 1166016512B Warning: Shrinking a partition can cause data loss, are you sure you want to continue?root@195-xxx-xxx-xxx:/proc# echo $?1 | If resizepart does not work, you might have to resort to rm and mkpart to achieve the same thing. Of course, this would require you to parse the partition table first in order to determine partition type and start offset. Unless you already know the necessary values. After all you had to get the 166016512B from somewhere too. parted has the --machine option to produce easily parseable output. On the other hand, examples of actually parsing it are not easily found. ;) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37426/"
]
} |
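A sketch of the machine-readable parsing the answer alludes to; parted -m prints colon-separated fields, so the start and end of partition 2 can be pulled out with awk before scripting an rm/mkpart:

```bash
# partition lines are: number:start:end:size:filesystem:name:flags
parted -m /dev/sda unit B print | awk -F: '$1 == "2" { print "start=" $2, "end=" $3 }'
```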
190,334 | I'm using macOS, and I want to find and replace a given word with another using the following sed command, but it makes no changes. What is the problem? sed 's/\<cat\>/dog/g' tt.txt | sed on macOS uses [[:<:]] in place of \< and [[:>:]] in place of \> . The macOS implementation of sed additionally does not support \b as a word boundary pattern. This is even though the sed(1) manual on macOS refers to the re_format(7) which does mention \< , \> , and \b . The file itself won't change with sed 's/[[:<:]]cat[[:>:]]/dog/g' tt.txt The only thing that happens is that sed will read the file and apply the expression to each input line. It will then write the transformed data to standard output. To save the output to a new file, use sed 's/[[:<:]]cat[[:>:]]/dog/g' tt.txt >tt.txt-new You may then replace the original file with the new data: sed 's/[[:<:]]cat[[:>:]]/dog/g' tt.txt >tt.txt-new && mv tt.txt-new tt.txt On macOS, you should also be able to perform the transformation in-place with sed -i '' 's/[[:<:]]cat[[:>:]]/dog/g' tt.txt although the manual says: It is not recommended to give a zero-length extension whenin-place editing files, as you risk corruption or partial content in situations where diskspace is exhausted, etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106699/"
]
} |
190,337 | If have a long text file and I want to display all the lines in which a given pattern occurs, I do: grep -n form innsmouth.txt | cut -d : -f1 Now, I have a sequence of numbers (one number per line) I would like to make a 2D graphical representation with the occurrence on the x-axis and the line number on the y-axis. How can I achieve this? | You could use gnuplot for this: primes 1 100 |gnuplot -p -e 'plot "/dev/stdin"' produces something like You can configure the appearance of the graph to your heart's delight, output in various image formats, etc. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/190337",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
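Tying it back to the question's grep output: the line numbers can be piped straight in, with gnuplot supplying the occurrence index as the row number (labels and styling are a matter of taste):

```bash
grep -n form innsmouth.txt | cut -d : -f1 | \
  gnuplot -p -e 'set xlabel "occurrence"; set ylabel "line number"; plot "/dev/stdin" with points'
```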
190,344 | I have two devices & mdash ; the first one has 20 partitions and the second has one big partition. I would like to clone specific partition (content + data) from device one to device two. How can I do this? How can I create in the second device the same partition with same features as the source partition? For example, I want to duplicate the partition type, filesystem type, flags, ... etc of the original partition. | You could use gnuplot for this: primes 1 100 |gnuplot -p -e 'plot "/dev/stdin"' produces something like You can configure the appearance of the graph to your heart's delight, output in various image formats, etc. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/190344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106702/"
]
} |
190,350 | I would like to try compile mmu-less kernel. From what I found in configuration there is no option for such a thing. Is it possible to be done? | You can compile a Linux kernel without MMU support on most processor architectures, including x86. However, because this is a rare configuration only for users who know what they are doing, the option is not included in the menu displayed by make menuconfig , make xconfig and the like, except on a few architectures for embedded devices where the lack of MMU is relatively common. You need to edit the .config file explicitly to change CONFIG_MMU=y to CONFIG_MMU=n . Alternatively, you can make the option appear in the menu by editing the file in arch/*/Kconfig corresponding to your architecture and replacing the stanza starting with CONFIG MMU by config MMU bool "MMU support" default y ---help--- Say yes. If you say no, most programs won't run. Even if you make the option appear in the menus, you may need to tweak the resulting configuration to make it internally consistent. MMU-less x86 systems are highly unusual. The easiest way to experiment with an MMU-less system would be to run a genuine MMU-less system in an emulator, using the Linux kernel configuration provided by the hardware vendor or with the emulator. In case this wasn't clear, normal Linux systems need an MMU. The Linux kernel can be compiled for systems with no MMU, but this introduces restrictions that prevent a lot of programs from running. Start by reading No-MMU memory mapping support . I don't think you can use glibc without an MMU, µClibc is usually used instead. Documentation from the µClinux project may be relevant as well (µClinux was the original project for a MMU-less Linux, though nowadays support for MMU-less systems has been integrated into the main kernel tree so you don't need to use µClinux). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106704/"
]
} |
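The .config edit the answer describes can be scripted; a sketch, assuming you run it from the top of an already-configured kernel tree:

```bash
sed -i 's/^CONFIG_MMU=y$/CONFIG_MMU=n/' .config
make olddefconfig    # let kbuild resolve the options that depend on MMU
```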
190,369 | Is there any way to find out on which line of the text document is some word which is matching pattern for example with grep or something. Thanks. | Yes, its possible with the -n option of grep . From man grep : -n, --line-number Prefix each line of output with the 1-based line number within its input file. For example, if you have a file named file.txt having: this isfoo testand this isbar test Now the output of grep -n "test" file.txt : $ grep -n "test" file.txt 2:foo test4:bar test Here 2 and 4 indicates the line numbers where the pattern is found. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106676/"
]
} |
190,398 | From what I understand, the purpose of a swap partition in Linux is to free up some "not as frequently accessed" information from RAM and move it to a specific partition on your harddrive (at the cost of making it slower to read from or write to), essentially allowing active applications more of the "high speed memory". This is great for when you are on a machine with a small amount of RAM and don't want to run into problems if you run out. However, if your system has 16 GB or 32 GB of RAM, and assuming you aren't running a MySQL database for StackExchange or editing a 1080p full length movie in Linux, should a swap partition be used? | Yes. You should most definitely always have swap enabled, except if there is a very compelling, forbidding reason (like, no disk at all, or only network disk present). Should you have a swap on the order of the often recommended ridiculous sizes (such as, twice the amount of RAM)? Well, no . The reason is that swap is not only useful when your applications consume more memory than there is physical RAM (actually, in that case, swap is not very useful at all because it seriously impacts performance). The main incentive for swap nowadays is not to magically turn 16GiB of RAM into 32 GiB, but to make more efficient use of the installed, available RAM. On a modern computer, RAM does not go unused. Unused RAM is something that you could just as well not have bought and saved the money instead. Therefore, anything you load or anything that is otherwise memory-mapped, anything that could possibly be reused by anyone any time later (limited by security constraints) is being cached. Very soon after the machine has booted, all physical RAM will have been used for something . Whenever you ask for a new memory page from the operating system, the memory manager has to make an educated decision: Purge a page from the buffer cache Purge a page from a mapping (effectively the same as #1, on most systems) Move a page that has not been accessed for a long time -- preferably never -- to swap (this could in fact even happen proactively, not necessarily at the very last moment) Kill your process, or kill a random process (OOM) Kernel panic Options #4 and #5 are very undesirable and will only happen if the operating system has absolutely no other choice. Options #1 and #2 mean that you throw something away that you will possibly be needing soon again. This negatively impacts performance. Option #3 means you move something that you (probably) don't need any time soon onto slow storage. That's fine because now something that you do need can use the fast RAM. By removing option #3, you have effectively limited the operating system to doing either #1 or #2. Reloading a page from disk is the same as reloading it from swap, except having to reload from swap is usually less likely (due to making proper paging decisions). In other words, by disabling swap you gain nothing, but you limit the operation system's number of useful options in dealing with a memory request. Which might not be , but very possibly may be a disadvantage (and will never be an advantage). [EDIT] The careful reader of the mmap manpage , specifically the description of MAP_NORESERVE , will notice another good reason why swap is somewhat of a necessity even on a system with "enough" physical memory: "When swap space is not reserved one might get SIGSEGV upon a write if no physical memory is available." -- Wait a moment, what does that mean? 
If you map a file, you can access the file's contents directly as if the file was somehow, by magic, in your program's address space. For read-only access, the operating system needs in principle no more than a single page of physical memory which it can repopulate with different data every time you access a different virtual page (for efficiency reasons, that's of course not what is done, but in principle you could access terabytes worth of data with a single page of physical memory). Now what if you also write to a file mapping? In this case, the operating system must have a physical page -- or swap space -- ready for every page written to. There's no other way to keep the data around until the dirty pages writeback process has done its work (which can be several seconds). For this reason, the OS reserves (but doesn't necessarily ever commit) swap space, so in case you are writing to a mapping while there happens to be no physical page unused (that's a quite possible, and normal condition), you're guaranteed that it will still work. Now what if there is no swap? It means that no swap can be reserved (duh!), and this means that as soon as there are no free physical pages left, and you're writing to a page, you are getting a pleasant surprise in the form of your process receiving a segmentation fault, and probably being killed. [/EDIT] However, the traditional recommendation of making swap twice the size of RAM is nonsensical. Although disk space is cheap, it does not make sense to assign that much swap. Wasting something that is cheap is still wasteful, and you absolutely don't want to be continually swapping in and out working sets several hundreds of megabytes (or larger) in size. There is no single "correct" swap size (there are as many "correct" sizes as there are users and opinions). I usually assign a fixed 512MiB, regardless of RAM size, which works very well for me. The reasoning behind that is that 512MiB is something that you can always afford nowadays, even on a small disk. On the other hand, adding several gigabytes of swap is none better. You are not going to use them, except if something is going seriously wrong. Even on a SSD, swap is orders of magnitude slower than RAM (due to bus bandwidth and latency), and while it is very acceptable to move something to swap that probably won't be needed again (i.e. you most likely won't be swapping it in again, so your pool of available pages is effectively enlarged for free), if you really need considerable amounts of swap (that is, you have an application that uses e.g. a 50GiB dataset), you're pretty much lost. Once your computer starts swapping in and out gigabytes worth of pages, everything goes to a crawl. So, for most people (including me) this is not an option, and having that much swap therefore makes no sense. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/190398",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
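As a practical companion to the swap answer above, here is a minimal sketch of creating the modest fixed-size swap it recommends; the 512 MiB size and the /swapfile path are illustrative choices, not requirements.
sudo fallocate -l 512M /swapfile        # reserve the space (dd if=/dev/zero of=/swapfile bs=1M count=512 also works where fallocate is unsupported)
sudo chmod 600 /swapfile                # swap files must not be world-readable
sudo mkswap /swapfile                   # write the swap signature
sudo swapon /swapfile                   # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it persistent across reboots
Afterwards, swapon --show (or free -h) confirms the new swap area is active.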
190,431 | I've tried to figure this out myself, but the myriad of options just baffles me. I want to use ideally either ffmpeg or mencoder (or something else, but those two I know I have working) to convert any incoming video to a fixed screen size. If the video is wider or too short for it, then centre crop the video. If it's then not the right size, the resize up or down to make it exactly the fixed screen size. The exact final thing I need is 720x480 in a XVid AVI with an MP3 audio track. I've found lots of pages showing how to resize to a maximum resolution, but I need the video to be exactly that resolution (with extra parts cropped off, no black bars). Can anyone tell me the command line to run - or at least get me some/most of the way there? If it needs to be multiple command lines (run X to get the resolution, do this calculation and then run Y with the output of that calculation) I can script that. | I'm no ffmpeg guru, but this should do the trick. First of all, you can get the size of input video like this: ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width in.mp4 With a reasonably recent ffmpeg, you can resize your video with these options: ffmpeg -i in.mp4 -vf scale=720:480 out.mp4 You can set the width or height to -1 in order to let ffmpeg resize the video keeping the aspect ratio. Actually, -2 is a better choice since the computed value should even. So you could type: ffmpeg -i in.mp4 -vf scale=720:-2 out.mp4 Once you get the video, it may be bigger than the expected 720x480 since you let ffmpeg compute the height, so you'll have to crop it. This can be done like this: ffmpeg -i in.mp4 -filter:v "crop=in_w:480" out.mp4 Finally, you could write a script like this (can easily be optimized, but I kept it simple for legibility): #!/bin/bashFILE="/tmp/test.mp4"TMP="/tmp/tmp.mp4"OUT="/tmp/out.mp4"OUT_WIDTH=720OUT_HEIGHT=480# Get the size of input video:eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width ${FILE})IN_WIDTH=${streams_stream_0_width}IN_HEIGHT=${streams_stream_0_height}# Get the difference between actual and desired sizeW_DIFF=$[ ${OUT_WIDTH} - ${IN_WIDTH} ]H_DIFF=$[ ${OUT_HEIGHT} - ${IN_HEIGHT} ]# Let's take the shorter side, so the video will be at least as big# as the desired size:CROP_SIDE="n"if [ ${W_DIFF} -lt ${H_DIFF} ] ; then SCALE="-2:${OUT_HEIGHT}" CROP_SIDE="w"else SCALE="${OUT_WIDTH}:-2" CROP_SIDE="h"fi# Then perform a first resizingffmpeg -i ${FILE} -vf scale=${SCALE} ${TMP}# Now get the temporary video sizeeval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width ${TMP})IN_WIDTH=${streams_stream_0_width}IN_HEIGHT=${streams_stream_0_height}# Calculate how much we should cropif [ "z${CROP_SIDE}" = "zh" ] ; then DIFF=$[ ${IN_HEIGHT} - ${OUT_HEIGHT} ] CROP="in_w:in_h-${DIFF}"elif [ "z${CROP_SIDE}" = "zw" ] ; then DIFF=$[ ${IN_WIDTH} - ${OUT_WIDTH} ] CROP="in_w-${DIFF}:in_h"fi# Then crop...ffmpeg -i ${TMP} -filter:v "crop=${CROP}" ${OUT} | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/190431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106755/"
]
} |
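As a supplement to the script above: newer ffmpeg builds can do the centre-crop-to-exact-size in a single pass with the force_original_aspect_ratio scale option, avoiding the intermediate file. This is only a sketch, and the XviD/MP3 encoder names assume your ffmpeg was built with libxvid and libmp3lame.
ffmpeg -i in.mp4 -vf "scale=720:480:force_original_aspect_ratio=increase,crop=720:480" -c:v libxvid -c:a libmp3lame out.avi
The scale filter enlarges to at least 720x480 while keeping the aspect ratio, and crop then takes the centred 720x480 region, matching the fixed output size the question asks for.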
190,452 | I have a big file that contains among others several lines of the following form: USet07-1 USet07-2 USet08-1 USet08-2 ... USet22-2 ... I want to remove the hyphen/dash - from these strings in vim . I search for the strings with: :/USet\d\d-\d but when I try to replace these with :%s/Uset\d\d-\d/USet\d\d\d I obviously get USetddd for all instances. But what I want is: USet071 USet072 USet081 USet082 ... USet222 ... How can this be done? Can I reuse parts of the matched string and use it in the substitution? | Yes you can, with capture groups . Basically, you wrap the parts of the pattern with \(...\) and reference that in the replacement part with \1 etc.: :%s/USet\(\d\d\)-\(\d\)/USet\1\2 Since you only want to remove a single part of the pattern, a shorter option is restricting the actual match (but still asserting that the stuff around is also there) via \zs (match start) and \ze (match end): :%s/USet\d\d\zs-\ze\d// These are all very basic things, and capture groups are common in many regular expression-based tools (like sed ). Learn how to look up commands and navigate the built-in :help ; it is comprehensive and offers many tips. You won't learn Vim as fast as other editors, but if you commit to continuous learning, it'll prove a very powerful and efficient editor. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
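If you would rather make the same change from the shell instead of inside Vim, a roughly equivalent sed command uses the same capture-group idea with POSIX character classes instead of \d; this is only a sketch against a hypothetical file.txt.
sed -i 's/\(USet[0-9][0-9]\)-\([0-9]\)/\1\2/' file.txt    # -i edits in place (GNU sed); drop it to preview first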
190,490 | I have a linux fedora21 client laptop behind a corporate firewall (which lets through http and https ports but not ssh 22) and I have a linux fedora21 server at home behind my own router.Browsing with https works when I specify my home server's public IP address (because I configured my home router) Is it possible to ssh (remote shell) to my home server over the http/s port? I saw a tool called corkscrew . would that help? opensshd and httpd run on the home server. What else would need configuration? | What is possible depends on what the firewall allows. If the firewall allows arbitrary traffic on port 443 Some firewalls take the simple way out and allow anything on port 443. If that's the case, the easiest way to reach your home server is to make it listen to SSH connections on port 443. If your machine is directly connected to the Internet, simply add Port 443 to /etc/ssh/sshd_config or /etc/sshd_config just below the line that says Port 22 . If your machine is behind a router/firewall that redirects incoming connections, make it redirect incoming connections to port 443 to your server's port 22 with something like iptables -t nat -I PREROUTING -p tcp -i wan0 --dport 443 -j DNAT --to-destination 10.1.2.3:22 where wan0 is the WAN interface on your router and 10.1.2.3 is your server's IP address on your home network. If you want to allow your home server to listen both to HTTPS connections and SSH connections on port 443, it's possible — SSH and HTTPS traffic can easily be distinguished (in SSH, the server talks first, whereas in HTTP and HTTPS, the client talks first). See http://blog.stalkr.net/2012/02/sshhttps-multiplexing-with-sshttp.html and http://wrouesnel.github.io/articles/Setting%20up%20sshttp/ for tutorials on how to set this up with sshttp , and also Have SSH on port 80 or 443 while webserver (nginx) is running on these ports If you have a web proxy that allows CONNECT tunnelling Some firewalls block all outgoing connections, but allow browsing the web via a proxy that allows the HTTP CONNECT method to effectively pierce a hole in the firewall. The CONNECT method may be restricted to certain ports, so you may need to combine this with listening on port 443 as above. To make SSH go via the proxy, you can use a tool like corkscrew . In your ~/.ssh/config , add a ProxyCommand line like the one below, if your web proxy is http://web-proxy.work.example.com:3128 : Host homeHostName mmm.dyndns.example.netProxyCommand corkscrew web-proxy.work.example.com 3128 %h %p then you can connect by just running ssh home . Wrapping SSH in HTTP(S) Some firewalls don't allow SSH traffic, even on port 443. To cope with these, you need to disguise or tunnel SSH into something that the firewall lets through. See http://dag.wiee.rs/howto/ssh-http-tunneling/ for a tutorial on doing this with proxytunnel . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/190490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106795/"
]
} |
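For the simplest case described above - the firewall lets arbitrary traffic through on port 443 and sshd on the home server already listens there - the client side only needs a matching entry in ~/.ssh/config. This is a sketch reusing the example host name from the answer:
Host home
    HostName mmm.dyndns.example.net
    Port 443
After that, ssh home connects over 443 without a proxy; the corkscrew ProxyCommand variant shown in the answer is only needed when the traffic has to pass through the corporate web proxy.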
190,492 | I've been trying to set up VSFTPD on Centos 6.6 to allow virtual users. Below is my vsftpd.conf , which is configured to allow only virtual users in /etc/vsftpd/vsftpd-virtual-user.db . listen=YESlocal_umask=002anonymous_enable=NOlocal_enable=YESvirtual_use_local_privs=YESwrite_enable=YESpam_service_name=vsftpd_virtualguest_enable=YESlocal_root=/var/siteschroot_local_user=YEShide_ids=YESconnect_from_port_20=YESpasv_enable=YESpasv_addr_resolve=YESpasv_address=10.175.9.23pasv_min_port=1024pasv_max_port=65535 I have also set up the vsftpd_virtual module in /etc/pam.d/vsftpd_virtual which contains the following: #%PAM-1.0auth required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-useraccount required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-usersession required pam_loginuid.so When trying to log in to FTP on localhost, I'm getting a 530 error from FTP and the following line in /var/log/secure : vsftpd: pam_userdb(vsftpd_virtual:auth): user_lookup: could not open database `/etc/vsftpd/vsftpd-virtual-user': Permission denied The file permissions for the database file seem fine, but I may be wrong: Access: (0777/-rwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) | What is possible depends on what the firewall allows. If the firewall allows arbitrary traffic on port 443 Some firewalls take the simple way out and allow anything on port 443. If that's the case, the easiest way to reach your home server is to make it listen to SSH connections on port 443. If your machine is directly connected to the Internet, simply add Port 443 to /etc/ssh/sshd_config or /etc/sshd_config just below the line that says Port 22 . If your machine is behind a router/firewall that redirects incoming connections, make it redirect incoming connections to port 443 to your server's port 22 with something like iptables -t nat -I PREROUTING -p tcp -i wan0 --dport 443 -j DNAT --to-destination 10.1.2.3:22 where wan0 is the WAN interface on your router and 10.1.2.3 is your server's IP address on your home network. If you want to allow your home server to listen both to HTTPS connections and SSH connections on port 443, it's possible — SSH and HTTPS traffic can easily be distinguished (in SSH, the server talks first, whereas in HTTP and HTTPS, the client talks first). See http://blog.stalkr.net/2012/02/sshhttps-multiplexing-with-sshttp.html and http://wrouesnel.github.io/articles/Setting%20up%20sshttp/ for tutorials on how to set this up with sshttp , and also Have SSH on port 80 or 443 while webserver (nginx) is running on these ports If you have a web proxy that allows CONNECT tunnelling Some firewalls block all outgoing connections, but allow browsing the web via a proxy that allows the HTTP CONNECT method to effectively pierce a hole in the firewall. The CONNECT method may be restricted to certain ports, so you may need to combine this with listening on port 443 as above. To make SSH go via the proxy, you can use a tool like corkscrew . In your ~/.ssh/config , add a ProxyCommand line like the one below, if your web proxy is http://web-proxy.work.example.com:3128 : Host homeHostName mmm.dyndns.example.netProxyCommand corkscrew web-proxy.work.example.com 3128 %h %p then you can connect by just running ssh home . Wrapping SSH in HTTP(S) Some firewalls don't allow SSH traffic, even on port 443. To cope with these, you need to disguise or tunnel SSH into something that the firewall lets through. See http://dag.wiee.rs/howto/ssh-http-tunneling/ for a tutorial on doing this with proxytunnel . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/190492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106802/"
]
} |
190,495 | How can I launch a bash command with multiple args (for example " sudo apt update ") from a python script? | @milne's answer works, but subprocess.call() gives you little feedback. I prefer to use subprocess.check_output() so you can analyse what was printed to stdout: import subprocess res = subprocess.check_output(["sudo", "apt", "update"]) for line in res.splitlines(): # process the output line by line check_output throws an error on on-zero exit of the invoked command Please note that this doesn't invoke bash or another shell if you don't specify the shell keyword argument to the function (the same is true for subprocess.call() , and you shouldn't if not necessary as it imposes a security hazard), it directly invokes the command. If you find yourself doing a lot of (different) command invocations from Python, you might want to look at plumbum . With that you can do the (IMO) more readable: from plumbum.cmd import sudo, apt, echo, cutres = sudo[apt["update"]]()chain = echo["hello"] | cut["-c", "2-"]chain() | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/190495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70379/"
]
} |
190,513 | I found scripts that say they check for internet connectivity. Some check the IP address if the interface is up BUT it does not check for internet connectivity. I found some that uses ping like this: if [ 'ping google.com -c 4 | grep time' != "" ]; then but sometimes this may be unreliable as ping itself may hang for some reason (e.g. waiting for some stuck IO). Any suggestions on the proper/reliable way to check for internet connectivity using scripts? Do I have to use some packages? It needs to be able to check periodically with cron for example, then, do something when the connection goes down like invoke ifup --force [interface] | I highly recommend against using ping to determine connectivity. There are too many network admins that disable ICMP (the protocol it uses) due to worries about ping flood attacks originating from their networks. Instead, I use a quick test of a reliable server on a port you can expect to be open: if nc -zw1 google.com 443; then echo "we have connectivity"fi This uses netcat ( nc ) in its port scan mode, a quick poke ( -z is zero-I/O mode [used for scanning] ) with a quick timeout ( -w 1 waits at most one second, though Apple OS X users may need to use -G 1 instead). It checks Google on port 443 (HTTPS). I've used HTTPS rather than HTTP as an effort to protect against captive portals and transparent proxies which can answer on port 80 (HTTP) for any host. This is less likely when using port 443 since there would be a certificate mismatch, but it does still happen. If you want to proof yourself against that, you'll need to validate the security on the connection: test=google.comif nc -zw1 $test 443 && echo |openssl s_client -connect $test:443 2>&1 |awk ' $1 == "SSL" && $2 == "handshake" { handshake = 1 } handshake && $1 == "Verification:" { ok = $2; exit } END { exit ok != "OK" }'then echo "we have connectivity"fi This checks for a connection (rather than waiting for openssl to time out) and then makes the SSL handshake, keying on the verification phase. It silently exits ("true") if the verification was "OK" or else exits with an error ("false"), then we report the finding. The awk code analyzes the output of openssl line by line: If the first word of the line is "SSL" and the second is "Verification", set handshake to 1 If handshake is set and the first word of the line is "Verification", then save the second word (the verification status) in ok and stop reading Exit with a value of 0 (true) if the verification status was OK , or else exit with 1 (false). We use != here because shell exit codes are reversed (An awk oddity: Running exit while reading lines will simply stop reading lines and enter the END condition, from which you can truly exit .) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95782/"
]
} |
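Tying the answer back to the cron use case in the question, a minimal wrapper could re-raise the link when the check fails. This is a sketch only; the interface name eth0 and the check host are assumptions to adjust for your own setup.
#!/bin/bash
# exit quietly while we still have connectivity
nc -zw1 google.com 443 && exit 0
# otherwise record the event and force the interface back up
logger "connectivity check failed, forcing eth0 up"
ifup --force eth0
Run it from cron every few minutes; because it exits early on success, it stays silent and cheap when the connection is fine.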
190,537 | I tried it with SCP, but it says "Negative file size". >scp matlab.iso xxx@xxx:/matlab.isomatlab.iso: Negative file size Also tried using SFTP, worked fine until 2 GB of the file had transferred, then stopped: sftp> put matlab.isoUploading matlab.iso to /home/x/matlab.isomatlab.iso -298% 2021MB -16651.-8KB/s 00:5do_upload: offset < 0 Any idea what could be wrong? Don't SCP and SFTP support files that are larger than 2 GB? If so, then how can I transfer bigger files over SSH? The destination file system is ext4. The Linux distribution is CentOS 6.5. The filesystem currently has (accessible) large files on it (up to 100 GB). | I'm not sure about the file size limits of SCP and SFTP, but you might try working around the problem with split: split -b 1G matlab.iso This will create 1 GiB files which, by default, are named as xaa, xab, xac, ... . You could then use scp to transfer the files: scp xa* xxx@xxx: Then on the remote system recreate the originial file with cat: cat xa* > matlab.iso Of course, the penalties for this workaround are the time taken in the split and cat operations, as well as the extra disk space needed on the local and remote systems. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106828/"
]
} |
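A related option worth noting alongside the split/cat workaround above: rsync over SSH copes with large files and can resume an interrupted transfer, which avoids re-sending the first 2 GB after a failure. A sketch, assuming rsync is installed on both ends and using the same placeholder host as the question:
rsync --partial --progress matlab.iso xxx@xxx:/    # -P is shorthand for --partial --progress
sha256sum matlab.iso                               # compare checksums on both sides afterwards to verify integrity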
190,543 | Apologies if this is a duplicate post. I did try searching to see if someone already asked/answered this but didn't find anything. What does || mean in bash? For example, in researching steps to troubleshoot a driver issue, I came across this command: modprobe -q vmxnet3 && echo "vmxnet3 installed" || echo "vmxnet3 not installed" I get that the first part is querying modprobe and returning a response of "vmxnet3 installed" if it receives a successful return code... but... what's the last part doing? | || is the OR operator. It executes the command on the right only if the command on the left returned an error. See Confusing use of && and || operators . I think your example isn't correct bash though. The part to the right of || is missing the echo command. (This was fixed.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
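A quick illustration of the short-circuit behaviour described above, runnable in any bash shell; the echoed messages are just placeholders.
true  || echo "not printed - the left side succeeded"
false || echo "printed - the left side failed"
modprobe -q vmxnet3 && echo "vmxnet3 loaded" || echo "vmxnet3 not loaded"
The last line mirrors the command from the question; note the usual caveat that a && b || c runs c when either a or b fails, so it is not a strict if/else.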
190,571 | I have a script that executes three functions: A && B && C . Function B needs to be run as a super-user, while A and C don't. I have several solutions but neither of these are satisfying: sudo the entire script: sudo 'A && B && C' That seems like a bad idea to run A and C as super-user if it's notneeded make the script interactive: A && sudo B && C I might have to type-in my password, but I want my script to benon-interactive, as each function can take some time, and I don't wantthe script to wait for me. Well, that's also why it's a script in thefirst place, so I don't have to watch it run. The stupid solution: sudo : && A && sudo -n B && C First it seems stupid to run a no-op sudo first, and also I must crossmy finger that A is not going to take more than $sudo_timeout . Hypothetical solution (I wish you tell me it exists): sudo --store-cred 'A && sudo --use-cred-from-parent-sudo B && C' That would prompt for my password at the beginning, and then use thatcredentials only when needed. What's your opinion on all this? I'd be very surprised that there is nosolution to that problem, as I think it's a pretty common problem (whatabout make all && sudo make install ) | I think the best thing that you can do is launch the script with sudo and then launch the processes you want to run as a normal user explicitly with su user or sudo -u user : #!/usr/bin/env bash## Detect the user who launched the scriptusr=$(env | grep SUDO_USER | cut -d= -f 2)## Exit if the script was not launched by root or through sudoif [ -z $usr ] && [ $UID -ne 0 ]then echo "The script needs to run as root" && exit 1fi## Run the job(s) that don't need rootsudo -u $usr commandA## Run the job that needs to be run as rootcommandB | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45438/"
]
} |
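Another common pattern for the original problem - closer to the asker's "hypothetical solution" - is to validate sudo once up front and keep its timestamp fresh in the background, so that only B runs with elevated rights. A sketch; it assumes the sudoers timestamp_timeout has not been set to 0, and A, B and C stand for the asker's commands.
#!/bin/bash
sudo -v                                   # ask for the password once, up front
# refresh the cached credentials for as long as this script is running
( while kill -0 "$$" 2>/dev/null; do sudo -n true; sleep 50; done ) &
A && sudo -n B && C
Because sudo -n never prompts, the script stays non-interactive after the single initial password entry.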
190,579 | I have tried searching for this but there seems to be no command that can output a list of packages (ideally in Ubuntu) that I have installed, not including any dependencies. | comm -23 <(apt-mark showmanual | sort -u) \ <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u) This gets the correct list of user-installed packages, to a better approximation than the answer from @Stephen Kitt. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81542/"
]
} |
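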
190,597 | I just spent the last two hours running a dd command (or picture any similar "difficult to re-do" scenario) from a live CD without a GUI; all I have is my trusty "multi-window" ( CTRL+ALT+F# ) Bash terminal. Alas, during the command dd threw out several nasty error messages and a bit more information that I would like to keep. I have a USB drive plugged in to which I can write data, but how do I get the previous output saved as a text file after the command has already been run? If this had been a terminal emulator inside a nice GUI, I would have simply used my mouse to select the text, copy it, and paste it into a document. And had I known the command would have produced errors, I would have piped it out to a file to begin with, but alas, the additional output came as a surprise. How do I save text output from my previous command to a file without re-running the command? Is this even possible? | A Linux kernel should store an on-screen log for your vts in the corresponding /dev/vcsa[ttynum] device. It is why the following works: echo hey >/dev/tty2; dd bs=10 count=1 </dev/vcs2 ...which prints... hey The corresponding /dev/vcsa[ttynum] device will store an encoded version of the formatted text on-screen, whereas the /dev/vcs[ttynum] will be a plain dump. The vcsa[ttynum] devices will encode a pair of bytes which describe each on-screen char and its attributes, as well as a string at the head of each logical page that indicates the referenced tty's lines, columns count. As @kasperd points out, I had it wrong before by assuming the \a BEL was encoded between every character, when in fact: The default color combination happens to coincide with the bell character. For your purposes using the /dev/vcs[ttynum] is probably easiest. Here's a look at the differences: echo hey >/dev/tty2; dd bs=10 count=1 </dev/vcs2 |sed -n l ...prints... hey $ ...and... echo hey >/dev/tty2; dd bs=10 count=1 </dev/vcsa2 |sed -n l ...prints... 0\200\000\004h\ae\ay\a$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
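Putting the answer to work for the original live-CD scenario (saving the current console contents onto the USB stick), a sketch in which /dev/sdb1 and console 1 are assumptions to adjust for your own stick and terminal:
mount /dev/sdb1 /mnt                      # mount the USB stick
cat /dev/vcs1 > /mnt/console-dump.txt     # plain-text dump of what is currently shown on tty1
umount /mnt
Keep in mind that /dev/vcs1 only holds what is currently visible on that screen, not output that has already scrolled off.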
190,719 | I'm using nix in "single-user mode" in a system where I'm not the root (see below for a description of my nix setup). I wanted to quickly run one of my binaries which is dynamically linked with a library which is absent in the system. So, I've installed the library with nix : $ nix-env -qa 'gmp'gmp-4.3.2gmp-5.1.3$ nix-env -i gmp-5.1.3 But the library is still not found by the linker: $ ldd -r ../valencies ../valencies: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ../valencies)../valencies: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ../valencies) linux-vdso.so.1 => (0x00007fffbbf28000) /usr/local/lib/libsnoopy.so (0x00007f4dcfbdc000) libgmp.so.10 => not found libffi.so.5 => /usr/lib64/libffi.so.5 (0x00007f4dcf9cc000) libm.so.6 => /lib64/libm.so.6 (0x00007f4dcf748000) librt.so.1 => /lib64/librt.so.1 (0x00007f4dcf540000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f4dcf33c000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4dcf11f000) libc.so.6 => /lib64/libc.so.6 (0x00007f4dced8b000) /lib64/ld-linux-x86-64.so.2 (0x00007f4dcfde7000)undefined symbol: __gmpz_gcd (../valencies)undefined symbol: __gmpn_cmp (../valencies)undefined symbol: __gmpz_mul (../valencies)undefined symbol: __gmpz_fdiv_r (../valencies)undefined symbol: __gmpz_fdiv_q_2exp (../valencies)undefined symbol: __gmpz_com (../valencies)undefined symbol: __gmpn_gcd_1 (../valencies)undefined symbol: __gmpz_sub (../valencies)symbol memcpy, version GLIBC_2.14 not defined in file libc.so.6 with link time reference (../valencies)undefined symbol: __gmpz_fdiv_q (../valencies)undefined symbol: __gmpz_fdiv_qr (../valencies)undefined symbol: __gmpz_add (../valencies)undefined symbol: __gmpz_init (../valencies)undefined symbol: __gmpz_ior (../valencies)undefined symbol: __gmpz_mul_2exp (../valencies)undefined symbol: __gmpz_xor (../valencies)undefined symbol: __gmpz_and (../valencies)symbol __fdelt_chk, version GLIBC_2.15 not defined in file libc.so.6 with link time reference (../valencies)undefined symbol: __gmpz_tdiv_qr (../valencies)undefined symbol: __gmp_set_memory_functions (../valencies)undefined symbol: __gmpz_tdiv_q (../valencies)undefined symbol: __gmpz_divexact (../valencies)undefined symbol: __gmpz_tdiv_r (../valencies)$ Look, it is present in the filesystem: $ find / -name 'libgmp.so.10' 2>/dev/null /nix/store/mnmzq0qbrvw6dv1k2vj3cwz9ffdh05zr-user-environment/lib/libgmp.so.10/nix/store/fnww2w81hv5v3dl9gsb7p4llb7z7krzd-gmp-5.1.3/lib/libgmp.so.10$ What do I do so that libraries installed by nix are "visible"? Probably, the standard user-installation script of nix modifies .bash_profile to add its bin/ into PATH , but does not do something analogous for libraries. My nix setup: The only thing I have asked the root to do for me was: mkdir -m 0755 /nix && chown ivan /nix , otherwise I've followed the standard simple nix installation procedure. So now I can use custom programs from nix packages. I couldn't do this nicely without any help from the root at all, i.e., without /nix/ , because /nix/ was not available for me; I could of course use another directory, but then the pre-built binary packages wouldn't be valid and all packages would have to be rebuilt, according to the nix documentation. In my case, it was simpler to ask for /nix/ for me. Another thing I've done is adding to ~/.bash_profile : export NIX_CONF_DIR=/nix/etc/nix so that I can edit nix.conf . (It was supposed to be in the root-controlled /etc/ otherwise. 
I did it because I wanted to build-max-jobs and build-cores settings in it.) | TL;DR The working solution is using patchelf (if you have to deal with non-matching glibc versions: in the host system and the one nix libs have been linked with), see the second half of my story. Trying the usual approach Trying to use LD_LIBRARY_PATH Well, I have set up an environment variable for this in ~/.bash_profile : NIX_LINK=/home/ivan/.nix-profileexport LD_LIBRARY_PATH="$NIX_LINK"/lib but that's not all! Now there are problems with linking with different versions of libc : $ ldd -r ../valencies ../valencies: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ../valencies)../valencies: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ../valencies)../valencies: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/ivan/.nix-profile/lib/libgmp.so.10) linux-vdso.so.1 => (0x00007fff365ff000) /usr/local/lib/libsnoopy.so (0x00007f56c72e6000) libgmp.so.10 => /home/ivan/.nix-profile/lib/libgmp.so.10 (0x00007f56c7063000) libffi.so.5 => /usr/lib64/libffi.so.5 (0x00007f56c6e54000) libm.so.6 => /lib64/libm.so.6 (0x00007f56c6bd0000) librt.so.1 => /lib64/librt.so.1 (0x00007f56c69c7000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f56c67c3000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f56c65a6000) libc.so.6 => /lib64/libc.so.6 (0x00007f56c6211000) /lib64/ld-linux-x86-64.so.2 (0x00007f56c74f1000)symbol memcpy, version GLIBC_2.14 not defined in file libc.so.6 with link time reference (/home/ivan/.nix-profile/lib/libgmp.so.10)symbol memcpy, version GLIBC_2.14 not defined in file libc.so.6 with link time reference (../valencies)symbol __fdelt_chk, version GLIBC_2.15 not defined in file libc.so.6 with link time reference (../valencies)$ Sorting out 2 versions of glibc The most surprizing error here is: symbol memcpy, version GLIBC_2.14 not defined in file libc.so.6 with link time reference (/home/ivan/.nix-profile/lib/libgmp.so.10) because nix must have installed the version of glibc which is used by its libgmp ! And indeed, the glibc from nix is there: $ ldd -r /home/ivan/.nix-profile/lib/libgmp.so.10 linux-vdso.so.1 => (0x00007fff0f1ff000) /usr/local/lib/libsnoopy.so (0x00007f06e9919000) libc.so.6 => /nix/store/93zfs0zzndi7pkjkjxawlafdj8m90kg5-glibc-2.20/lib/libc.so.6 (0x00007f06e957c000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f06e9371000) /lib64/ld-linux-x86-64.so.2 (0x00007f06e9da7000)symbol _dl_find_dso_for_object, version GLIBC_PRIVATE not defined in file ld-linux-x86-64.so.2 with link time reference (/nix/store/93zfs0zzndi7pkjkjxawlafdj8m90kg5-glibc-2.20/lib/libc.so.6)/home/ivan/.nix-profile/lib/libgmp.so.10: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument$ Probably, glibc was not available to the user, so when I ran my binary, the system's glibc was loaded first. Proof: $ ls ~/.nix-profile/lib/*libc*ls: cannot access /home/ivan/.nix-profile/lib/*libc*: No such file or directory$ Ok, we can try to make glibc visible to the user, too: $ nix-env -i glibc Then everything is bad: $ ldd -r ../valencies /bin/bash: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument$ /bin/echo ok/bin/echo: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument$ So, it seems to be not an easy job if you want to load libraries from nix when running your own binaries... 
For now, I'm commenting out export LD_LIBRARY_PATH="$NIX_LINK"/lib and doing in the shell session: $ unset LD_LIBRARY_PATH$ export LD_LIBRARY_PATH Need to think more. (Read about __vdso_time: invalid mode for dlopen() : having another glibc in LD_LIBRARY_PATH is expected to crash, because your ld-linux-x86-64.so.2 will not match your libc.so.6 . Having multiple versions of glibc on a single system is possible, but slightly tricky, as explained in this answer.) The needed solution: patchelf So, the path to the dynamic linker is hard-coded in the binary. And the dynamic linker being used is from the system (from the host glibc), not from nix. And because the dynamic linker doesn't match the glibc which we want and need to use, it doesn't work. A simple and working solution is patchelf . patchelf --set-interpreter /home/ivan/.nix-profile/lib/ld-linux-x86-64.so.2 ../valencies After that, it works. You still need to fiddle with LD_LIBRARY_PATH though. $ LD_LIBRARY_PATH=/home/ivan/.nix-profile/lib:/lib64/:/usr/lib64/ ../valencies If--like in my imperfect case--some of the libraries are taken from nix, but some are taken from the host system (because I haven't installed them with nix-env -i ), you have to specify both the path to the nix libs, and to your host system libs in LD_LIBRARY_PATH (it completely overrides the default search path). additional step: patchelf for the library search path (from the patchelf page) Likewise, you can change the RPATH , the linker search path embedded into executables and dynamic libraries: patchelf --set-rpath /opt/my-libs/lib:/foo/lib program This causes the dynamic linker to search in /opt/my-libs/lib and /foo/lib for the shared libraries needed by program. Of course, you could also set the environment variable LD_LIBRARY_PATH , but that’s often inconvenient as it requires a wrapper script to set up the environment. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4319/"
]
} |
190,729 | I am trying to make a tar ball and it's throwing this error: persis1@takwa228-DEV $ tar -pcrvzf ALPHA.tar.gz https-ALPHA tar: You may not specify more than one `-Acdtrux' or `--test-label' option Try `tar --help' or `tar --usage' for more information. | As the message says, you can't combine c and r ; the former means "create an archive", the latter "append to an archive", so they can't be used simultaneously. You can simply do tar cpvzf ALPHA.tar.gz https-ALPHA | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106516/"
]
} |
190,794 | I installed Python 2.7.9 on Ubuntu 14.04 by compiling its source , by .configure , make , and make altinstall . make altinstall is because I don't want to overwrite the default Python 2.7.6. My self installed 2.7.9 is in /usr/local/bin/python2.7 and many other files in other directories under /usr/local . From README in the source installation package: On Unix and Mac systems if you intend to install multiple versions of Python using the same installation prefix (--prefix argument to the configure script) you must take care that your primary python executable is not overwritten by the installation of a different version. All files and directories installed using "make altinstall" contain the major and minor version and can thus live side-by-side. "make install" also creates ${prefix}/bin/python which refers to ${prefix}/bin/pythonX.Y. If you intend to install multiple versions using the same prefix you must decide which version (if any) is your "primary" version. Install that version using "make install". Install all other versions using "make altinstall". For example, if you want to install Python 2.5, 2.6 and 3.0 with 2.6 being the primary version, you would execute "make install" in your 2.6 build directory and "make altinstall" in the others. Now I want to uninstall my self installed 2.7.9. Fortunately I still have the source code, but unfortunately, the Makefile doesn't have an uninstall section: $ sudo make uninstall make: *** No rule to make target `uninstall'. Stop. Then I tried another way: first create a deb from the source and compilation, install the deb (hopefully overwriting the installed files from make altinstall ), and then uninstall the deb. But when I create the deb file by checkinstall , I am not sure if and how I should do differently for make altinstall than for make install . What I tried is: $ checkinstall altinstall ... Installing with altinstall... ========================= Installation results =========================== /var/tmp/tmp.4ZzIiwqBNL/installscript.sh: 4: /var/tmp/tmp.4ZzIiwqBNL/installscript.sh: altinstall: not found ... I wonder how I can create a deb so that installing the deb will duplicate the installation process of make altinstall ? Or what is your way of uninstalling my python 2.7.9? Note: the source package in the first link also has setup.py , install-sh besides README . | The following commands will remove your make altinstall -ed python: rm -f /usr/local/bin/python2.7; rm -f /usr/local/bin/pip2.7; rm -f /usr/local/bin/pydoc; rm -rf /usr/local/bin/include/python2.7; rm -f /usr/local/lib/libpython2.7.a; rm -rf /usr/local/lib/python2.7 You might also have to do rm -f /usr/local/share/man/python2.7.1; rm -rf /usr/local/lib/pkgconfig; rm -f /usr/local/bin/idle; rm -f /usr/local/bin/easy_install-2.7 Although make altinstall has served me well if the "system python" has a different major.minor number from the one you install, it doesn't work that well if only the micro number (the third position) differs. That number is excluded from the installed binary, and you end up with two versions pythonX.Y. This was always a problem but once distributions started shipping with system utilities based on 2.7.X this problem has been more severe as 2.7 is supposed to be the last of the Python2 series. IMO the best approach to solve this problem is to prevent it from becoming one: configure python to install in a directory not used by any other python. On my system they go under /opt/python/X.Y.Z .
To use any of the Pythons installed there you use virtualenv to make a new environment: virtualenv --python=/opt/python/2.7.9/bin/python2.7 venv; source venv/bin/activate or use virtualenvwrapper . I have some aliases for the latest versions in the series I work with. If you are using tox for testing against multiple versions (you should) the following alias will help it find the various versions: alias tox='PATH=/opt/python/2.7.9/bin:/opt/python/2.6.9/bin:/opt/python/3.4.3/bin:/opt/python/3.3.6/bin:/opt/python/3.5-dev/bin:$PATH tox' (these are currently the latest versions, I use a slightly different setup by maintaining links from /opt/python/2.7 to the latest /opt/python/2.7.9 , and for the other minor numbers as well, within the process for downloading, building and installing a new python version) These installs are never used directly. They are always used as the basis for virtualenv environments only, hence I don't care that they are not in my normal PATH. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
190,798 | I want to locate all folders on my server that end with 'wordpress-seo'. I tried the find command but it takes too long. sudo find /home/w/s -type d -name 'wordpress-seo' Now I am trying the locate command but it returns all paths that have wordpress-seo. a/wp-content/plugins/wordpress-seo a/wp-content/plugins/wordpress-seo/languages/ ...... I want to exclude wordpress-seo/* files and folders. I just want folder names. i.e. a/wp-content/plugins/wordpress-seo b/wp-content/plugins/wordpress-seo Tried regex without any luck. locate -r '/\w+wordpress\-seo/b' OR locate '/*/wordpress-seo/' Any Help?? | Try this: locate -r "wordpress-seo$" Although I should mention that find offers a huge variety of options over locate . You have found locate faster because it just reads from a database /var/lib/mlocate/mlocate.db while find searches through files every time, whenever you give it something to search. locate 's database is updated by cron on a daily basis; you can also update the database manually anytime by: sudo updatedb This will make the files created after the daily cron update available in the locate database, so you will find those via locate . Also check the configuration file /etc/updatedb.conf to see which filesystems and paths are being excluded. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106996/"
]
} |
190,814 | I can't figure this out. I need to look at every line in a file and check whether it matches a word that is given in a variable. I started with the command read , but I don't know what I am supposed to use after that. I tried grep , but I probably used it wrongly. | Here's a quickie for you; simply put, what we're doing is: Line 1: While reading the file into the variable line Line 2: If the line matches the regex "bird", echo that line. Do whatever actions you need here, in this if statement. Line 3: End of the while loop, which redirects in the file foo.text #!/bin/bash while read line; do if [[ $line =~ bird ]]; then echo $line; fi; done <foo.text Note that "bird" is a regex. So you could replace it with, for example, bird.*word to match the same line with a regular expression. Try it with a file like so, called foo.text, with the contents:
my dog is brown
her cat is white
the bird is the word | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106676/"
]
} |
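Since the question specifies that the word arrives in a variable, here is the same loop with the pattern parameterised; a sketch in which the file name and the variable name are placeholders, with the usual IFS and -r safeguards added to read.
#!/bin/bash
word="bird"
while IFS= read -r line; do
    if [[ $line == *"$word"* ]]; then
        echo "$line"        # act on the matching line here
    fi
done < foo.text
The == *"$word"* form matches the variable as a fixed string; use [[ $line =~ $word ]] instead if the variable should be treated as a regular expression.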
190,818 | While experimenting with output redirection and process substitution I stumbled upon the following command and its resulting output: me@elem:~$ echo foo > >(cat); echo bar bar me@elem:~$ foo (Yes, that empty newline at the end is intentional.) So bash echo's bar, prints my usual prompt, echo's foo, echo's a newline, and leaves my cursor there. If I hit enter again, it'll print my prompt on a new line and leave the cursor following it (as expected when someone hits enter on an empty command line). I was expecting it to write foo to a file descriptor, cat reads it and echo's foo, the second echo echo's bar, and then back to the command prompt. But that's clearly not the case. Could someone please explain what's going on? | You may see foo displayed before bar , after bar , or even after the prompt, depending on timing. Add a little delay to get consistent timing: $ echo foo > >(sleep 1; cat); echo bar; sleep 2 barfoo$ bar appears immediately, then foo after one second, then the next prompt after another second. What's happening is that bash executes process substitutions in the background. The main bash process starts a subshell to execute sleep 1; cat , and sets up a pipe to it. The main bash process executes echo foo . Since this doesn't fill up the pipe's buffer, the echo command terminates without blocking. The main bash process executes echo bar . The main bash process launches the command sleep 2 . Meanwhile, the subshell is launching the command sleep 1 . After about 1 second, sleep 1 returns in the subprocess. The subprocess proceeds to execute cat . cat copies its input to its output (which is displayed on the screen) and returns. The subshell has finished its job, it exits. After another second, sleep 2 returns. The main shell process finishes executing and you get to see the next prompt. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106343/"
]
} |
190,819 | From the command line, I want to download a file from a FTP server. Once the download completes, I want the file to be deleted on the server. Is there any way to do this? Originally I considered wget, but there is no particular reason why to use that specifically. Any tool would be fine as long as it runs on Linux. | with curl : curl ftp://example.com/ -X 'DELE myfile.zip' --user username:password | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107013/"
]
} |
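To cover the full "download, then delete only if the download succeeded" flow from the question, two chained curl calls work; a sketch using the same placeholder server, credentials and file name as the answer:
curl --user username:password -O ftp://example.com/myfile.zip && curl --user username:password ftp://example.com/ -X 'DELE myfile.zip'
The -O saves the file under its remote name, and the && ensures the DELE command only runs when the download exited successfully.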
190,860 | Suppose I have created two folders in /tmp called parent and child . child contains a file called test-child.txt and parent contains a file called test-parent.txt . Now let's go inside the parent and create a symbolic link to the child . Next, go inside the child and try to copy the test-parent.txt from the parent . The bash completion works but the actual file copy fails -- cd /tmppwd/tmpmkdir parentmkdir childtouch child/test-child.txtls child/test-child.txtcd parentln -sf ../child .touch test-parent.txtcd childcp ../test-parent.txt .cp: cannot stat ‘../test-parent.txt’: No such file or directory why ?? Moreover, when I am inside child and if I say -- pwd/tmp/parent/child | Shells keep track of symbolic links as a convenience for users. This has the nice effect that cd foo && cd .. always goes back to the original directory, even when foo is a symbolic link to a directory. It has two kinds of downsides: the main one is that other programs don't behave this way; additionally, symbolic directory tracking introduces problems of its own (what happens when the symbolic link changes? what happens if the process doesn't have the permission to read the symlink? etc.). Shells can only do this because they keep track of all directory changes, so they remember how you got there. When a new process starts, it doesn't get this historical information. Under the hood, it finds where it is by moving upwards from the current directory, following .. links until it hits the root¹. If you reached a directory through symbolic links, you can print out a symlink-less path with the pwd builtin, by calling pwd -P . If you call another program from the shell, don't pass it a path that contains .. after symlink components, as the program would interpret it differently. Instead, eliminate the .. components by calling pwd -P : cp "$(cd .. && pwd -P && echo /)test-parent.txt" . If you want to forget about symlinks used in past cd commands in a shell session, you can run cd "$(pwd -P && echo /.)" This changes to what is already the current directory, so it doesn't effectively change the shell process's current directory, but it changes the path that the shell has tracked for the current directory, making it symlink-less. ¹ This is how the getcwd call traditionally operates. Some kernels do keep track of the current directory, but don't track symbolic links, for backward compatibility if for no other reason (but also because of the subtle edge cases with symbolic links). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28526/"
]
} |
190,865 | This is an example of my available GRUB menu boot options : 1 menu label ^ 1) Xubuntu 14.04.02 LTS 2 menu label ^ 2) Windows 7, x86 & AMD64 3 menu label ^ 3) Windows 8, AMD64 4 menu label ^ 4) Hirens Boot CD 8.8, x86 5 menu label ^ 5) Kali 1.0.7 Live, amd64 In order to repair/reconfigure/rescue servers or machines without keyboard&screen, it would be very useful to be capable of booting remote machines to PXE Network without needing to physically go near the computer to select the PXE option in the BIOS. Adding a PXE Network Boot option to GRUB would be fantastic. Something like: 1 menu label ^ 1) Xubuntu 14.04.02 LTS - Sopalajo Mod, amd64 2 menu label ^ 2) Windows 7, x86 & AMD64 3 menu label ^ 3) Windows 8, AMD64 4 menu label ^ 4) Hirens Boot CD 8.8, x86 5 menu label ^ 5) Kali 1.0.7 Live, amd64 6 menu label ^ 6) PXE Network server on this LAN I am patching the problem for now by selecting PXE as the first boot option in the BIOS, but I don't always want PXE as the first boot option. As long as GRUB includes some really useful programs like grub-reboot or, at least, accepts remote reconfiguration, adding PXE to GRUB could be a perfect solution. Is it possible to add a PXE option to the GRUB boot menu? | Yes, you can add a (i)PXE Launcher to Grub. For dpkg -based systems like Debian&derivatives:Only apt-get install ipxe is required I would expect other distros to have integrated it as well fairly comfortably. ==> A "PXE Boot" menu entry will exist on next reboot. In case you want to know inner-working-details: The post-install hook scripts automatically adds an iPXE entry to the grub configuration, using the "template" file /etc/grub.d/20_ipxe . You end up with an entry like the following in /boot/grub/grub.cfg menuentry 'Linux NetBoot Environment' { set root='(hd0,1)' <More, less important options> linux16 /boot/ipxe.lkrn} This just means, that instead of a (linux-)kernel, grub gives full computer control to another "simple" program, in this case ipxe.lkrn . MemTestx86 is launched in basically the same way. The PXE Stack is software normally stored somewhere on the main-board. Just in this case we load it from somewhere the drivers from GRUB can access. Example of a usage scenario: You will want to install a basic GRUB on the drive, having the PXE entry first, and a fall-back on Position 2 to local chain-boot from (say) Partition 1. The configuration iPXE would use will then depend on the files residing on your boot-configuration-server. There you will make the default, first menuchoice "Boot from local Partition 1", then more choices (Boot-AV, SuperGrub, Debian NetInst...). ==> Your Users normally don't touch anything until they see the Graphical Login Prompt from the local Installation. Boot-Sequence: GRUB - iPXE - OS-in-Partition-1 (Fallback to OS-In-Partition-1, if PXE unsuccessful) ==> Physically present at the PC, you could choose other Boot-Options. ==> Not physically present at the PC, you can change the server-side PXE configuration to "one-off" boot another choice than the default. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
190,866 | I have a dual boot system with Mint 17.1 and Centos 6.6 I want to access a file in my CentOS user's home directory from Mint. (I cannot boot CentOS right now.) What is a clean/standard method for permissibly accessing files in a non-bootable foreign Linux partition? I can mount and access the CentOS partition The partition is encrypted; Mint allows me to enter the LUKS password through the user session, so that should not be a problem. A Mint/Mate specific option is not preferable, but it would be al-right.) | Yes, you can add a (i)PXE Launcher to Grub. For dpkg -based systems like Debian&derivatives:Only apt-get install ipxe is required I would expect other distros to have integrated it as well fairly comfortably. ==> A "PXE Boot" menu entry will exist on next reboot. In case you want to know inner-working-details: The post-install hook scripts automatically adds an iPXE entry to the grub configuration, using the "template" file /etc/grub.d/20_ipxe . You end up with an entry like the following in /boot/grub/grub.cfg menuentry 'Linux NetBoot Environment' { set root='(hd0,1)' <More, less important options> linux16 /boot/ipxe.lkrn} This just means, that instead of a (linux-)kernel, grub gives full computer control to another "simple" program, in this case ipxe.lkrn . MemTestx86 is launched in basically the same way. The PXE Stack is software normally stored somewhere on the main-board. Just in this case we load it from somewhere the drivers from GRUB can access. Example of a usage scenario: You will want to install a basic GRUB on the drive, having the PXE entry first, and a fall-back on Position 2 to local chain-boot from (say) Partition 1. The configuration iPXE would use will then depend on the files residing on your boot-configuration-server. There you will make the default, first menuchoice "Boot from local Partition 1", then more choices (Boot-AV, SuperGrub, Debian NetInst...). ==> Your Users normally don't touch anything until they see the Graphical Login Prompt from the local Installation. Boot-Sequence: GRUB - iPXE - OS-in-Partition-1 (Fallback to OS-In-Partition-1, if PXE unsuccessful) ==> Physically present at the PC, you could choose other Boot-Options. ==> Not physically present at the PC, you can change the server-side PXE configuration to "one-off" boot another choice than the default. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104494/"
]
} |
190,907 | I've just cat /var/log/auth.log log and see, that there are many | grep "Failed password for" records. However, there are two possible record types - for valid / invalid user. It complicates my attempts to | cut them. I would like to see create a list (text file) with IP addresses of possible attackers and number of attempts for each IP address. Is there any easy way to create it? Also, regarding only ssh : What all records of /var/log/auth.log should I consider when making list of possible attackers? Example of my 'auth.log' with hidden numbers: cat /var/log/auth.log | grep "Failed password for" | sed 's/[0-9]/1/g' | sort -u | tail Result: Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user ucpss from 111.11.111.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user vijay from 111.111.11.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user webalizer from 111.111.11.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user xapolicymgr from 111.111.11.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user yarn from 111.111.11.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user zookeeper from 111.111.11.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user zt from 111.11.111.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for mysql from 111.111.11.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for root from 111.11.111.111 port 11111 ssh1Mar 11 11:11:11 vm11111 sshd[111]: Failed password for root from 111.111.111.1 port 11111 ssh1 | You could use something like this: grep "Failed password for" /var/log/auth.log | grep -Po "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" \| sort | uniq -c It greps for the string Failed password for and extracts ( -o ) the ip address. It is sorted, and uniq counts the number of occurences. The output would then look like this (with your example as input file): 1 111.111.111.1 3 111.11.111.111 6 111.111.11.111 The last one in the output has tried 6 times. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/190907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13428/"
]
} |
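To end up with the text file the question asks for, sorted with the busiest source addresses first, the same pipeline can be extended slightly; attackers.txt is an arbitrary output name.
grep "Failed password for" /var/log/auth.log | grep -Po '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort | uniq -c | sort -rn > attackers.txt
The extra sort -rn orders the counts numerically in descending order, so the most persistent IP addresses appear at the top of the file.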
190,934 | I am trying to use a for loop iterator variable in a vim search pattern to determine, for a range of words, how many times they occur in a file. What I have so far is: for i in range(1,40) | %s/SiTg//gn | endfor I need the iterator variable i in the search pattern %s/S\iTg//gn to be bound by the for loop. How can I achieve this in vim? | Vimscript is evaluated exactly like the Ex commands typed in the : command-line. There were no variables in ex , so there's no way to specify them. When typing a command interactively, you'd probably use <C-R>= to insert variable contents: :sleep <C-R>=timetowait<CR>m<CR> ... but in a script, :execute must be used. All the literal parts of the Ex command must be quoted (single or double quotes), and then concatenated with the variables: execute 'sleep' timetowait . 'm' In your example, you want to place the i variable into the :%s command: for i in range(1,40) | execute '%s/S' . i . 'Tg//gn' | endfor | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65869/"
]
} |
190,947 | I have a large file in the following format: 2 1019 0 12 2 1019 3 0 2 1021 0 2 2 1021 2 0 2 1022 4 52 1030 0 1 2 1030 5 0 2 1031 4 4 If the values in column 2 match, I want to sum the values in column 3 and 4 of both lines, else just the sum of the values in the unique line. So the output I am hoping for would look like this: 2 1019 15 2 1021 4 2 1022 9 2 1030 6 2 1031 8 I am able to sort files according to column 2 with awk or sort and sum the last columns with awk , but only for individual lines not for two lines where column 2 matches. | I would do this in Perl: $ perl -lane '$k{"$F[0] $F[1]"}+=$F[2]+$F[3]; END{print "$_ $k{$_}" for keys(%k) }' file 2 1019 152 1021 42 1030 62 1031 82 1022 9 Or awk: awk '{a[$1" "$2]+=$3+$4}END{for (i in a){print i,a[i]}}' file If you want the output sorted according to the second column you could just pipe to sort : awk '{a[$1" "$2]+=$3+$4}END{for (i in a){print i,a[i]}}' file | sort -k2 Note that both solutions include the 1st column as well. The idea is to use the first and second columns as keys to a hash (in perl) or an associative array (in awk). The key in each solution is column1 column2 so if two lines have the same column two but a different column one, they will be grouped separately: $ cat file2 1019 2 32 1019 4 13 1019 2 2$ awk '{a[$1" "$2]+=$3+$4}END{for (i in a){print i,a[i]}}' file3 1019 42 1019 10 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/190947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107086/"
]
} |
190,985 | I have an assignment to put sentences in a text file on separate lines. Something like this almost works: cat file.txt | tr '.' '\n' But I don't want to lose dots, question marks and exclamation marks from my sentences. How can I make this work? | I can't be sure without seeing an actual example of your data but what you're probably looking for is adding a newline after each occurrence of . , ! and ? . I don't know how you want to deal with semicolons ( ; ) since they're not really marking an end of a sentence. That's up to you. Anyway, you could try sed : $ echo 'This is a sentence! And so is this. And this one?' | sed 's/[.!?] */&\n/g' This is a sentence! And so is this. And this one? The s/// is the substitution operator. Its general format is s/pat/replacement and it will replace pat with replacement . The g at the end makes it run the replacement on all occurrences of pat . Without it, it would stop at the first one. The & is a special sed construct which means "whatever was matched". So, here we're substituting any of . , ! , or ? with whatever was matched and a newline. If your text can include abbreviations such as e.g. , you might want to only replace if the next letter is a CAPITAL: $ echo 'This is a sentence! And so is this. And this one? Negative, i.e. no.' | sed 's/\([.!?]\) \([[:upper:]]\)/\1\n\2/g' This is a sentence!And so is this.And this one?Negative, i.e. no. Note that this will not deal with sentences like Dr. Jones said hello. correctly since it will assume that the . after Dr defines a sentence given that the next letter is capitalized. However, we are now approaching a level of complexity that is way beyond the simple Q&A format and actually requires a full-blown natural language parser. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/190985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107109/"
]
} |
191,017 | I have a fedora 20 installation with the clock set to UTC, and no TZ environment variable. Why the date command is outputting the date in the CET timezone (which by the way is my local timezone)? [fedora@slave2 ~]$ ls -l /etc/localtimelrwxrwxrwx. 1 root root 27 Apr 8 2014 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC[fedora@slave2 ~]$ echo $TZ[fedora@slave2 ~]$ env | grep TZ[fedora@slave2 ~]$ dateWed Mar 18 17:20:44 CET 2015 Also, asking for the time in a little java program outputs the time in UTC (while if I set the TZ environment to CET, then in outputs the date in CET): [fedora@slave2 ~]$ java DateDemoWed Mar 18 16:24:17 UTC 2015[fedora@slave2 ~]$ TZ=CET java DateDemoWed Mar 18 17:24:24 CET 2015 (I got the DateDemo java code from http://www.tutorialspoint.com/java/java_date_time.htm ) Additional info: [fedora@slave2 ~]$ type datedate is hashed (/usr/bin/date)[fedora@slave2 ~]$ set +h[fedora@slave2 ~]$ type datedate is /usr/bin/date[fedora@slave2 ~]$ dateWed Mar 18 17:58:26 CET 2015[fedora@slave2 ~]$ strace -fe open dateopen("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3open("/etc/localtime", O_RDONLY|O_CLOEXEC) = 3Wed Mar 18 18:02:41 CET 2015+++ exited with 0 +++ zdump: [fedora@slave2 ~]$ zdump -v /etc/localtime | head/etc/localtime -9223372036854775808 = NULL/etc/localtime -9223372036854689408 = NULL/etc/localtime Mon Dec 31 23:53:55 1900 UTC = Mon Dec 31 23:59:59 1900 LMT isdst=0 gmtoff=364/etc/localtime Mon Dec 31 23:53:56 1900 UTC = Mon Dec 31 23:53:56 1900 WET isdst=0 gmtoff=0/etc/localtime Sun Sep 29 23:59:59 1946 UTC = Sun Sep 29 23:59:59 1946 WET isdst=0 gmtoff=0/etc/localtime Mon Sep 30 00:00:00 1946 UTC = Mon Sep 30 01:00:00 1946 CET isdst=0 gmtoff=3600/etc/localtime Sun Mar 31 00:59:59 1985 UTC = Sun Mar 31 01:59:59 1985 CET isdst=0 gmtoff=3600/etc/localtime Sun Mar 31 01:00:00 1985 UTC = Sun Mar 31 03:00:00 1985 CEST isdst=1 gmtoff=7200/etc/localtime Sun Sep 29 00:59:59 1985 UTC = Sun Sep 29 02:59:59 1985 CEST isdst=1 gmtoff=7200/etc/localtime Sun Sep 29 01:00:00 1985 UTC = Sun Sep 29 02:00:00 1985 CET isdst=0 gmtoff=3600 | The date command, like almost all programs, relies on the standard library to access timezone data. On Linux (except for some embedded systems) and *BSD, the standard library determines the current timezone from the content of /etc/localtime . It appears that on your system, /etc/localtime does not contain what it seems to contain. Like Stéphane Chazelas and derobert , I strongly suspect that the file /usr/share/zoneinfo/Etc/UTC , which /etc/localtime is a symbolic link to, contains incorrect information, probably because someone who didn't know what they were doing attempted to change the system timezone and ended up overwriting a system file. I recommend reinstalling the time zone information to make sure that your system isn't corrupted. Run rpm -qf /usr/share/zoneinfo/Etc/UTC to see which package contains that file and reinstall it with yum reinstall . 
Then set the timezone properly, either with the timedatectl command or by changing the /etc/localtime symbolic link and the text file /etc/timezone or /etc/sysconfig/clock (I think Fedora uses the latter): ln -snf /usr/share/zoneinfo/Europe/Madrid /etc/localtime ; echo Europe/Madrid >/etc/timezone ; sed -i -e '/^ *ZONE=/d' /etc/sysconfig/clock ; echo 'ZONE="Europe/Madrid"' >>/etc/sysconfig/clock Java does things differently — it bypasses the standard library and reads /etc/timezone or /etc/sysconfig/clock instead. That's why you're seeing different timezone information from Java programs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17326/"
]
} |
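After reinstalling the tzdata package and fixing the links, a quick sanity check could look like this (Europe/Madrid is only an example zone):
readlink /etc/localtime
date
TZ=Europe/Madrid date
zdump /etc/localtime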
191,033 | My IOstat doesn't change...at all. It'll show a change in blocks being read and written, but it doesn't change at all when it comes to blocks/kB/MB read and written. When the server sits idle...it shows 363kB_read/s, 537kB_wrtn/s. If I put it under heavy load...it says the same thing. Is it bugged out? How do I fix it? Using Centos 6, being used a primary mysql server. | Could you list the specific command you are using? The first printout is usually the average over the life of the system it rarely changes. Run "iostat -x 1 10" that will get you 10 runs of iostat in 1 second intervals with extended statistics. run 2 - 10 should have the data you want. If it does then you can fiddle with the parameters to get exactly what you need. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107140/"
]
} |
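To focus on a single disk with extended, kilobyte-based statistics, something like this should work with the sysstat iostat (sda is a placeholder device name):
iostat -dxk sda 1 10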
191,122 | I am having a variable which shows on echo like this $ echo $var129 148 I have to take only 129 as output.How will I split 129 and 148? | In addition to jasonwryan's suggestion , you can use cut : echo $var | cut -d' ' -f1 The above cut s the echo output with a space delimiter ( -d' ' ) and outputs the first field ( -f1 ) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/191122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106940/"
]
} |
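A shell-only alternative that avoids the extra processes, assuming $var is whitespace-separated as shown:
read -r first rest <<< "$var"
echo "$first"    # prints 129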
191,138 | I have a folder containing approximately 320116 .pdb.gz files. I want to uncompress them all. If I use gunzip *.gz it gives me an error i.e. argument list too long. The folder is about 2GB. Please give me an appropriate suggestion. | find . -name '*.pdb.gz' -exec gunzip {} + -exec gunzip {} + will provide gunzip with many but not too many file names on its command line. This is more efficient than -exec gunzip {} \; which starts a new gunzip process for each and every file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/191138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107208/"
]
} |
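If speed matters for ~320000 files, a parallel variant should also stay under the argument-list limit; the batch size and job count here are arbitrary, and -P needs GNU xargs:
find . -name '*.pdb.gz' -print0 | xargs -0 -n 500 -P 4 gunzip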
191,205 | In Bash, how does one do base conversion from decimal to another base, especially hex. It seems easy to go the other way: $ echo $((16#55))85 With a web-search, I found a script that does the maths and character manipulation to do the conversion, and I could use that as a function, but I'd have thought that bash would already have a built-in base conversion -- does it? | With bash (or any shell, provided the printf command is available (a standard POSIX command often built in the shells)): printf '%x\n' 85 With zsh , you can also do: dec=85hex=$(([##16]dec)) That works for bases from 2 to 36 (with 0-9a-z case insensitive as the digits). $(([#16]dev)) (with only one # ) expands to 16#55 or 0x55 (as a special case for base 16) if the cbases option is enabled (also applies to base 8 ( 0125 instead of 8#125 ) if the octalzeroes option is also enabled). With ksh93 , you can use: dec=85base54=${ printf %..54 "$dec"; } Which works for bases from 2 to 64 (with 0-9a-zA-Z@_ as the digits). With ksh and zsh , there's also: $ typeset -i34 x=123; echo "$x"34#3l Though that's limited to bases up to 36 in ksh88, zsh and pdksh and 64 in ksh93. Note that all those are limited to the size of the long integers on your system ( int 's with some shells). For anything bigger, you can use bc or dc . $ echo 'obase=16; 9999999999999999999999' | bc21E19E0C9BAB23FFFFF$ echo '16o 9999999999999999999999 p' | dc21E19E0C9BAB23FFFFF With supported bases ranging from 2 to some number required by POSIX to be at least as high as 99. For bases greater than 16, digits greater than 9 are represented as space-separated 0-padded decimal numbers. $ echo 'obase=30; 123456' | bc 04 17 05 06 Or same with dc ( bc used to be (and still is on some systems) a wrapper around dc ): $ echo 30o123456p | dc 04 17 05 06 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/191205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88196/"
]
} |
191,206 | I wonder whether I can compile application on one Linux distribution and use it on another Linux distribution (same CPU architecture). If not what problems I can run into? Only problems which came to my mind are are concerning dynamically linked libraries: Lack of some library or version of library e.g. lack of /usr/lib/qt5.so Can compiler flags be an issue here? Are there some other possible difficulties? | With bash (or any shell, provided the printf command is available (a standard POSIX command often built in the shells)): printf '%x\n' 85 With zsh , you can also do: dec=85hex=$(([##16]dec)) That works for bases from 2 to 36 (with 0-9a-z case insensitive as the digits). $(([#16]dev)) (with only one # ) expands to 16#55 or 0x55 (as a special case for base 16) if the cbases option is enabled (also applies to base 8 ( 0125 instead of 8#125 ) if the octalzeroes option is also enabled). With ksh93 , you can use: dec=85base54=${ printf %..54 "$dec"; } Which works for bases from 2 to 64 (with 0-9a-zA-Z@_ as the digits). With ksh and zsh , there's also: $ typeset -i34 x=123; echo "$x"34#3l Though that's limited to bases up to 36 in ksh88, zsh and pdksh and 64 in ksh93. Note that all those are limited to the size of the long integers on your system ( int 's with some shells). For anything bigger, you can use bc or dc . $ echo 'obase=16; 9999999999999999999999' | bc21E19E0C9BAB23FFFFF$ echo '16o 9999999999999999999999 p' | dc21E19E0C9BAB23FFFFF With supported bases ranging from 2 to some number required by POSIX to be at least as high as 99. For bases greater than 16, digits greater than 9 are represented as space-separated 0-padded decimal numbers. $ echo 'obase=30; 123456' | bc 04 17 05 06 Or same with dc ( bc used to be (and still is on some systems) a wrapper around dc ): $ echo 30o123456p | dc 04 17 05 06 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/191206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27960/"
]
} |
191,218 | I have a virtual box running with CentOS. I have attached a new virtual disk to the existing CentOS VM and I'm now trying to install GRUB on this newly attached disk. Later, I will bring up a second VM with a newly prepared bootable hard disk with a custom root filesystem and kernel. I have tried the following steps: Attached a new virtual disk to the existing working CentOS machine. Created a new partition with fdisk /dev/sdb . While partitioning, I chose the options primary partition, partition number 1 and other default options. Formatted the disk with mkfs.ext3 /dev/sdb1 . Mounted the disk to /media/new_drive . Installed GRUB grub-install /dev/sdb1 --root-directory=/media/new_drive/ . After this, the second VM with the newly prepared hard disk didn't boot; I got the error: could not read from the boot medium . It seems the MBR is not updated after grub-install , but I can see GRUB installed under /boot/grub on the new drive. But the worst thing is, it has corrupted my existing CentOS GRUB: The CentOS VM hangs showing a black screen with the only text being GRUB . Why does grub-install /dev/sdb1 not modify the MBR of sdb1? Is this not the right way to install GRUB on new drive? | I'm not a grub2 expert (sorry) but try adding --skip-fs-probe to your grub-install line, I have found this prevents creation of /boot/grub/device.map which can cause booting to a grub prompt. I think that without this parameter grub-install, instead of doing what you tell it, thinks it is cleverer than you, and may do something different. Another thing is to be sure you are using the right grub-install (i.e. for grub2 and not for original grub). This isn't a problem if you are inside Centos but with SystemRecoveryCD both versions are available and so you have to use grub2-install . I learned the hard way... And as @wurtel pointed out (kudos), you should specify a drive not a partition. Grub2 installs in sector 0 of the whole disk drive, and this 'stub' is what runs at boot time, but it needs to know whereabouts on the disk it should install the files for the next stage of booting - this is what the --root-directory parameter is for. (I think.) Reading man grub-install and googling I see that --root-directory is not really meant to be used for grub2 versions 1.99++, though it does work in my experience. You are meant to use --boot-directory and refer to the actual boot directory, so this would give you: grub-install /dev/sdb --skip-fs-probe --boot-directory=/media/new_drive/boot | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106741/"
]
} |
191,219 | I want to write a sed (or awk) command to replace a string from one file with the entire contents of another file. Note that second file from which I want to get the content has more than one line. I tried this: sed -e "s/PLACEHOLDER/$(sed 's:/:\\/:g' TestOutput.txt)/" SQLInput.txt but got an error saying sed: -e expression #1, char 22: unterminated 's' command | try sed -i '/PLACEHOLDER/ r TestOutput.txt' SQLInput.txt where -i edit in place /PLACEHOLDER/ search for pattern r TestOutput.txt read file note that /PLACEHOLDER/ is not deleted. to have it deleted sed -i -e '/PLACEHOLDER/ r TestOutput.txt' -e s/PLACEHOLDER// SQLInput.txt where -e /PLACEHOLDER/d will delete entire line with PLACEHOLDER -e s/PLACEHOLDER// will delete PLACEHOLDER string | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107258/"
]
} |
191,254 | Why do many commands provide the option -q or --quiet to suppress output when you can easily achieve the same thing by redirecting standard output to the null file? | While you can easily redirect in a shell, there are other contexts where it's not as easy, like when executing the command in another language without using a shell command-line. Even in a shell: find . -type f -exec grep -q foo {} \; -printf '%s\n' to print the size of all the files that contain foo . If you redirect to /dev/null , you lose both find and grep output. You'd need to resort to -exec sh -c 'exec grep foo "$1" > /dev/null' sh {} \; (that is, spawn an extra shell). grep -q foo is shorter to type than grep foo > /dev/null Redirecting to /dev/null means the output is still written and then discarded, that's less efficient than not writing it (and not allocate, prepare that output to be written) that allows further optimisations. In the case of grep for instance, since with -q , grep knows the output is not required, it exits as soon as it finds the first match. With grep > /dev/null , it would still try to find all the matches. quiet doesn't necessarily mean silent . For some commands, it means reduce verbosity (the opposite of -v|--verbose ). For instance, mplayer has a --quiet and a --really-quiet . With some commands, you can use -qqq to decrease verbosity 3 times. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/191254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36718/"
]
} |
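The -q flag is mostly useful when a command is run only for its exit status, for example in a conditional:
if grep -q foo /etc/passwd; then
    echo "found"
fi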
191,272 | Every time I boot Debian, the boot log shows a bunch of xhci_hcd problems, and messages like usb 3-13: hub failed to enable device, error -22usb 3-13: device descriptor read/8, error -61 I'm having a hard time finding out what these error codes actually mean, though. Is there anywhere that lists what all of these error codes mean? | The first one is EINVAL (a standard POSIX C error) inverted. If you are curious it's from line 4218 in [src]/drivers/usb/core/hub.c (v. 3.19): 4217 if (udev->state != USB_STATE_DEFAULT)4218 return -EINVAL; The other one is from the hub_port_init() function in the same file. These kinds of error messages aren't really intended to provide more information to end users than what's there ("hub failed to enable device"), however. They're used in debugging, possibly including if you were to file a bug report. If you were hoping for documentation that will explain the problem in detail I think you are out of luck. Part of the reason for this is perhaps that there may not be any more accurate, detailed explanation that can be provided. Some things can fail the same way with a range of indeterminate potential causes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39688/"
]
} |
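To translate such negated errno values into their symbolic meaning, one option (assuming Perl is available) is:
perl -MPOSIX -e 'print strerror(22), "\n"'    # Invalid argument (EINVAL)
Newer versions of the moreutils package also ship an errno utility for the same purpose.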
191,274 | I'm searching for files based on a regular expression, and then I'm trying to search those files for content. So, for example, I have something like #Find all C++ files that match a certain pattern and then search themfind . -name "*.cpp" | grep "<name regex>" | xargs grep "<content regex>" The problem I'm running into is that some of the paths have spaces in them, which confuses xargs . I know that if I was just using find , I could use the -print0 argument (along with the -0 argument on xargs ) to keep xargs from treating spaces as delimiters. Is there something similar with grep ? Or am I approaching this problem in the wrong way entirely? Naively, find to grep to xargs grep makes sense to me, but I'm open to other approaches that yield the same results. | Use something like this perhaps (if gnu grep). grep -r 'content pattern' --include==*.cpp man grep --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude) Also see the options for null delimiters. -Z, --null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. -z, --null-data Treat the input as a set of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. Like the -Z or --null option, this option can be used with commands like sort -z to process arbitrary file names. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3807/"
]
} |
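If you want to keep the original find-then-grep structure (for instance when the name regex is more complex than a glob), the null-delimiter options quoted above combine like this with GNU grep; the two regex placeholders are the ones from the question:
find . -name '*.cpp' -print0 \
  | grep -z '<name regex>' \
  | xargs -0 grep '<content regex>'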
191,281 | An nginx server is up and running locally on my system[ubuntu 14.04]. But i am having difficulty accessing the same server from my public ip address, or from any other system using my public ip address. I have just started looking into nginx, its been great, but am stuck at this point now. Issue : Nothing displays when i use my public ip address in the browser. below is the config file "firstsite.com" server { listen 80 default_server; server_name firstsite.com; root /var/www/uwsgiSite; index index.html; location / { try_files $uri $uri.html $uri/ index.html; allow all; }} which is available in "sites-available" and soft linked into "sites-enabled" folders. Also i have an entry in the /etc/hosts file 127.0.0.1 firstsite.com All the folders/files are available in /var/www with permission. Later i would like to connect the same with a domain name [buy purchasing one] to my computer. For now, i am looking to make it work just with my public ip address, which is not working yet. I need to make it accessible. | Use something like this perhaps (if gnu grep). grep -r 'content pattern' --include==*.cpp man grep --include=GLOB Search only files whose base name matches GLOB (using wildcard matching as described under --exclude) Also see the options for null delimiters. -Z, --null Output a zero byte (the ASCII NUL character) instead of the character that normally follows a file name. For example, grep -lZ outputs a zero byte after each file name instead of the usual newline. This option makes the output unambiguous, even in the presence of file names containing unusual characters like newlines. This option can be used with commands like find -print0, perl -0, sort -z, and xargs -0 to process arbitrary file names, even those that contain newline characters. -z, --null-data Treat the input as a set of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. Like the -Z or --null option, this option can be used with commands like sort -z to process arbitrary file names. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191281",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107302/"
]
} |
191,296 | I have 32 GB of memory in my PC. This is more than enough for a linux OS. Is there an easy to use version of Linux (Ubuntu preferably) that can be booted via optical or USB disk and be run completely within RAM? I know a live disc can be booted with a hard disk, but stuff still runs off the disc and this takes a while to load. I'd like everything loaded into RAM and then run from there, completely volatile. Any files I need to create would be saved to a USB disk. I'm aware of http://en.wikipedia.org/wiki/List_of_Linux_distributions_that_run_from_RAM but these all depend on a little bit of RAM. I'd prefer something like Ubuntu instead of these light versions. | Ubuntu can run on RAM, but it requires some manual changes: https://wiki.ubuntu.com/BootToRAM | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38537/"
]
} |
191,313 | I'm experiencing a very weird issue with a fresh Fedora 21 image on a Linode instance. I cannot reproduce it outside of Linode. The issue is that my systemd journal is not persistent across reboots. According to the documentation : By default, the journal stores log data in /run/log/journal/. Since /run/ is volatile, log data is lost at reboot. To make the data persistent, it is sufficient to create /var/log/journal/ where systemd-journald will then store the data. I have checked that /var/log/journal exists and I have also set Storage=persistent in /etc/systemd/journald.conf. The log directory contains a bunch of data: $ du -sh /var/log/journal/89M /var/log/journal/ The journal, however, only contains log entries since the last system restart: $ journalctl --list-boots 0 9f6a5a789dd64ec0b067140905e6da86 Thu 2015-03-19 15:08:48 GMT—Thu 2015-03-19 22:14:37 GMT Even if I journalctl --flush before I reboot the logs are lost. I suspect this is an issue with Linode's Fedora 21 image, and I have opened a support ticket with them. Meanwhile, I continue to search for the cause of this problem. How can I debug this? What could cause this? What can I do to fix this? | The reason for this behavior is that the machine identifier in /etc/machine-id changes at every reboot. This starts a new logging directory under /var/log/journal . Old logs can be viewed with the following command: journalctl --merge I'm still looking into the cause of the changing machine-id. Linode support is aware of the problem. I will update this answer when I know more. UPDATE -- The root cause of the problem is simply that Linode zeroed out the contents of /etc/machine-id from their filesystem images. The result is the following chain of events: The kernel loads and mounts the root filesystem read-only systemd, run from the initial ramdisk, tries to read /etc/machine-id from the root filesystem (the file exists but has zero contents) systemd cannot read the machine identifier, but can also not write a new one since the root filesystem is mounted read-only systemd mounts tmpfs on /etc/machine-id (Yes, apparently you can mount a filesystem onto a file ) systemd invokes systemd-machine-id-setup which generates a random machine-id and stores it in the now-volatile /etc/machine-id The system boots with a volatile machine identifier You can check if your system has a volatile, rather than a permanent machine-id by looking at the output of mount : $ mount | grep machine-idtmpfs on /etc/machine-id type tmpfs (ro,mode=755) The problem is easy to fix: simply write a persistent machine-id to the real /etc/machine-id . This is easier said than done, however, because you cannot unmount tmpfs from /etc/machine-id on a running system. These are the steps I took to fix it on Linode: cp /etc/machine-id /etc/machine-id.copy , then poweroff the system In the Linode Manager, go to the tab Rescue and boot into rescue mode Access the system via the Lish console Mount the root filesystem: mount /dev/xvda /mnt Move the copy created in step 1 to the real machine-id: mv /etc/machine-id.copy /etc/machine-id Reboot Such are the consequences of a missing machine-id at boot. I hope this will help a random passer-by in the future. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/191313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18718/"
]
} |
191,314 | I would like to see how many PCI slots I have in a server and how many of them are in use. Is this possible with just some Linux commands? (lspci doesn't seem to provide the exact information I need.) | May be you can use: dmidecode -t 9 For getting number of slots: dmidecode -t 9 | grep "System Slot Information" | wc -l For getting count of available: dmidecode -t 9 | grep -A3 "System Slot Information" | grep -c -B1 "Available" More info dmidecode . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/191314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6487/"
]
} |
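To count the slots that are occupied rather than free, the same idea can be inverted (the exact field text may vary a little between BIOS vendors):
dmidecode -t 9 | grep -c "Current Usage: In Use"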
191,353 | Why does the rm -f !(/var/www/wp) command have no effect? I want to remove all the files in /var/www , except for the /var/www/wp directory, which should remain. root@born:~# ls /var/wwwauthorize.php index.html INSTALL.txt README.txt UPGRADE.txtCHANGELOG.txt index.php LICENSE.txt robots.txt web.configCOPYRIGHT.txt INSTALL.mysql.txt MAINTAINERS.txt scripts wpcron.php INSTALL.pgsql.txt misc sites xmlrpc.phpdrupal install.php modules themesincludes INSTALL.sqlite.txt profiles update.phproot@born:~# rm -f !(/var/www/wp)root@born:~# ls /var/wwwauthorize.php index.html INSTALL.txt README.txt UPGRADE.txtCHANGELOG.txt index.php LICENSE.txt robots.txt web.configCOPYRIGHT.txt INSTALL.mysql.txt MAINTAINERS.txt scripts wpcron.php INSTALL.pgsql.txt misc sites xmlrpc.phpdrupal install.php modules themesincludes INSTALL.sqlite.txt profiles update.php | If you're running bash ≥4.3, then if you have backups, now would be a good time to find them . I assume you're using Bash. The !(...) filename expansion pattern expands to every existing path that doesn't match the pattern at the point it's used . That is: echo rm -f !(/var/www/wp) expands to every filename in the current directory that isn't "/var/www/wp". That is every file in the current directory. In essence, you ran rm -f * in ~ . Do not run the rm command above . To get the effect you wanted, use the pattern only for the part of the path you want it (not) to match, just like you would for * , {a,b,c} , or any other pattern. The command: echo rm -f /var/www/!(wp) will print out the command you wanted to run. I don't, to be honest, suggest doing things this way - it's prone to exactly the sort of issue you had here, and others. Something with find is easier to follow. At the very least, echo the command before you run it and you'll see what is happening. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/191353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
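A find-based alternative avoids the extglob pitfall entirely; keep the echo in place until the printed commands look right, then remove it:
find /var/www -mindepth 1 -maxdepth 1 ! -name wp -exec echo rm -rf {} +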
191,370 | I checked my /var/log/messages log file, and every 2 seconds a new entry is getting added: Mar 20 11:42:30 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:32 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:34 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:36 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:38 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:40 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:42 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844Mar 20 11:42:44 localhost kernel: EXT4-fs error (device dm-0): ext4_lookup: deleted inode referenced: 184844 I didn't do any kind of operation on the system, but the error is still getting logged. I suppose the FS is corrupted. What should I do? | I am sharing how I resolved this issue. I edited /etc/fstab and set the fsck pass number (the sixth field) for the root FS to 1: /dev/mapper/vg_vipin-lv_root / ext4 defaults 0 1 Then I rebooted; fsck ran during boot and everything is back to normal. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/191370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106040/"
]
} |
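If you would rather not change /etc/fstab permanently, requesting a one-time check on the next boot should also work on CentOS 6 with SysV init:
touch /forcefsck
reboot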
191,378 | I'm wondering how some terminal magic works internally. While playing around with docker containers, the environment variable $TERM was not set. This led to strange-looking console applications like vim and tmux, but also to CTRL+l (clear screen) being ignored. I'm pretty sure that all feature like partial screen updates, colors, commands like screen reset etc. are realized using escape codes, right? So where is this variable interpreted and allows for example resetting my terminal screen using CTRL+l if I set the right value there? Who checks for example which colors are supported (xterm vs xterm-256color)? The shell? The application or a library like ncurses? And where are the possible values / terminal types defined? | $TERM is read and interpreted by the terminfo system. terminfo also refers to the database of terminal descriptions which you can find on most systems in /usr/share/terminfo . $TERM must match one of the entries in that database. There was also an older library called termcap which had fewer capabilities, but terminfo has replaced it. In modern systems, terminfo is part of the ncurses library. Applications usually either fetch terminal capabilities directly using library functions like tigetstr() or they use higher level curses interfaces to manage the layout of the screen. Either way, $TERM and the terminfo database will be consulted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9266/"
]
} |
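A few commands for poking at the terminfo layer directly (all shipped with ncurses; the entry name is an example):
infocmp xterm-256color | head    # dump the database entry $TERM points at
tput colors                      # how many colors the current terminal reports
tput clear | od -c               # the escape sequence behind "clear screen"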
191,594 | I'm reading the /proc directory (or pseudo-fs) to find all processes. I'm getting the information I need from /proc/[pid]/status but there's something else I need. Is there any way to figure out which processes are critical to system? for example using parent-pid or UID of the process? By system process, I mean processes that would otherwise exist on a fresh installation of the OS, and before installing any application or services. This might not mean kernel threads, or system processes at all, but to sum it up, I mean processes, that their termination, would disrupt the fundamental structure of the system. PS. I'm working on an android app, but since this part is done using pure Linux file system I asked it here and I don't suppose that there would be any different. | If you have htop you can press Shift + k to toggle the display of kernel threads. If you press F5 for tree mode, they should all appear as children of kthreadd . There are some visible differences between a kernel thread and a user-space thread: /proc/$pid/cmdline is empty for kernel threads - this is the method used by ps and top to distinguish kernel threads. The /proc/$pid/exe symbolic link has no target for kernel threads - which makes sense since they do not have a corresponding executable on the filesystem. More specifically, the readlink() system call returns ENOENT ("No such file or directory"), despite the fact that the link itself exists, to denote the fact that the executable for this process does not exist (and never did). Therefore, a reliable way to check for kernel threads should be to call readlink() on /proc/$pid/exe and check its return code. If it succeeds then $pid is a user process. If it fails with ENOENT , then an extra stat() on /proc/$pid/exe should tell apart the case of a kernel thread from a process that has just terminated. /proc/$pid/status is missing several fields for most kernel threads - more specifically a few fields related to virtual memory. The Above answer from Identifying kernel threads Another way to distinguish kernel threads from other process is to run top -c . From the top manual: 3. COMMAND -- Command Name or Command Line Display the command line used to start a task or the name of the associated program. You toggle between command line and name with 'c', which is both a command-line option and an interactive com‐ mand. When you've chosen to display command lines, processes without a command line (like kernel threads) will be shown with only the program name in brackets, as in this example: [ mdrecoveryd ] Running ps aux also displays processes that were launched without a command in square brackets ( and will have an empty /proc/[pid]/cmdline file ). Example: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMANDroot 19 0.0 0.0 0 0 ? S< Mar02 0:00 [kworker/1:0H] See package procps-3.2.8 file /proc/readproc.h . // Basic data structure which holds all information we can get about a process.// (unless otherwise specified, fields are read from /proc/#/stat)//// Most of it comes from task_struct in linux/sched.h | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191594",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107484/"
]
} |
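A small sketch of the readlink test described above, listing PIDs that look like kernel threads; run it as root so /proc/*/exe is readable for other users' processes:
for d in /proc/[0-9]*; do
    if ! readlink "$d/exe" >/dev/null 2>&1 && [ -d "$d" ]; then
        echo "possible kernel thread: ${d#/proc/}"
    fi
done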
191,595 | I need practical example how get rid folders which are not in the list in Linux.So i do not need to compare its contents or md5sums, just compare folders names. For example, one folder has few folders inside target_folder/├── folder1├── folder2├── folder3└── folder4 and my folders name list is txt file, includes folder1, folder2 and not folder3 and folder4. How to remove folder3 and folder4 via bash script? This has been answered on serverfault as GLOBIGNORE=folder1:folder2rm -r *uset GLOBIGNORE but my real task to delete bunch of folders. The txt list contains around 100 folders and target folder to clean is 200 folders. Note that this should work both in Linux and FreeBSD. EDIT: target_folder may contain folders with sub-folders and also files. No spaces and leading dots and names are not similar: foo.com bar.org emptydir file.txt simplefile. But all these items should be deleted except those names in the list. First answer is more obvious and simple. Second one more advanced and flexible, it allows you to delete based on item type as well. | Assuming your file names do not contain any of :\[*? , you could still use GLOBIGNORE . Just format your list of directories accordingly. For example: $ cat names.txtfolder1folder3 That is very easy to transform into a colon separated list: $ paste -s -d : names.txtfolder1:folder3 So, you can now set that as the value of GLOBIGNORE: GLOBIGNORE=$(paste -s -d : ../names.txt) And proceed to delete them normally: rm -r -- * I just tested this on Linux with 300 directories and it worked perfectly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107487/"
]
} |
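To preview what would be removed before deleting anything, the keep-list can be turned into a filter; this assumes the names contain no newlines and uses only standard grep options:
ls -1 target_folder | grep -vxF -f names.txt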
191,618 | I know several ways to reset root user's password, but want to know which is the best and why it is. For example: A method: Grub > e init=/bin/sh (Remove rhgb and quiet tags if necessary) > Ctrl + x /usr/sbin/load_policy -i mount -o remount,rw / passwd root or passwd mount -o remount,ro / B method: Grub > e rd.break > Ctrl + x mount -o remount,rw /sysroot/ chroot /sysroot/ passwd root or passwd touch /.autorelabel Which is the best? Why is it best? I'm preparing for RHCSA (Red Hat Certified System Admin) exam. I need to know the relative merits of each approach. Is one of them more portable? Safer? Is there a reason to choose one over the other? | I think the best way is as is shown in Red Hat documentation . This is your second method. For GRUB2/RHEL7 single/emergency mode should not work since it will use sulogin to authenticate you before presenting the command prompt. So lets mark off different methods. For RHEL5, RHEL6, append 1 , s or init=/bin/bash to kernel cmdline For RHEL7, RHEL8, CentOS7, CentOS8, append rd.break or init=/bin/bash to kernel cmdline It appears that the second method is not available on RHEL5 and RHEL6. But for RHEL7 I will prefer the first because adding init=/bin/bash is a bit tricky when single mode is password protected and may be appending rd.break is a way to standardize it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106608/"
]
} |
191,632 | I have a daily backups named like this: yyyymmddhhmm.zip // pattern201503200100.zip // backup from 20. 3. 2015 1:00 I'm trying to create a script that deletes all backups older than 3 days. The script should be also able to delete all other files in the folder not matching the pattern (but there would be a switch for that in the script to disable this). To determine the file age I don't want to use backups timestamps as other programs also manipulate with the files and it can be tampered. With the help of: Remove files older than 5 days in UNIX (date in file name, not timestamp) I got: #!/bin/bashDELETE_OTHERS=yesBACKUPS_PATH=/mnt/\!ARCHIVE/\!backups/THRESHOLD=$(date -d "3 days ago" +%Y%m%d%H%M)ls -1 ${BACKUPS_PATH}????????????.zip | while read A DATE B FILE do [[ $DATE -le $THRESHOLD ]] && rm -v $BACKUPS_PATH$FILE doneif [ $DELETE_OTHERS == "yes" ]; then rm ${BACKUPS_PATH}*.* // but I don't know how to not-delete the files matching patternfi But it keeps saying: rm: missing operand Where is the problem and how to complete the script? | The first problem in your code is that you are parsing ls . This means it will break very easily, if you have any spaces in your file or directory names for example. You should use shell globbing or find instead. A bigger problem is that you are not reading the data correctly. Your code: ls -1 | while read A DATE B FILE will never populate $FILE . The output of ls -1 is just a list of filenames so, unless those file names contain whitespace, only the first of the 4 variables you give to read will be populated. Here's a working version of your script: #!/usr/bin/env bashDELETE_OTHERS=yesBACKUPS_PATH=/mnt/\!ARCHIVE/\!backupsTHRESHOLD=$(date -d "3 days ago" +%Y%m%d%H%M)## Find all files in $BACKUPS_PATH. The -type f means only files## and the -maxdepth 1 ensures that any files in subdirectories are## not included. Combined with -print0 (separate file names with \0),## IFS= (don't break on whitespace), "-d ''" (records end on '\0') , it can## deal with all file names.find ${BACKUPS_PATH} -maxdepth 1 -type f -print0 | while IFS= read -d '' -r filedo ## Does this file name match the pattern (13 digits, then .zip)? if [[ "$(basename "$file")" =~ ^[0-9]{12}.zip$ ]] then ## Delete the file if it's older than the $THR [ "$(basename "$file" .zip)" -le "$THRESHOLD" ] && rm -v -- "$file" else ## If the file does not match the pattern, delete if ## DELETE_OTHERS is set to "yes" [ $DELETE_OTHERS == "yes" ] && rm -v -- "$file" fidone | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40503/"
]
} |
191,646 | I'd like to run a command at startup which pings a certain address every 10 minutes and writes the result to a file. I've figured out now how to do the pinging and file writing and the 10 min intervals: while true; do my-command-here; sleep 600; done My question is, can I put this in /etc/init.d/rc.local or should I be putting it in /etc/rc.local or somewhere else entirely? I'm specifically concerned because it's an infinite loop so I'm not sure if I could put it in one of these startup scripts. Some help would be appreciated. I'm using Ubuntu 12.04.5 | This isn't really an infinite loop; it's a task that needs to run every ten minutes. As such the task can go into the task scheduler, cron . Run the command crontab -e and add this single line to the bottom of the file: */10 * * * * /path/to/my-command-here Ensure that my-command-here is an executable script ( chmod u+x my-command-here ) and that its first line starts with #! and the name of the script interpreter (typically #!/bin/bash ). Each entry in the pattern */10 * * * * maps to the minute(0-59), hour(0-23), day(1-31), month(1-12), and day of week(0-6, with 0=Sunday). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191646",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107516/"
]
} |
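A minimal version of the ping-and-log script itself could look like this; the host, log path and ping count are placeholders:
#!/bin/bash
# saved as /path/to/my-command-here and made executable with chmod u+x
ping -c 4 example.com >> "$HOME/ping-results.log" 2>&1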
191,655 | I do not have access to netcat or nmap so I'm trying to use bash and the /dev/udp/ special files to test ports. I could do something like: echo "" > /dev/udp/example.com/8000 But $? is always 0 when using UDP. I'm assuming that's because that is the return value of the echo "" command correct? I am basically trying to replicate what I am able to do with nmap and netcat : nmap -sU -p 8000 example.com | grep open >/dev/null && echo 'open'nc -z -u example.com 8000 && echo 'open' How would I do this with /dev/udp ? | For tcp, just checking $? . If connection failed, $? won't be 0 : $ >/dev/tcp/google.com/81bash: connect: Network is unreachablebash: /dev/tcp/google.com/81: Network is unreachable$ echo $?1 It will take time for bash to realize that the connection failed. You can use timeout to trigger bash : $ timeout 1 bash -c '>/dev/tcp/google.com/80' && echo Port open || echo Port closePort open Testing udp port is more complex. Strictly speaking, there is no open state (of course, udp is stateless protocol) with udp. There're only two states with udp, listening or not . If the state is not , you will get an ICMP Destination Unreachable . Unfortunately, firewall or router often drop those ICMP packets, so you won't be sure what state of udp port. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2372/"
]
} |
191,662 | I am trying to set up a staging environment in a VM, in order to test updates before applying them to my main system. In order to do so, I have done a basic installation of Debian Wheezy (same as on the main system) in the VM, then ran as root from within the VM: # dpkg --clear-selections# dpkg --add-architecture i386# apt-get update# ssh me@main-system 'dpkg --get-selections | grep -v deinstall' | \ dpkg --set-selections The i386 architecture is unfortunately needed in my case; the system is amd64 native. The problem is with dpkg --set-selections run in the VM. I do have some packages that require special handling (those are actually the main reason why I want a staging environment in the first place) but when I run the last command above, I get about a gazillion lines of output like: dpkg: warning: package not in database at line NNN: package-name for packages that really should be available in the base system. Examples include xterm , yelp and zip . Now for my question: What is the specific process for transferring the package selection list from one Debian system to another (assuming same Debian release level, in Wheezy) and then subsequently applying those changes? The goal is that both have the same list of installed packages, ideally such that doing a diff between the outputs of dpkg --get-selections or dpkg --list on the two comes back showing no differences. The grep -v deinstall part is borrowed from Prevent packages from being removed after doing dpkg --set-selections over on Ask Ubuntu. I have changed the source in the VM to be the same as on the main system, also installing apt-transport-https : deb https://ftp-stud.hs-esslingen.de/debian/ wheezy main non-freedeb-src https://ftp-stud.hs-esslingen.de/debian/ wheezy main non-freedeb https://ftp-stud.hs-esslingen.de/debian/ wheezy-updates main non-freedeb-src https://ftp-stud.hs-esslingen.de/debian/ wheezy-updates main non-freedeb [arch=amd64] http://archive.zfsonlinux.org/debian wheezy main Looking at the --set-selections output, I'm seeing: dpkg: warning: package not in database at line 1: a2psdpkg: warning: package not in database at line 1: abiworddpkg: warning: package not in database at line 1: abiword-commondpkg: warning: package not in database at line 1: abiword-plugin-grammardpkg: warning: package not in database at line 1: abiword-plugin-mathviewdpkg: warning: package not in database at line 1: accountsservicedpkg: warning: package not in database at line 1: acldpkg: warning: package not in database at line 4: aglfndpkg: warning: package not in database at line 4: aisleriotdpkg: warning: package not in database at line 4: alacartedpkg: warning: package not in database at line 4: alien... The line numbers looked odd, and the corresponding portion of the output of --get-selections is: a2ps installabiword installabiword-common installabiword-plugin-grammar installabiword-plugin-mathview installaccountsservice installacl installacpi-support-base installacpid installadduser installaglfn installaisleriot installalacarte installalien install Notice that in between acl and aglfn are acpi-support-base , acpid and adduser for which no errors are being reported . It seems that the packages for which errors are being reported are either un according to dpkg -l , or dpkg -l doesn't have any idea at all about them ( dpkg-query: no packages found matching ... ). I know there are some locally installed packages, but not many. 
i386 doesn't figure until gcc-4.7-base:i386 install much farther down the list (line 342 in the --get-selections output). | To clone a Debian installation, use the apt-clone utility. It's available (as a separate package, not part of the default installation) in Debian since wheezy and in Ubuntu since 12.04. On the existing machine, run apt-clone clone foo This creates a file foo.apt-clone.tar.gz . Copy it to the destination machine, and run apt-get install apt-cloneapt-clone restore foo.apt-clone.tar.gz If you're working with an old system where apt-clone isn't available, or if you just want to replicate the list of installed packages but not any configuration file, here are the manual steps. On the source machine: cat /etc/apt/sources.list /etc/apt/sources.list.d >sources.listdpkg --get-selections >selections.listapt-mark showauto >auto.list On the target machine: cp sources.list /etc/apt/apt-get update/usr/lib/dpkg/methods/apt/update /var/lib/dpkg/dpkg --set-selections <selections.listapt-get dselect-upgradexargs apt-mark auto <auto.list I believe that you're affected by an incompatible change in dpkg that first made it into wheezy. See bug #703092 for background. The short story is that dpkg --set-selections now only accepts package names that are present in the file /var/lib/dpkg/status or /var/lib/dpkg/available . If you only use APT to manage packages, like most people, then /var/lib/dpkg/available is not kept up-to-date. After running apt-get update and before running dpkg --set-selections and apt-get -u dselect-upgrade , run the following command: apt-cache dumpavail >/tmp/apt.availdpkg --merge-avail /tmp/apt.avail From jessie onwards, you can simplify this to apt-cache dumpavail | dpkg --merge-avail Alternatively, run /usr/lib/dpkg/methods/apt/update /var/lib/dpkg/ or even simpler apt-get install dctrl-toolssync-available Another simple method that doesn't require installing an additional package but will download the package lists again is dselect update See the dpkg FAQ for more information. (This is mentioned in the dpkg man page, but more in a way that would remind you of the issue if you were already aware, not in a way that explains how to solve the problem!) Note that cloning a package installation with dpkg --set-selections doesn't restore the automatic/manual mark in APT. See Restoring all data and dependencies from dpkg --set-selections '*' for more details. You can save the marks on the source system with apt-mark showauto >auto.list and restore them on the target system with xargs apt-mark auto <auto.list | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/191662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2465/"
]
} |
191,694 | I would like to create a file by using the echo command and the redirection operator, the file should be made of a few lines. I tried to include a newline by "\n" inside the string: echo "first line\nsecond line\nthirdline\n" > foo but this way no file with three lines is created but a file with only one line and the verbatim content of the string. How can I create using only this command a file with several lines ? | You asked for using some syntax with the echo command: echo $'first line\nsecond line\nthirdline' > foo (But consider also the other answer you got.) The $'...' construct expands embedded ANSI escape sequences. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/191694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
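printf is the more portable way to get the same three-line file, since echo's escape handling differs between shells:
printf 'first line\nsecond line\nthird line\n' > foo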
191,719 | I have a file open in Vim inside a Linux virtual machine guest and I then try to open the file on the Windows host, and I do not get that warning that goes "Swap file blah.swp already exists!" (The file is shared to the guest.) I want that warning because that is the only way I can find out I am already editing the file somewhere else, like in this case, in the VM! It doesn't matter whether I edit the file on Windows first and then use Vim on Linux in the VM, or I edit the file in the Linux VM and then open the file in Vim on Windows: it's the same result, no warning. You could say the behavior is uniform then from Linux to Windows. In both cases Vim creates a .swo file silently, without complaining as it (I believe) should. However, if the file is opened a second time on the VM while being already open on the VM, I do get the warning, and same thing on Windows (for those who want to ask about my Vim settings). Reading :help recovery does not give anything informative. Version is Vim 7.4 in both cases. | You asked for using some syntax with the echo command: echo $'first line\nsecond line\nthirdline' > foo (But consider also the other answer you got.) The $'...' construct expands embedded ANSI escape sequences. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/191719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107551/"
]
} |
191,754 | Btrfs calculates a crc32c checksum for each file. Is there a way I can view what checksum is stored (as opposed to just reading the file and recalculating it)? | Btrfs calculates a crc32c checksum for each file. This is not correct. Both of the open-source checksumming file-systems (ZFS and BTRFS) calculate a checksum for each logical block (the unnamed source Awe used is correct). This is a checksum of the on-disk data. If the file-system has compression enabled (an increasingly common setting), this checksum is of the data after compression. This means that, even if the file fits in one logical block, it's possible (and increasingly likely) that the file-system's checksum data will be useless to you. If you need a file checksum, the best way to get it would be to calculate it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26068/"
]
} |
191,762 | I can use find /search/location -type l to list all symbolic links inside /search/location . How do I limit the output of find to symbolic links that refer to a valid directory, and exclude both, broken symbolic links and links to files? | With GNU find (the implementation on non-embedded Linux and Cygwin): find /search/location -type l -xtype d With find implementations that lack the -xtype primary, you can use two invocations of find , one to filter symbolic links and one to filter the ones that point to directories: find /search/location -type l -exec sh -c 'find -L "$@" -type d -print' _ {} + or you can call the test program: find /search/location -type l -exec test {} \; -print Alternatively, if you have zsh, it's just a matter of two glob qualifiers ( @ = is a symbolic link, - = the following qualifiers act on the link target, / = is a directory): print -lr /search/location/**/*(@-/) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9266/"
]
} |
191,770 | I try to delete this file on my solaris machine rm "-Insi"rm: illegal option -- Irm: illegal option -- nrm: illegal option -- s I also try this rm "\-Insi" -Insi: No such file or directory rm '\-Insi' -Insi: No such file or directory so what other option do I have? | Try: rm -- -Insi or: rm ./-Insi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67059/"
]
} |
191,795 | I have string which I would like to format. I would like to remove everything between second ; and second last ; . Input cellular organisms;Eukaryota;Opisthokonta;Metazoa;Eumetazoa;Bilateria;Protostomia;Ecdysozoa;Panarthropoda;Arthropoda;Mandibulata;Pancrustacea;Hexapoda;Insecta;Dicondylia;Pterygota;Neoptera;Endopterygota;Coleoptera;Polyphaga;Cucujiformia;Tenebrionoidea;Tenebrionidae;Tenebrionidae incertae sedis;Tribolium;Tribolium castaneum; Output cellular organisms;Eukaryota;Tribolium castaneum; I have tried using sed sed 's/;[^;]*//' <<<"cellular organisms;Eukaryota;Opisthokonta;Metazoa;Eumetazoa;Bilateria;Protostomia;Ecdysozoa;Panarthropoda;Arthropoda;Mandibulata;Pancrustacea;Hexapoda;Insecta;Dicondylia;Pterygota;Neoptera;Endopterygota;Coleoptera;Polyphaga;Cucujiformia;Tenebrionoidea;Tenebrionidae;Tenebrionidae incertae sedis;Tribolium;Tribolium castaneum;" produces cellular organisms;Opisthokonta;Metazoa;Eumetazoa;Bilateria;Protostomia;Ecdysozoa;Panarthropoda;Arthropoda;Mandibulata;Pancrustacea;Hexapoda;Insecta;Dicondylia;Pterygota;Neoptera;Endopterygota;Coleoptera;Polyphaga;Cucujiformia;Tenebrionoidea;Tenebrionidae;Tenebrionidae incertae sedis;Tribolium;Tribolium castaneum; | You can do this easily with awk : awk -F\; '{print $1 ";" $2 ";" $(NF-1) ";" $NF}' This splits the input using ; ( -F\; ), and prints the first ( $1 ), second ( $2 ), second-to-last and last fields ( $(NF-1) and $NF ; NF contains the number of fields). The following variant re-uses the specified field separator in the output: awk -F\; '{print $1 FS $2 FS $(NF-1) FS $NF}' Janis suggested an improved version using OFS too: awk 'BEGIN{FS=OFS=";"} {print $1,$2,$(NF-1),$NF}' or, if you want to keep the separator as another parameter: awk -F\; 'BEGIN{OFS=FS} {print $1,$2,$(NF-1),$NF}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107595/"
]
} |
191,821 | How can I get the process ID of the driver of a FUSE filesystem? For example, I currently have two SSHFS filesystems mounted on a Linux machine: $ grep sshfs /proc/mountshost:dir1 /home/gilles/net/dir1 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0host:dir2 /home/gilles/net/dir2 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0$ pidof sshfs15031 15007 How can I know which of 15007 and 15031 is dir1 and which is dir2? Ideally I'd like to automate that, i.e. run somecommand /home/gilles/net/dir1 and have it display 15007 (or 15031, or “not a FUSE mount point”, as appropriate). Note that I'm looking for a generic answer, not an answer that's specific to SSHFS, like tracking which host and port the sshfs processes are connected to, and what files the server-side process has open — which might not even be possible at all due to connection sharing. I'm primarily interested in a Linux answer, but a generic answer that works on all systems that support FUSE would be ideal. Why I want to know: to trace its operation, to kill it in case of problems, etc. | You can do this easily with awk : awk -F\; '{print $1 ";" $2 ";" $(NF-1) ";" $NF}' This splits the input using ; ( -F\; ), and prints the first ( $1 ), second ( $2 ), second-to-last and last fields ( $(NF-1) and $NF ; NF contains the number of fields). The following variant re-uses the specified field separator in the output: awk -F\; '{print $1 FS $2 FS $(NF-1) FS $NF}' Janis suggested an improved version using OFS too: awk 'BEGIN{FS=OFS=";"} {print $1,$2,$(NF-1),$NF}' or, if you want to keep the separator as another parameter: awk -F\; 'BEGIN{OFS=FS} {print $1,$2,$(NF-1),$NF}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
191,885 | When I run nix-shell nix-shell ~/dev/nixpkgs -A pythonPackages.some-package and then edit phases of pythonPackages.some-package , how to reload nix-shell environment with new changes? Quit nix-shell and rerun is one option, but are there alternatives? | No other simple options, sorry. I can only think of rewriting nix-shell to accomplish what you want. Probably it's not even that hard. Reparse the expression, cleanup the env and refill the env. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107644/"
]
} |
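A workaround some people use instead of a plain quit-and-rerun is to replace the current shell from inside the environment, which re-evaluates the edited expression (a sketch using the attribute path from the question; variables exported by the old environment may leak into the new one):
$ exec nix-shell ~/dev/nixpkgs -A pythonPackages.some-package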
191,894 | I am trying to fill a file with a sequence of random 0 and 1s with a user-defined number of lines and number of characters per line. the first step is to get a random stream of 0 and 1s: cat /dev/urandom | tr -dc 01 then I tried to fill a file with this stream (and end the process of filling by ctrl+c) cat /dev/urandom | tr -dc 01 > foo when I count the numbers of lines of the so created foo file I get 0 lines. cat foo | wc -l0 Now I tried to control the stream, so I created a named pipe and directed the stream into it. Then I made a connection to the named pipe with the dd command in vain hope to control this way the amount of characters per line and number of lines in the file. makefifo namedpipecat /dev/urandom | tr -dc 01 > namedpipedd if=namedpipe of=foo bs=10 count=5 the foo file got indeed filled with 50 byte of 0 and 1 , but the number of lines was still 0. How can I solve it, I guess maybe I have to insert each number of characters a newline into the file, but if so , I do not know how. | How about fold ? It's part of coreutils... $ tr -dc 01 < /dev/urandom | fold -w 30 | head -n 5001010000111110001100101101101000101110011011100100101111000111010101011100101010110111001111011000000000101111110110100110011010111001110011010100011 Or if that's not available, some flavour of awk : $ tr -dc 01 < /dev/urandom | awk \$0=RT RS=.\{,30} | head -n 5000100010010001110100110100111101010010100100110111010001110100011100101001010111101001111010010100111100101101100010100001101100000101001111011011000 Or you could just do something with a loop... $ for line in $(seq 1 5)> do> echo $(tr -dc 01 < /dev/urandom | head -c 30)> done100101100111011110010010100000000000010000010010110111101011010000111110010010000000010100001110110001111011101011001001001010111011000111110001100110 I'm sure there are other ways... I thought maybe hexdump with a custom format could do it, but apparently not... ;) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/191894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102788/"
]
} |
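To make both the number of lines and the characters per line user-defined, a minimal sketch combining the pieces above (lines, width and foo are placeholder names):
lines=5; width=30
for i in $(seq "$lines"); do
  tr -dc 01 < /dev/urandom | head -c "$width"
  echo
done > foo
Afterwards wc -l foo should report exactly $lines lines.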
191,924 | I have 4 files which are like file A >TCONS_00000867 >TCONS_00001442 >TCONS_00001447 >TCONS_00001528 >TCONS_00001529 >TCONS_00001668 >TCONS_00001921 file b >TCONS_00001528 >TCONS_00001529 >TCONS_00001668 >TCONS_00001921 >TCONS_00001922 >TCONS_00001924 file c >TCONS_00001529 >TCONS_00001668 >TCONS_00001921 >TCONS_00001922 >TCONS_00001924 >TCONS_00001956 >TCONS_00002048 file d >TCONS_00001922 >TCONS_00001924 >TCONS_00001956 >TCONS_00002048 All files contain more than 2000 lines and are sorted by the first column. I want to find the lines common to all files. I tried awk and grep and comm but could not get it working. | Since the files are already sorted: comm -12 a b | comm -12 - c | comm -12 - d comm finds common lines between files. By default comm prints 3 TAB-separated columns: The lines unique to the first file, The lines unique to the second file, The lines common to both files. With the -1 , -2 , -3 options, we suppress the corresponding column. So comm -12 a b reports the lines common to a and b . - can be used in place of a file name to mean stdin. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/191924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106326/"
]
} |
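If the files ever arrive unsorted, the same pipeline can sort them on the fly with process substitution (assuming a shell such as bash, ksh or zsh that supports it):
comm -12 <(sort a) <(sort b) | comm -12 - <(sort c) | comm -12 - <(sort d)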
191,977 | I am using a minimal Debian system which does not have the top program installed. I tried to install top with sudo apt-get install top , but top is not a package name. It seems that top is a part of some other package. How can I find out which package I should install to get it? More generally, how can I find the package that contains a program? | The direct answer is procps . Here is how you can find this out for yourself: # Install apt-file, which allows you to search# for the package containing a filesudo apt-get install apt-file# Update the package/file mapping databasesudo apt-file update# Search for "top" at the end of a pathapt-file search --regexp '/top$' The output of the final command should look something like this: crossfire-maps: /usr/share/games/crossfire/maps/santo_dominion/magara/well/topcrossfire-maps-small: /usr/share/games/crossfire/maps/santo_dominion/magara/well/topliece: /usr/share/emacs/site-lisp/liece/styles/toplxpanel: /usr/share/lxpanel/profile/two_panels/panels/topprocps: /usr/bin/topquilt: /usr/share/quilt/top You can see that only procps provides an executable in your standard PATH, which gives a clue that it might be the right one. You can also find out more about procps to make sure like it seems like the right one: $ apt-cache show procpsPackage: procpsVersion: 1:3.3.3-3[...]Description-en: /proc file system utilities This package provides command line and full screen utilities for browsing procfs, a "pseudo" file system dynamically generated by the kernel to provide information about the status of entries in its process table (such as whether the process is running, stopped, or a "zombie"). . It contains free, kill, pkill, pgrep, pmap, ps, pwdx, skill, slabtop, snice, sysctl, tload, top, uptime, vmstat, w, and watch. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/191977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31995/"
]
} |
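For files that are already present and tracked by dpkg (not the case for top here, since it was missing), a quicker check that needs no extra package is asking dpkg which package owns the file; the output would look roughly like:
$ dpkg -S "$(command -v top)"
procps: /usr/bin/top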
192,005 | How I can capture the X11 protocol's traffic? I need find a way to capture X11 traffic between two machines and also between an X11 server and an X11 client on local machine. | You can talk X11 over TCP, or over a Unix domain socket or (on Linux) on a Unix domain socket in the abstract namespace. When DISPLAY is set to host:4 , short for tcp/host:4 , clients use TCP to connect to the server. The TCP port is then 6000 plus the display number (in that case 6004). In that case, you can capture the traffic with any network sniffer like tcpdump or wireshark by capturing the TCP traffic on that port. When $DISPLAY is only :4 (short for unix/:4 ), then clients use a unix domain socket. Either /tmp/.X11-unix/X4 or the same path in the ABSTRACT namespace (usually shown as @/tmp/.X11-unix/X4 in netstat output). Capturing the traffic is then trickier. If your X server listens on TCP (but they tend not to anymore nowadays), the easiest is to change DISPLAY to localhost:4 instead of :4 and capture the network traffic on port 6004 on the loopback interface. If it doesn't, you can use socat as a man in the middle that accepts connections as TCP and forwards them as unix or abstract : socat tcp-listen:6004,reuseaddr,fork unix:/tmp/.X11-unix/X4 You can then set $DISPLAY to localhost:4 and capture the network traffic as above or tell socat to dump it with -x -v . Now, if you can't change $DISPLAY and want to capture the traffic of an already running local X application that uses unix domain sockets, that's where it gets tricky. One approach could be to use strace (or the equivalent command on your system if not Linux) to trace the send/receive system calls that your application does to communicate with the X server. Here for xterm , I observe it does writev() , recvfrom() and recvmsg() system calls on file descriptor 3 for that. So I can do: strace -qqxxttts9999999 -e writev,recvmsg,recvfrom -p "$xterm_pid" 2>&1 | perl -lne ' if (($t,$f,$p) = /^([\d.]+) (writev|recvmsg|recvfrom)\(3, (.*)/) { @p = ($p =~ /\\x(..)/g); $dir = $f eq "writev" ? "O" : "I"; while (@p) {print "$dir $t 0000 " . join(" ", splice @p,0,64000)} }' | text2pcap -T6000,1234 -Dqt %s. - - | wireshark -ki - (or tshark -Vi - ). The idea being to extract the timestamp and bytes sent/received from the output of strace and use text2pcap to convert that into a pcap (adding dummy TCP headers on port 6000 with -T6000,1234 ) before feeding to wireshark . We also split packets to avoid the 64kiB limit on the maximum length of a pcap record. Note that for text2pcap to work properly with regards to getting the traffic direction right, you need a relatively recent version of wireshark. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84921/"
]
} |
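For the TCP case, a minimal capture sketch once DISPLAY points at localhost:4 (interface and capture file name are placeholders):
$ tcpdump -i lo -w x11-display4.pcap 'tcp port 6004'
$ wireshark x11-display4.pcap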
192,008 | Is there a tool that debugs routing tables on a Linux machine? I mean one that I can use by inputting an ip address into it, it'll take the existing routing table into account and output the matches from the table, so I can get an idea where the packets will go? | Use ip route get . From Configuring Network Routing : The ip route get command is a useful feature that allows you to query the route on which the system will send packets to reach a specified IP address, for example: # ip route get 23.6.118.140 23.6.118.140 via 10.0.2.2 dev eth0 src 10.0.2.15 cache mtu 1500 advmss 1460 hoplimit 64 In this example, packets to 23.6.118.140 are sent out of the eth0 interface via the gateway 10.0.2.2. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/192008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3570/"
]
} |
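Depending on the iproute2 version, ip route get can also answer "what if" questions by pinning the source address or the outgoing interface (the address and interface below are taken from the example above):
# ip route get 23.6.118.140 from 10.0.2.15
# ip route get 23.6.118.140 oif eth0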
192,012 | Why does sed -i , executed on a symlink, destroy that link and replace it with the destination file? How can I avoid this? eg. $ ls -l pet*-rw-rw-r-- 1 madneon madneon 4 mar 23 16:46 petlrwxrwxrwx 1 madneon madneon 6 mar 23 16:48 pet_link -> pet$ sed -i 's/cat/dog/' pet_link$ ls -l pet*-rw-rw-r-- 1 madneon madneon 4 mar 23 16:48 pet-rw-rw-r-- 1 madneon madneon 4 mar 23 16:49 pet_link And why isn't it considered a bug? | The -i / --in-place flag edits a file in place. By default, sed reads the given file, processes it, writing the output to a temporary file, then copies the temporary file over the original, without checking whether the original was a symlink. GNU sed has a --follow-symlinks flag, which makes it behave as you want: $ echo "cat" > pet$ ln --symbolic pet pet_link$ sed --in-place --follow-symlinks 's/cat/dog/' pet_link$ cat petdog | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/192012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107312/"
]
} |
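Where --follow-symlinks is not available (BSD or busybox sed), a common workaround is to resolve the link first and edit the target; this sketch assumes GNU coreutils' readlink (and note that BSD sed spells in-place editing as -i ''):
$ sed -i 's/cat/dog/' "$(readlink -f pet_link)"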
192,023 | For example, using this script: #!/bin/bashfor a in $@do echo $adone And running: ./script "x y" z returns: xyz and not: x yz Why is that? And how would I pass string arguments with spaces to bash? I use Bash 4.3.33 . | Quote $@ : #!/bin/bashfor a in "$@"do echo "$a"done Output: x yz | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/192023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104950/"
]
} |
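For contrast, a quick sketch of what "$*" would do in the same loop — it joins all arguments into a single word, which is why the distinction matters:
for a in "$*"; do echo "$a"; done
# ./script "x y" z would then print the single line: x y z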
192,026 | I'm using a Thinkpad USB Trackpoint keyboard, and when I try to scroll, using middle-click and the trackpoint, sometimes it performs a middle-click paste. I don't recall this ever happening with the built in keyboard on my laptop. Is there a way to configure the middle "mouse" button, so it doesn't misinterpret middle click scrolling as a middle click paste? Note:I don't want to disable the middle mouse button. I want to be able to scroll. | Here is the Ubuntu Wiki entry on how to disable the middle mouse button. This should work on any system using X. Example: Disabling middle-mouse button paste on a scrollwheel mouse Scrollwheel mice support a middle-button click event when pressing the scrollwheel. This is a great feature, but you may find it irritating. Fortunately it can be disabled. First, you need to know the id of the mouse, like this: $ xinput list | grep 'id='"Virtual core pointer" id=0 [XPointer]"Virtual core keyboard" id=1 [XKeyboard]"AT Translated Set 2 keyboard" id=2 [XExtensionKeyboard]"Macintosh mouse button emulation" id=3 [XExtensionPointer]"Logitech USB-PS/2 Optical Mouse" id=4 [XExtensionPointer] My mouse has the Logitech logo printed on it, so I gather I need the last entry. I can view the current button mapping thusly: $ xinput get-button-map 41 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 10 Really, only the first three numbers have meaning for me. They represent the left, middle, and right mouse buttons. $ xinput get-button-map 4 I can turn the middle mouse button off by setting it to 0: $ xinput set-button-map 4 1 0 3 Or I can turn the middle-mouse button into a left-mouse button by setting it to 1: $ xinput set-button-map 4 1 1 3 To make this set on a per-user basis, I can plug that line into my ~/.xstartup or other init file. It can also be done via configuring a matching InputClass section on xorg.conf. The above example does not disable scrolling; if you want to do that see here . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/192026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29617/"
]
} |
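The xorg.conf route mentioned at the end could look roughly like the untested sketch below; it assumes the evdev driver, and the MatchProduct string must be replaced with the device name shown by xinput list:
Section "InputClass"
    Identifier "Trackpoint keyboard: disable middle-button click"
    MatchProduct "Lenovo ThinkPad Compact USB Keyboard with TrackPoint"
    Driver "evdev"
    Option "ButtonMapping" "1 0 3"
EndSection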
192,042 | I am currently looking for ways to suppress error messages from a command in Linux, in particular, the command cp . I do: root@ubuntu:~$ cp /srv/ftp/201*/wha*/*.jj ~/.cp: cannot stat `/srv/ftp/201*/wha*/*.jj': No such file or directory How do I suppress the error message that gets printed on the screen? I.e., I don't want to see this error message on my monitor. | To suppress error output in bash , append 2>/dev/null to the end of your command. This redirects filehandle 2 (STDERR) to /dev/null . There are similar constructs in other shells, though the specific construct may vary slightly. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/192042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107013/"
]
} |
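Applied to the command from the question, that becomes (note this hides every error cp might print, not just the failed glob):
root@ubuntu:~$ cp /srv/ftp/201*/wha*/*.jj ~/. 2>/dev/null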