Columns: source_id (int64, 1–74.7M), question (string, 0–40.2k characters), response (string, 0–111k characters), metadata (dict)
165,369
Today I'm learning some basic git knowledge by reading this doc online: http://git-scm.com/book/en/v2/Git-Basics-Viewing-the-Commit-Hi In that chapter, I started learning to use git log --pretty=format:" " to show log info to my taste. But somehow, I saw in the format table three similar options: %H for Commit Hash, %P for Parent Hash and %T for Tree Hash. I experimented with them on my command line, and it turns out they are all hash values of the same length but with different values. I googled and searched Stack Overflow, with no obvious hints so far. I have an idea about this hash value: it's a checksum of that git commit. But what do the Parent Hash and Tree Hash do? PS: Ah, I have some ideas now. Did the Parent Hash mean the hash value of the direct origin of a branch?
Parent hashes:

$ git log --graph
*   commit c06c4c912dbd9ee377d14ec8ebe2847cf1a3ec7e
|\  Merge: 79e6924 3113760
| | Author: linjie <[email protected]>
| | Date:   Mon Mar 14 16:02:09 2016 +0800
| |
| |     commit5
| |
| |     Merge branch 'dev'
| |
| * commit 31137606f85d8960fa1640d0881682a081ffa9d0
| | Author: linjie <[email protected]>
| | Date:   Mon Mar 14 16:01:26 2016 +0800
| |
| |     commit3
| |
* | commit 79e69240ccd218d49d78a72f33002fd6bc62f407
|/  Author: linjie <[email protected]>
|   Date:   Mon Mar 14 16:01:59 2016 +0800
|
|       commit4
|
* commit 7fd4e3fdddb89858d925a89767ec62985ba07f3d
| Author: linjie <[email protected]>
| Date:   Mon Mar 14 16:01:00 2016 +0800
|
|     commit2
|
* commit 316dd3fb3c7b501bc9974676adcf558a18508dd4
  Author: linjie <[email protected]>
  Date:   Mon Mar 14 16:00:34 2016 +0800

      commit1

$ git log --pretty=format:'%<(82)%P %s'
79e69240ccd218d49d78a72f33002fd6bc62f407 31137606f85d8960fa1640d0881682a081ffa9d0 commit5
7fd4e3fdddb89858d925a89767ec62985ba07f3d commit4
7fd4e3fdddb89858d925a89767ec62985ba07f3d commit3
316dd3fb3c7b501bc9974676adcf558a18508dd4 commit2
 commit1

You can see that commit4 and commit3 are parents of commit5, commit2 is the parent of commit3 and commit4, and commit1 is the parent of commit2.

Tree hash:

$ git log --pretty=format:'%T %s'
f3c7cee96f33938631a9b023ccf5d8743b00db0e commit5
e0ecb42ae45ddc91c947289f928ea5085c70b208 commit4
d466aea17dc07516c449c58a73b2dc3faa9d11a1 commit3
b39f2e707050e0c5bbb3b48680f416ef05b179ba commit2
5706ec2b32605e27fa04cbef37d582325d14dda9 commit1
$ git cat-file -p f3c7ce
100644 blob 8bb2e871e94c486a867f5cfcbc6f30d004f6a9e5    dev
100644 blob 47f16c8e00adba77ec5c176876e99c8e9f05d69b    master
$ git cat-file -p 5706ec
100644 blob fc0bfde0d44bb4d6c7d27b6e587ebedd34ba5911    master

The command's function: git cat-file -p pretty-prints the contents of <object> based on its type. In git, all the content is stored as tree and blob objects, with trees corresponding to UNIX directory entries and blobs corresponding more or less to inodes or file contents. A single tree object contains one or more tree entries, each of which contains a SHA-1 pointer to a blob or subtree with its associated mode, type, and filename. Git normally creates a tree by taking the state of your staging area or index and writing a series of tree objects from it. Commit objects hold the information about who saved the tree object, when they saved it, and why it was saved. This is the basic information that the commit object stores for you.

Conclusion: The commit hash, parent hash and tree hash are all SHA-1 hashes. The commit hash and parent hash both name commit objects; a parent hash is simply the commit hash of a commit that has a child. The tree hash names a tree object.

Reference: Git Internals - Git Objects; git-cat-file - Provide content or type and size information for repository objects
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
165,374
The man for cpuset doesn't seem to clearly list how to figure out which numbers map to which processing units. My current machine has two Intel Xeon E5645s, each of which has 6 cores and hyperthreading enabled, so I have 24 total processing units I can refer to with cpusets. My challenges are 1) determine which cpuset ID numbers map to which processor 2) determine which cpuset ID numbers are paired (e.g. siblings on a core). Are the numbers that lscpu outputs the same identifiers I should use to refer to cpuset processors? If so, it seems the numbers are alternated here, and this answers (1) with "evens are one processor, odds are the other processor", but I'm not sure if I'm reading it correctly.

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 44
Stepping:              2
CPU MHz:               2393.964
BogoMIPS:              4788.01
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23

lstopo from the hwloc package seems to show me the answer to (2), and if I'm reading the man page correctly the P#... bits are the identifier "used by the OS", which leads me to believe these are the ones I need to pass to cpusets. So limiting a process to cpus 0 and 12 would be allowing use of two threads on the same core, while limiting it to cpus 0 and 2 would be two threads on two different cores. Does that seem correct?

$ lstopo
Machine (35GB)
  NUMANode L#0 (P#0 18GB) + Socket L#0 + L3 L#0 (12MB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#12)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      PU L#2 (P#2)
      PU L#3 (P#14)
    L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
      PU L#4 (P#4)
      PU L#5 (P#16)
    L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
      PU L#6 (P#6)
      PU L#7 (P#18)
    L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
      PU L#8 (P#8)
      PU L#9 (P#20)
    L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
      PU L#10 (P#10)
      PU L#11 (P#22)
  NUMANode L#1 (P#1 18GB) + Socket L#1 + L3 L#1 (12MB)
    L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
      PU L#12 (P#1)
      PU L#13 (P#13)
    L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
      PU L#14 (P#3)
      PU L#15 (P#15)
    L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
      PU L#16 (P#5)
      PU L#17 (P#17)
    L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
      PU L#18 (P#7)
      PU L#19 (P#19)
    L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
      PU L#20 (P#9)
      PU L#21 (P#21)
    L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
      PU L#22 (P#11)
      PU L#23 (P#23)
  HostBridge L#0
    PCIBridge
      PCI 14e4:163a
        Net L#0 "eth0"
      PCI 14e4:163a
        Net L#1 "eth1"
    PCIBridge
      PCI 102b:0532
    PCI 8086:2921
      Block L#2 "sda"
    PCI 8086:2926
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54389/" ] }
165,423
I use this rsync invocation to back up my home directory:

rsync -aARrx --info= --force --delete --info=progress2 -F "$USER_HOME" "$BACKUP_MNTPOINT"

The rsync man page says that -a implies -g and -o (among other switches), which should preserve ownership. However I've noticed that if a directory does not exist under $BACKUP_MNTPOINT/$USER_HOME, it is created with root:root ownership instead of the correct one. (This only happens with directories right under $BACKUP_MNTPOINT/$USER_HOME.) Why is that? $BACKUP_MNTPOINT is a locally mounted drive. $BACKUP_MNTPOINT/$USER_HOME does have the right ownership and permissions. Neither $USER_HOME nor $BACKUP_MNTPOINT ends with a slash. Both the source and the target filesystems are XFS, and running mkdir $BACKUP_MNTPOINT/$USER_HOME creates a directory with the expected ownership.
I had a similar problem when using rsync to back up my system to my server. I used:

rsync -aAXSHPr \
    -e ssh \
    --rsync-path="sudo /usr/bin/rsync/" \
    --numeric-ids \
    --delete \
    --progress \
    --exclude-from="/path/to/file/that/lists/excluded/folders.txt" \
    --include-from="/path/to/file/that/lists/included/folders.txt" \
    / USER@SERVER:/path/to/folder/where/backup/should/go/

The solution is that there is not really a problem. I suspect that you aborted the rsync process once you saw that it created folders with the wrong permissions set. The crux is that rsync only sets the permissions of a parent folder once it is done syncing all of its subfolders and files.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74534/" ] }
165,447
I have a usb lamp which I specifically bought in order to turn it off programmatically at a certain time, thus I need to remove the power to its usb port. I believe I have a usb-hub at usb6. The lamp is connected to one of the ports in this hub:

#myhost
$ lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
......
Bus 008 Device 006: ID 050d:0234 Belkin Components F5U234 USB 2.0 4-Port Hub

Here's what I've tried: Two solutions are here, the first suggests:

echo disabled > /sys/bus/usb/devices/usb1/power/wakeup
echo suspend > /sys/bus/usb/devices/usb1/power/level # turn off

but I get write error: Invalid argument when trying to write to /sys/bus/usb/devices/usb1/power/level:

$ sudo bash -c 'echo disabled > /sys/bus/usb/devices/usb6/power/wakeup'
$ echo suspend|sudo tee /sys/bus/usb/devices/usb6/power/level
suspend
tee: /sys/bus/usb/devices/usb6/power/level: Invalid argument
$ sudo bash -c 'echo suspend> /sys/bus/usb/devices/usb6/power/level'
bash: line 0: echo: write error: Invalid argument

The second solution:

sudo bash -c 'echo 0 > /sys/bus/usb/devices/usb6/power/autosuspend_delay_ms; echo auto > /sys/bus/usb/devices/usb6/power/control'

which does turn off power to the usb-hub device. I was also trying to follow this: But the output of lsusb -t just hangs:

$ lsusb -t
4-1:0.0: No such file or directory
4-1:0.1: No such file or directory
^C

Which prevents me from using this method to get the '2-1.1' part to this:

echo '2-1.1' > /sys/bus/usb/drivers/usb/unbind

Is there an alternative way of getting this information? Alternatively, is there a way to shut off power to the entire usb subsystem? Something like modprobe -r usb_etc? My kernel is:

$ uname -r
3.2.0-4-amd64
See Controlling a USB power supply (on/off) with linux. Short version: for newer kernels "suspend" does not work anymore; instead use:

echo "0" > "/sys/bus/usb/devices/usbX/power/autosuspend_delay_ms"
echo "auto" > "/sys/bus/usb/devices/usbX/power/control"

But it doesn't literally cut the power: it signals the device to power off, and it's up to the device to implement power management and do the right thing. There are a lot of details in the official kernel documentation, which explains the various files in /sys/bus/usb/devices/.../power/ and how to manage the devices and ports. For things that are not real USB devices (does your USB lamp show up in lsusb?) you might be out of luck. I have tried myself with a USB lamp and with a GPS logger that charges its battery and transfers data through USB (it shows up as a cp210x USB-to-serial device), and neither powers off. I can "disconnect" the GPS with

echo '5-4.6' > /sys/bus/usb/drivers/usb/unbind

and reconnect it with

echo '5-4.6' > /sys/bus/usb/drivers/usb/bind

but the battery charging light is always on. It seems, though, that some hubs do it properly.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50978/" ] }
165,455
When I use the ls command with the option -l , the first string of letters gives the info about each file, and the first letter in this string gives the file's type. ( d = directory, - = standard file, l = link, etc.) How can I filter the files according to that first letter?
You can filter out everything but directories using grep this way: ls -l | grep '^d' the ^ indicates that the pattern is at the beginning of the line. Replace d with - , l , etc., as applicable. You can of course use other commands to directly search for specific types (e.g. find . -maxdepth 1 -type d ) or use ls -l | sort to group similar types together based on this first character, but if you want to filter you should use grep to only select the appropriate lines from the output.
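If you want a quick overview rather than a filtered listing, the same first-character idea extends to counting entries per type; a rough sketch (assuming the first line of ls -l is the usual "total" header) is:

ls -l | awk 'NR > 1 { count[substr($0, 1, 1)]++ } END { for (t in count) print t, count[t] }'

which prints one line per type character (d, -, l, ...) with the number of entries of that type.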
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165455", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87511/" ] }
165,463
So I have a repo with some of my config files and I'm trying to create a makefile to install them in the homedir. The problem I have is that when I run the following command straight in bash install -m 755 -d ~/path/to/dotfilesDir/ ~/ seemingly nothing happens while install -m 755 ~/path/to/dotfilesDir/{file1,file2,...} ~/ works as intended. Why doesn't the first (easier and cleaner) solution work?
From a look at the man page, it seems that install will not do what you want. Indeed, the Synopsis section indicates a usage of the form:

install [OPTION]... -d DIRECTORY...

and later on, the man page says:

-d, --directory
       treat all arguments as directory names; create all components of the specified directories

So it seems to me that the point of this option is to be able to install a complicated (but empty) directory structure à la mkdir -p. You can accomplish what you want with a loop:

for file in /path/to/DotFiles/dir/*; do
    install -m 755 "$file" ~/
done

Or, if there are many levels under /path/to/DotFiles/dir, you can use find (with GNU install, -t names the target directory, which lets find batch the files onto one command line):

find /path/to/DotFiles/dir/ -type f -exec install -m 755 -t ~/ {} +
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77635/" ] }
165,477
I would like to map my CapsLock to Escape . How do I represent CapsLock in .vimrc ? I know to map space to a command I would do something like this: :map <space> viw How would I map CapsLock to Escape without doing a registry hack - I'm looking for a Vim command? If that is not possible without a hack or additional software I would like to assign the shortcut j j to ESC in .vimrc . I'm currently doing this: inoremap jj <esc> However if I'm in visual mode this does not work. How could I make j j emulate the escape key?
I don't think you can map CapsLock from within Vim. You can remap it within X using setxkbmap:

setxkbmap -option caps:swapescape

For remapping in the console, if your distro uses systemd, you can use a custom keyboard layout in /etc/vconsole.conf as described on the Arch Wiki, and for other init systems see this U&L answer.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87762/" ] }
165,485
I would like to split a .ape album into individual tracks in .flac format using a .cue sheet. For this I followed a tutorial I found. In short, I pass this command to the terminal:

cuebreakpoints example.cue | shnsplit -o flac example.ape

But I get the following error back:

shnsplit: warning: failed to read data from input file using format: [ape]
shnsplit: + you may not have permission to read file: [example.ape]
shnsplit: + arguments may be incorrect for decoder: [mac]
shnsplit: + verify that the decoder is installed and in your PATH
shnsplit: + this file may be unsupported, truncated or corrupt
shnsplit: error: cannot continue due to error(s) shown above

Unfortunately I don't know how to overcome this issue. One thing I think can be discarded is the file being corrupt, since I had the same error with another .ape, and I have followed this procedure with original .flac files with no problem. How can I solve this problem?
shntool on Ubuntu 14.04:

sudo add-apt-repository -y ppa:flacon
sudo apt-get update
sudo apt-get install -y flacon
shntool split -f *.cue -o flac -t '%n - %p - %t' *.ape

flacon is a GUI for shntool, but it comes with all the codecs it needs... In particular, the flacon PPA furnishes the mac package (Monkey's Audio Console), on which flacon depends, which has the mac CLI tool, which seems to be the main missing ingredient.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89990/" ] }
165,554
In the past I have used Virtual Box which has very good support for sharing a folder on the host with a Windows guest. I am looking for similar functionality for QEMU. The documentation suggests to expose a Samba server running somewhere in the network, or use the -net user,smb=/path/to/folder to start a samba server. I had no luck with the -net user,smb option of QEMU. All it does is starting smbd (which conflicts with another service running locally due to a port conflict). Suffice to say, this is unusable, especially with multiple guests in mind. (For Linux, -virtfs (Plan 9) can be used for easy folder sharing.) Other problems with Samba is that it is not limited to folder sharing, it also does printer sharing, user mapping and whatsnot. All I need is to share one (or more?) folders with the Windows guest. Does there exist an alternative folder sharing method for QEMU that works with a Windows guest? Or is there a way to configure Samba to restrict itself to a very limited set of features and integrate it into QEMU? It should: Not everyone in the network should be able to access the folder. local users included (if feasible). Not provide other functionality (printer sharing). Use case: expose a git directory to Windows, compile it in Windows and use Linux for analysis. Have an acceptable speed, Windows uses virtio-scsi and virtio-net. Be able to share a folder from a Linux host with a Windows 7 guest.
QEMU's built-in Samba service The not-functioning -net user,smb option was caused by an incompatibility with newer Samba versions (>= 4). This is fixed in QEMU v2.2.0 and newer with these changes: b87b8a8 slirp/smb: Move ncalrpc directory to tmp (since v2.1.0) 44d8d2b net/slirp: specify logbase for smbd (since v2.2.0) 7912d04 slirp/smbd: modify/set several parameters in generated smbd.conf (since v2.2.0, disables printer too) (Debian has backported the latter two patches to 2.1+dfsg-6 which is present in Jessie.) Usage You can export one folder as \\10.0.2.4\qemu when using User networking: qemu-system-x86_64 \ -net user,smb=/absolute/path/to/folder \ -net nic,model=virtio \ ... When QEMU is successfully started with these options, a new /tmp/qemu-smb.*-*/ directory will be created containing a smb.conf . If you are fast enough, then this file could be modified to make paths read-only or export more folders. Mode of operation The samba daemon is executed whenever ports 139 or 445 get accessed over a "user" network. Communication happens via standard input/output/error of the smbd process. This is the reason why newer daemons failed, it would write its error message to the pipe instead of protocol messages. Due to this method of operation, the daemon will not listen on host ports, and therefore will only be accessible to the guest. So other clients in the network and even local users cannot gain access to folders using this daemon. Since QEMU v2.2.0 printer sharing is completely disabled through the samba configuration, so another worry is gone here. The speed depends on the network adapter, so it is recommended to use the virtio netkvm driver under Windows. Also note that the daemon is executed by its absolute path (typically /usr/sbin/smbd ) as specified at compile time (using the --smbd option). Whenever you need to try a new binary or interpose smbd , you will need to modify the file at that path. Other caveats Executables ( *.exe ) must be executable on the host ( chmod +x FILE ) for the guest to have execute permissions. To allow execution of any file, add the acl allow execute always = True option to a share. Example read-only smb.conf configuration which allows execution of any file (based on QEMU v2.2.0): ...[qemu]path= /home/peter/windows read only= yes guest ok=trueforce user= peter acl allow execute always = True
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8250/" ] }
165,579
I am using tmux after logging into our university server. I have multiple screens (created with Ctrl - B c ), some for editing different files. Some for running programs. I go through the tasks with Ctrl - B n and Ctrl - B p , but sometimes this takes long to find the right one. Is there a shortcut to a screen running some specific program. Or is there some other way to manage the screens (sometimes ten or more)?
You can get a list of "screens" (tmux windows) with Ctrl + B w. This shows the main program running without any options, so that helps a bit but not much. You should name your windows with Ctrl + B , after you make them; that will make the list much more useful. This is what I get after Ctrl + B w; you can select an entry with ↑ and ↓ followed by Return, or by clicking with the mouse.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90029/" ] }
165,588
Is there any way to view or manipulate the mount namespace for an arbitrary process? For example, a docker container is running which has a local mount to an NFS server. It can be seen from inside the container, but on the outside, the host has no knowledge of it. With network namespaces this is doable. e.g. pipework However, I see nothing about this for mount namespaces. Is there an API or sysfs layer exposed to view these mounts and manipulate or create new ones?
Yes. You can look at its /proc/$PID/mountinfo, or else you can use the findmnt -N switch, about which findmnt --help says:

 -N, --task <tid>       use alternative namespace (/proc/<tid>/mountinfo file)

findmnt also tracks the PROPAGATION flag, a mountinfo field which reports on exactly this information: which processes share which mounts. Also, you can always nsenter any type of namespace you like, provided you have the correct permissions, of course.

nsenter --help
Usage: nsenter [options] <program> [args...]

Options:
 -t, --target <pid>     target process to get namespaces from
 -m, --mount  [=<file>] enter mount namespace
 -u, --uts    [=<file>] enter UTS namespace (hostname etc)
 -i, --ipc    [=<file>] enter System V IPC namespace
 -n, --net    [=<file>] enter network namespace
 -p, --pid    [=<file>] enter pid namespace
 -U, --user   [=<file>] enter user namespace
 -S, --setuid <uid>     set uid in user namespace
 -G, --setgid <gid>     set gid in user namespace
 -r, --root   [=<dir>]  set the root directory
 -w, --wd     [=<dir>]  set the working directory
 -F, --no-fork          do not fork before exec'ing <program>

 -h, --help     display this help and exit
 -V, --version  output version information and exit

For more details see nsenter(1).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1549/" ] }
165,589
I'm trying to edit my nginx.conf file programmatically, which contains a line like this: access_log /var/log/nginx/access.log; which I want to look like this: access_log /dev/stdout; I believe the regex ^\s*access_log([^;]*); will capture the /var/log/nginx/access.log part in a capture group, but I'm not sure how to correctly replace the capture group with sed? I've tried echo " access_log /var/log/nginx/access.log;" | sed 's/^\s*access_log([^;]*);/\1\\\/dev\\\/stdout/' but I'm getting the error: sed: -e expression #1, char 45: invalid reference \1 on `s' command's RHS if I try sed -r then there is no error, but the output is not what I expect: /var/log/nginx/access.log\/dev\/stdout I'm trying to be smart with the capture group and whatnot and not search directly for "access_log /var/log/nginx/access.log;" in case the distribution changes the default log file location.
A couple of mistakes there. First, since sed uses basic regular expressions, you need \( and \) to make a capture group. The -r switch enables extended regular expressions which is why you don't get the error. See Why do I need to escape regex characters in sed to be interpreted as regex characters? . Second, you are putting the capture group in the wrong place. If I understand you correctly, you can do this: sed -e 's!^\(\s*access_log\)[^;]*;!\1 /dev/stdout;!' your_file Note the use of ! as regex delimiters to avoid having to escape the forward slashes in /dev/stdout .
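To sanity-check the expression before touching the real config, you can feed it a sample line on stdin (the indented line here is just an invented example, not your actual file):

$ printf '    access_log /var/log/nginx/access.log;\n' | sed -e 's!^\(\s*access_log\)[^;]*;!\1 /dev/stdout;!'
    access_log /dev/stdout;

Once the output looks right, run the same sed with -i (or -i.bak to keep a backup) against nginx.conf.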
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17420/" ] }
165,718
This question/answer has some good solutions for deleting identical lines in a file, but they won't work in my case since the otherwise duplicate lines have a timestamp. Is it possible to tell awk to ignore the first 26 characters of a line in determining duplicates? Example:

[Fri Oct 31 20:27:05 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:10 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:13 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:16 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:21 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:22 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:23 2014] The Brown Cow Jumped Over The Moon
[Fri Oct 31 20:27:24 2014] The Brown Cow Jumped Over The Moon

Would become:

[Fri Oct 31 20:27:24 2014] The Brown Cow Jumped Over The Moon

(keeping the most recent timestamp)
You can just use uniq with its -f option: uniq -f 4 input.txt From man uniq : -f, --skip-fields=N avoid comparing the first N fields Actually this will display the first line: [Fri Oct 31 20:27:05 2014] The Brown Cow Jumped Over The Moon If that is a problem you can do: tac input.txt | uniq -f 4 or if you don't have tac but your tail supports -r : tail -r input.txt | uniq -f 4
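If you would rather key on character position (the "ignore the first 26 characters" framing of the question) instead of whitespace-separated fields, an awk sketch along these lines should also work, keeping the last line of each message:

tac input.txt | awk '!seen[substr($0, 27)]++' | tac

Here substr($0, 27) drops the first 26 characters (the bracketed timestamp), and the seen array keeps only the first occurrence of each remaining text, which after the two tac reversals is the most recent one. Unlike uniq, this also deduplicates lines that are not adjacent.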
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20107/" ] }
165,766
When I have an output stream:

a
b
c
d
e

How can I double the newlines:

a

b

c

d

e
sed G is a well known one-liner for that. Performance-wise, the most effective with the standard Unix tool chest would probably be: paste -d '\n' - /dev/null If you don't want to add an empty line after the last line: sed '$!G' To add the empty lines before the input lines: paste -d '\n' /dev/null - Or: sed 'i\/'
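For comparison, an equivalent in awk (printing an extra blank line after every input line, including the last) could look like:

awk '{ print; print "" }'

and to skip the trailing blank line, something like awk 'NR > 1 { print "" } { print }' inserts the blank lines only between input lines.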
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33055/" ] }
165,771
I have created a virtual image for Scientific Linux and came across this after I finished installing it:

[root@ftpserver home]# pwd
/home
[root@ftpserver home]# ls

When I cd into ~ I get this:

[root@ftpserver ~]# pwd
/root

What is the overall difference between /home and /root?
According to the Filesystem Hierarchy Standard (FHS) : /home : User home directories (optional)/root : Home directory for the root user (optional) A typical non-root user's home directory would be /home/$USER . /root is also special in that (in many distros) /root is readable only to root ( 700 ), but a normal user's home directory has read access to others ( 755 ) as well.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72902/" ] }
165,858
I have two servers. One of them has 15 million text files (about 40 GB). I am trying to transfer them to another server. I considered zipping them and transferring the archive, but I realized that this is not a good idea. So I used the following command: scp -r usrname@ip-address:/var/www/html/txt /var/www/html/txt But I noticed that this command just transfers about 50,000 files and then the connection is lost. Is there any better solution that allows me to transfer the entire collection of files? I mean to use something like rsync to transfer the files which didn't get transferred when the connection was lost. When another connection interrupt would occur, I would type the command again to transfer files, ignoring those which have already been transferred successfully. This is not possible with scp , because it always begins from the first file.
As you say, use rsync:

rsync -azP /var/www/html/txt/ username@ip-address:/var/www/html/txt

The options are:

-a : enables archive mode, which preserves symbolic links and works recursively
-z : compresses the data transfer to minimise network usage
-P : displays a progress bar and enables you to resume partial transfers

As @aim says in his answer, make sure you have a trailing / on the source directory (on both is fine too). More info from the man page.
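Since the connection in question keeps dropping, one rough approach (an illustrative sketch, not part of the original answer) is to simply re-run rsync until it completes; because rsync only copies what is missing or changed, each retry picks up where the last one stopped:

until rsync -azP /var/www/html/txt/ username@ip-address:/var/www/html/txt; do
    echo "transfer interrupted, retrying in 30 seconds" >&2
    sleep 30
done

The until loop exits as soon as rsync returns success (exit status 0).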
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48597/" ] }
165,862
Since a few days ago (I suspect since I upgraded to GNOME 3.14) on Arch Linux I can't print anymore. If I open the printing panel of GNOME Control Center I get a message like this (translated from Italian): "The system service for printing seems not to be available." So from a terminal I tried:

$ sudo systemctl start cups
Failed to start cups.service: Unit cups.service failed to load: No such file or directory.

I also tried reinstalling cups but had no luck. I also googled around and tried the various solutions proposed, but none of them works for me.
As of cups v. 2.0.0 the service name has been changed. You'll have to disable the old service:

systemctl disable cups.service

before enabling and starting the new one:

systemctl enable org.cups.cupsd.service
systemctl daemon-reload
systemctl start org.cups.cupsd.service
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57287/" ] }
165,865
I have a directory that contains data shared between a number of users. Access to this directory and anything underneath, will be controlled by the directory's group, which will be added to the users in question.As such I created the folder "sticky group" chmod g+s set.The directory will contain a tree structure with directories and files, with the total amount of files likely being a few million. The files will be fairly small, I don’t anticipate anything bigger than 50MB. My problem is that the owner of the file or directory is still the user that created it. As such, even if i should remove that user from the access group, I would not remove his access completely. So: Are there other options I missed for ensuring that all files and sub-directories have the same owner? I expect I could periodically surf through the entire directory with a cron-job, but that strikes me as inefficient for what is essentially a once-pr-file command. I found an example using INotify but that strikes me as high-maintenance, since it requires scripting. I haven't been able to figure out if ACL can help me with forced ownership. Is there a smarter way to do this? What I want is to have a directory that can be shared by adding a group to a user. Anything created in this directory inherits the permission scheme from its parent. If there is a better way than what I’m attempting, I’m all ears.
Setting a default owner "automatically" would require a directory setuid behaving like setgid . However, while this can be configured on FreeBSD, other UNIX & Linux systems just ignore u+s . In your case however, there might be another solution. What I want is to have a directory that can be shared by adding a group to a user. Anything created in this directory inherits the permission scheme from its parent. If there is a better way than what I’m attempting, I’m all ears. So, basically, from what I see, you want to control the access to a directory using the groups mechanism. However, this does not require you to restrict the permissions in the whole directory structure. Actually, the directory --x execution bit could be just what you need. Let me give you an example. Assuming that... The group controlling the access to the group_dir directory is ourgroup . Only people in the ourgroup group can access group_dir . user1 and user2 belong to ourgroup . The default umask is 0022. ... consider the following setup: drwxrws--- root:ourgroup |- group_dir/drwxr-sr-x user1:ourgroup |---- group_dir/user1_submission/drwxr-sr-x user2:ourgroup |---- group_dir/user2_submission/-rw-r--r-- user2:ourgroup |-------- group_dir/user2_submission/README Here, let's assume every item was created by its owner. Now, in this setup: All directories can be browsed freely by everyone in ourgroup . Anyone from the group can create, move, delete files anywhere inside group_dir (but not deeper). Anyone who's not in ourgroup will be blocked at group_dir , and will therefore be unable to manipulate anything under it. For instance, user3 (who isn't a member of ourgroup ), cannot read group_dir/user2_submission/README (even though he has r-- permission on the file itself). However, there's a little problem in this case: because of the typical umask, items created by users cannot be manipulated by other members of the group. This is where ACLs come in. By setting default permissions, you'll make sure everything's fine despite the umask value: $ setfacl -dRm u::rwX,g::rwX,o::0 group_dir/ This call sets: Default rw(x) permissions for the owner. Default rw(x) permissions for the group. No permissions by default for the others. Note that since the others can't access group_dir anyway, it does not really matter what their permissions are below it. Now, if I create an item as user2 : $ touch group_dir/user2_submission/AUTHORS$ ls -l group_dir/user2_submission/AUTHORSrw-rw---- user2:ourgroup group_dir/user2_submission/AUTHORS With this ACL in place, we can try rebuilding our previous structure: drwxrws---+ root:ourgroup |- group_dir/drwxrws---+ user1:ourgroup |---- group_dir/user1_submission/drwxrws---+ user2:ourgroup |---- group_dir/user2_submission/-rw-rw----+ user2:ourgroup |-------- group_dir/user2_submission/README Here again, each item is created by its owner. Additionally, if you'd like to give a little bit more power/security to those using the directory, you might want to consider a sticky bit. This would, for instance, prevent user1 from deleting user2_submission (since he has -w- permission on group_dir ) : $ chmod +t group_dir/ Now, if user1 tries to remove user2 's directory, he'll get a lovely Operation not permitted . 
Note however that while this prevents directory structure modifications in group_dir, files and directories below it are still accessible:

user1@host $ rm -r user2_submission
Operation not permitted
user1@host $ > user2_submission/README
user1@host $ file user2_submission/README
user2_submission/README: empty

(uh-oh)

Another thing to take into account is that the ACLs we used set up default permissions. It is therefore possible for the owner of an item to change the permissions associated with it. For instance, user2 can perfectly well run...

$ chmod g= user2_submission/ -R

or

$ chgrp nobody user2_submission -R

... hence making his full submission directory unavailable to anyone in the group. However, since you're originally willing to give full rws access to anyone in the group, I'm assuming you're trusting these users, and that you wouldn't expect too many malicious operations from them.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77210/" ] }
165,875
How do I resume a partially downloaded file using a Linux commandline tool? I downloaded a large file partially, i.e. 400 MB out of 900 MB due to power interruption, but when I start downloading again it resumes from scratch.How do I start from 400 MB itself?
Since you didn't specify, I'm assuming you are using wget to download the file. If this is the case, try using it with the -c option (e.g. wget -c <URL>). Please note that in case the protocol used is ftp (the URL looks like ftp://...) there is a chance the remote server uses an old/ancient ftp daemon which doesn't support resuming downloads (ftp daemons have supported it for more than a decade, so this is just a small chance). If this is the case, though, you may be out of luck. On the other hand, you should have no issues if the protocol used is http. (UPDATE: According to other experts (including Gilles in the comments below), resuming over http is also subject to server support, so this applies to both ftp and http.) Good luck.
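If curl is available instead of wget, a comparable invocation (a general-purpose alternative, not part of the original answer) is:

curl -C - -O <URL>

where -C - tells curl to look at the already-downloaded portion of the file and continue from that offset, and -O saves the file under its remote name. The same caveat about server-side support applies.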
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/165875", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89599/" ] }
165,916
When I run yum repolist, I don't see EPEL listed.

# yum repolist
Loaded plugins: downloadonly, fastestmirror, protectbase, refresh-packagekit,
              : security
Loading mirror speeds from cached hostfile
 * base: centos.mia.host-engine.com
 * extras: mirror-centos.hostingswift.com
 * updates: centos-mirror.jchost.net
0 packages excluded due to repository protections
repo id        repo name            status
base           CentOS-6 - Base      6,518
extras         CentOS-6 - Extras    35
updates        CentOS-6 - Updates   209
repolist: 6,762

I followed http://xmodulo.com/how-to-set-up-epel-repository-on-centos.html Those instructions show me how to install the RPM and the GPG key. Which I've done:

# sudo rpm -Uvh http://mirrors.kernel.org/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
Retrieving http://mirrors.kernel.org/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
Preparing...                ########################################### [100%]
        package epel-release-6-8.noarch is already installed
Have you ensured that it's enabled? If a repo isn't enabled then it won't show up in repolist. Check the files in /etc/yum.repos.d/*.repo. For example:

[root@xxx01 ~]# yum repolist 2>&1 | grep epel
epel           EPEL Repo            11,148

Shows that EPEL is installed and listed in repolist, so I go to disable it and check repolist again:

[root@xxx01 ~]# sed -i 's/enabled=1/enabled=0/g' /etc/yum.repos.d/epel.repo
[root@xxx01 ~]# yum repolist 2>&1 | grep epel
[root@xxx01 ~]#

EDIT: You can also temporarily enable the repo by using the --enablerepo option, which overrides the enabled setting in the repo's config.
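As a quick illustration of that last point (the package name here is only an example):

yum --enablerepo=epel install htop

installs from EPEL for that one transaction, even while the repo stays disabled in its .repo file.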
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66709/" ] }
165,919
This post is about removing multiple files from the remote server, when a passwordless sftp connection is set up. I have the code below. Only the first file in the variable $file_list gets deleted, when I have the variable set as:

$file_list="file1 file2"
sftp $USER@$HOST
rm $file_list
quit
SFTP-Session

I even tried executing the commands in prompt mode:

sftp $USER@$HOST
rm file1 file2

However, I still see that only file1 is getting deleted. I am not sure if I am missing any basic command. I tried mdelete / mdel / mrm, which were rejected as Invalid command in the sftp prompt window.
Here is one possible solution that can be added to a bash script. This is not ideal, as it will make a new connection for each file.

#!/bin/bash
# set variables
USER="username"
HOST="hostname"
file_list="file1 file2 file3 file4"

# delete each file
for file in $file_list; do
    echo "rm $file" | sftp $USER@$HOST
done
exit 0

This one-liner is far better! file1-9 being the file names to remove; use a variable if you like, it's the same thing.

for file in file1 file2 file3 file4 file5 file6 file7 file8 file9; do echo -e "rm $file" >> sftp_batch; done; sftp -b sftp_batch username@hostname; rm sftp_batch
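A variant (sketched here under the same passwordless-login assumption, not taken from the original answer) avoids the temporary file by feeding the batch commands to sftp on standard input; a batch file name of '-' means stdin, and prefixing a command with '-' lets the batch continue even if one file is already gone:

sftp -b - "$USER@$HOST" <<EOF
-rm file1
-rm file2
-rm file3
EOF

This still uses a single connection for all the deletions.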
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90257/" ] }
165,924
On our server a cronjob has logged a count of files in a shared directory. The log is of the form:

2003-07-03T16:05 279
2003-07-03T16:10 283
2003-07-03T16:15 282

By now this file has far over a million entries. I am interested in finding the biggest changes we ever had (negative and positive). I can write a program to find this, but is there some tool that can give me a list of deltas? The original is on Solaris, but I have a copy of the file on my Linux Mint system.
If you have the package num-utils installed, you can do: cut -d ' ' -f 2 inputfile | numinterval | sort -u The first and the last number there give the min, resp. max changes. If that list is too long and you also have moreutils installed you can do: cut -d ' ' -f 2 inputfile | numinterval | sort -u | pee "tail -1" "head -1" On Mint you should be able to install those packages, on Solaris you probably have to compile from source.
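If num-utils or moreutils cannot be installed (for instance on the Solaris box), a plain awk sketch along these lines computes the extreme deltas directly; the column layout and file name follow the example above:

awk 'NR > 1 {
        d = $2 - prev
        if (NR == 2 || d < min) min = d
        if (NR == 2 || d > max) max = d
     }
     { prev = $2 }
     END { print "biggest drop:", min; print "biggest rise:", max }' inputfile

It reads the file once, so it stays fast even with over a million entries.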
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90256/" ] }
165,937
I have a perplexing problem. I have a library which uses sg for executing customized CDBs. There are a couple of systems which routinely have issues with memory allocation in sg. Usually, the sg driver has a hard limit of around 4mb, but we're seeing it on these few systems with ~2.3mb requests. That is, the CDBs are preparing to allocate for a 2.3mb transfer. There shouldn't be any issue here: 2.3 < 4.0. Now, the profile of the machine. It is a 64 bit CPU but runs CentOS 6.0 32-bit (I didn't build them nor do I have anything to do with this decision). The kernel version for this CentOS distro is 2.6.32. They have 16gb of RAM. Here is what the memory usage looks like on the system (though, because this error occurs during automated testing, I have not verified yet if this reflects the state when this errno is returned from sg).

top - 00:54:46 up 5 days, 22:05,  1 user,  load average: 0.00, 0.01, 0.21
Tasks: 297 total,   1 running, 296 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  15888480k total,  9460408k used,  6428072k free,   258280k buffers
Swap:  4194296k total,        0k used,  4194296k free,  8497424k cached

I found this article from Linux Journal which is about allocating memory in the kernel. The article is dated but does seem to pertain to 2.6 (some comments about the author at the head). The article mentions that the kernel is limited to about 1gb of memory (though it's not entirely clear from the text whether that is 1gb each for physical and virtual, or in total). I'm wondering if this is an accurate statement for 2.6.32. Ultimately, I'm wondering if these systems are hitting this limit. Though this isn't really an answer to my problem, I'm wondering about the veracity of the claim for 2.6.32. So then, what is the actual limit of memory for the kernel? This may need to be a consideration for troubleshooting. Any other suggestions are welcome. What makes this so baffling is that these systems are identical to many others which do not show this same problem.
The 1 GiB limit for Linux kernel memory in a 32-bit system is a consequence of 32-bit addressing, and it's a pretty stiff limit. It's not impossible to change, but it's there for a very good reason; changing it has consequences.

Let's take the wayback machine to the early 1990s, when Linux was being created. Back in those days, we'd have arguments about whether Linux could be made to run in 2 MiB of RAM or if it really needed 4 whole MiB. Of course, the high-end snobs were all sneering at us, with their 16 MiB monster servers.

What does that amusing little vignette have to do with anything? In that world, it's easy to make decisions about how to divide up the 4 GiB address space you get from simple 32-bit addressing. Some OSes just split it in half, treating the top bit of the address as the "kernel flag": addresses 0 to 2^31 - 1 had the top bit cleared, and were for user space code, and addresses 2^31 through 2^32 - 1 had the top bit set, and were for the kernel. You could just look at the address and tell: 0x80000000 and up, it's kernel-space, otherwise it's user-space.

As PC memory sizes ballooned toward that 4 GiB memory limit, this simple 2/2 split started to become a problem. User space and kernel space both had good claims on lots of RAM, but since our purpose in having a computer is generally to run user programs, rather than to run kernels, OSes started playing around with the user/kernel divide. The 3/1 split is a common compromise.

As to your question about physical vs virtual, it actually doesn't matter. Technically speaking, it's a virtual memory limit, but that's just because Linux is a VM-based OS. Installing 32 GiB of physical RAM won't change anything, nor will it help to swapon a 32 GiB swap partition. No matter what you do, a 32-bit Linux kernel will never be able to address more than 4 GiB simultaneously.

(Yes, I know about PAE. Now that 64-bit OSes are finally taking over, I hope we can start forgetting that nasty hack. I don't believe it can help you in this case anyway.)

The bottom line is that if you're running into the 1 GiB kernel VM limit, you can rebuild the kernel with a 2/2 split, but that directly impacts user space programs. 64-bit really is the right answer.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165937", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50289/" ] }
165,944
I host a handful of domains on a single box (Linux, Ubuntu 13.x) and am attempting to use Postfix 2.10.2-1 to route mail to them. (I used sendmail for the past 10 years, so this is new to me.) All my MX records are set up accordingly, so external mail is reaching the box just fine. I followed the various online tutorials and have the following relevant lines in my configs: main.cf-------myhostname = foo.a.commyorigin = /etc/mailname # this just has 'a.com' inside itmydestination = a.com localhost.localdomain localhostvirtual_alias_domains = b.com c.orgvirtual_alias_maps = hash:/etc/postfix/virtual At the moment I just want all mail addressed to postmaster at any of the domains to go to the postmaster account, and all mail addressed to anyone else at any of the domains to go to the associated account. [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] userC Unfortunately, all mail addressed to anyone at any of these domains is being routed to userA . Until I enabled verbose logging, I thought that the virtual domains were not being respected. After enabling verbose logging, however, I see this happening for every incoming mail: mail.log--------...postfix/cleanup: maps_find: virtual_alias_maps: hash:/etc/postfix/virtual(0.lock|fold_fix): @b.com = userBpostfix/cleanup: mail_addr_find: [email protected] -> userBpostfix/cleanup: send attr request = rewritepostfix/cleanup: send attr rule = localpostfix/cleanup: send attr address = userB But then trivial-rewrite runs again (for about the 80th time processing this particular piece of mail) and decides that userB is not the true final Unix account recipient, but that it instead represents a mail address at myorigin (or perhaps it's picking the first element in mydestination ... that would make more sense). postfix/trivial-rewrite: master_notify: status 0postfix/trivial-rewrite: rewrite socket: wanted attribute: requestpostfix/trivial-rewrite: input attribute name: request...postfix/trivial-rewrite: 'local' 'userB' -> '[email protected]' # <-- d'oh!postfix/trivial-rewrite: send attr flags = 0postfix/trivial-rewrite: send attr address = [email protected]/trivial-rewrite: master_notify: status 1 Then postfix/cleanup takes over postfix/cleanup: maps_find: virtual_alias_maps: hash:/etc/postfix/virtual(0.lock|fold_fix): @a.com = userApostfix/cleanup: mail_addr_find: [email protected] -> userApostfix/cleanup: send attr request = rewritepostfix/cleanup: send attr rule = localpostfix/cleanup: send attr address = userA and concludes that userA is the true intended Unix account recipient. I know almost nothing about Postfix, but it seems from the logs that it only decides on the final recipient address after running it through rewrite twice and getting the same result twice in a row. How do I prevent it from interpreting the local account userB as a mail address userB@mydestination[0] ? I want it to stop as soon as it finds the local userB and just deliver it to him. Am I not allowed to list @mydestination[0] within /etc/postfix/virtual? While that could perhaps solve my current problem (because these users are real Unix users on the box, so the mail would get to the right place), Postfix would still wind up thinking that [email protected] is the final recipient. That seems wrong to me. It should just know that userB is a Unix account and stop there.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31958/" ] }
165,946
I thought it is possible via nslookup , but when I use nslookup stackoverflow.com it gives me different ip ( 198.252.206.16 ) than whois cf-dns02.stackoverflow.com (173.245.59.4)
The default record returned by nslookup is the A record, in this case 198.252.206.16. You should use nslookup with the querytype flag to ask for the SOA record:

# nslookup -querytype=soa stackexchange.com
...
Non-authoritative answer:
stackexchange.com
        origin = cf-dns01.stackexchange.com
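If dig is installed, an equivalent query (a general-purpose alternative, not part of the original answer) is:

dig +short SOA stackexchange.com

The first field of the SOA record it prints is the primary (master) name server for the zone.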
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/165946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90274/" ] }
165,961
I would like to tell if a string $string would be matched by a glob pattern $pattern. $string may or may not be the name of an existing file. How can I do this? Assume the following formats for my input strings:

string="/foo/bar"
pattern1="/foo/*"
pattern2="/foo/{bar,baz}"

I would like to find a bash idiom that determines if $string would be matched by $pattern1, $pattern2, or any other arbitrary glob pattern. Here is what I have tried so far:

[[ "$string" = $pattern ]]

This almost works, except that $pattern is interpreted as a string pattern and not as a glob pattern.

[ "$string" = $pattern ]

The problem with this approach is that $pattern is expanded and then string comparison is performed between $string and the expansion of $pattern.

[[ "$(find $pattern -print0 -maxdepth 0 2>/dev/null)" =~ "$string" ]]

This one works, but only if $string contains a file that exists.

[[ $string =~ $pattern ]]

This does not work because the =~ operator causes $pattern to be interpreted as an extended regular expression, not a glob or wildcard pattern.
There is no general solution for this problem. The reason is that, in bash, brace expansion (i.e., {pattern1,pattern2,...}) and filename expansion (a.k.a. glob patterns) are considered separate things and expanded under different conditions and at different times. Here is the full list of expansions that bash performs:

brace expansion
tilde expansion
parameter and variable expansion
command substitution
arithmetic expansion
word splitting
pathname expansion

Since we only care about a subset of these (perhaps brace, tilde, and pathname expansion), it's possible to use certain patterns and mechanisms to restrict expansion in a controllable fashion. For instance:

#!/bin/bash
set -f
string=/foo/bar
for pattern in /foo/{*,foo*,bar*,**,**/*}; do
    [[ $string == $pattern ]] && echo "$pattern matches $string"
done

Running this script generates the following output:

/foo/* matches /foo/bar
/foo/bar* matches /foo/bar
/foo/** matches /foo/bar

This works because set -f disables pathname expansion, so only brace expansion and tilde expansion occur in the statement for pattern in /foo/{*,foo*,bar*,**,**/*}. We can then use the test operation [[ $string == $pattern ]] to match against each resulting pattern after the brace expansion has already been performed.
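For the specific brace-style alternation in the question ({bar,baz}), one hedged workaround is to rewrite the pattern as an extended glob, which [[ ... == ... ]] does understand once extglob is enabled; this is only a sketch for that narrow case, not a general translator:

shopt -s extglob
string="/foo/bar"
pattern='/foo/@(bar|baz)'
if [[ $string == $pattern ]]; then
    echo "$pattern matches $string"
fi

Here @(bar|baz) matches exactly one of the listed alternatives, so it plays the same role as {bar,baz} without relying on brace expansion.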
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/165961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34505/" ] }
166,006
We are intermittently seeing kernel: martian source log entries for eth0 on a couple of our servers. The interesting thing is that they are to and from the same IP. For instance: Nov 4 02:20:27 tcffmppr6db09 kernel: martian source 10.153.242.13 from 10.153.242.13, on dev eth0.3171 This only happens on a couple servers. There are about 60 which have eth0 configured in the same manner (different IP, obviously). What should I be looking at to track this down? EDIT: The route for this particular interface is the default route so I don't think it is a matter of being sent out the wrong interface.
Problem I encountered the same problem today, where martian packets flooded my kernel logs. All the martian packets are from the same public IP address of eth0 to the same public IP address of eth0 (the real IPs and header is removed). IPv4: martian source x.x.x.x from x.x.x.x, on dev eth0ll header: 00000000: aa bb cc dd ee ff gg hh ii jj kk ll 08 00 After some research, I realized the reason is hidden in the ll header of the martian packets. Theory Assuming this in a Ethernet connection, ll header actually shows the beginning part of a Ethernet Type II Frame, which contains the destination MAC address, source MAC address, and a ID indicates the type of the rest part of the packet. As you see, the first 6 bytes are the destination MAC address, the next 6 bytes are the source MAC address, and a code in last 2 bytes. Common codes are: 08 00 : IP Packets 86 dd : IPv6 Packet 08 06 : ARP Packet Explanation Back to my example. IPv4: martian source x.x.x.x from x.x.x.x, on dev eth0ll header: 00000000: aa bb cc dd ee ff gg hh ii jj kk ll 08 00 This tells us, there was a packet received with the SAME source and destination IP address. It was sent by GG:HH:II:JJ:KK:LL , which is a MAC address I don't know. Its destination is AA:BB:CC:DD:EE:FF , which is my own MAC address. It was an IP packet ( 08 00 ). If a packet has the same source and destination IP addresses, it must be sent by the same network interface, but the MACs for source and destination are different! How can it be possible? Thus, it is clear that the packet comes from Mars, either there are some routing problems, a machine within the network is configured, or someone is trying to spoof the IP/MAC addresses. The next step is checking the source MAC address in question.
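To chase the packet further, the source MAC shown in the ll header can be looked up in the neighbour table, and martian logging can be confirmed; a sketch, with gg:hh:ii:jj:kk:ll standing in for the unknown MAC from the log:
$ ip neigh show | grep -i 'gg:hh:ii:jj:kk:ll'   # which IP the kernel currently associates with that MAC
$ arp -an | grep -i 'gg:hh'                     # older equivalent of ip neigh
$ sysctl net.ipv4.conf.all.log_martians         # 1 means martian packets are being logged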
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166006", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77650/" ] }
166,033
I want to remove lines which exist already in the previous line from the command line in UNIX. I have the following data in a file. <xref id="gi_525506931_ref_NP_001266519.1__brain_aromatase"/> <entry ac="IPR002401" desc="Cytochrome P450, E-class, group I" name="Cyt_P450_E_grp-I" type="FAMILY"> <entry ac="IPR001128" desc="Cytochrome P450" name="Cyt_P450" type="FAMILY"> <entry ac="IPR001128" desc="Cytochrome P450" name="Cyt_P450" type="FAMILY"> <entry ac="IPR001128" desc="Cytochrome P450" name="Cyt_P450" type="FAMILY"> So, it should look like this: <xref id="gi_525506931_ref_NP_001266519.1__brain_aromatase"/> <entry ac="IPR002401" desc="Cytochrome P450, E-class, group I" name="Cyt_P450_E_grp-I" type="FAMILY"> <entry ac="IPR001128" desc="Cytochrome P450" name="Cyt_P450" type="FAMILY">
If you can guarantee that the identical lines will be consecutive, you can simply use uniq your_file . If they might not be consecutive, an approach that remembers every line it has already seen will still work, as sketched below.
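A sketch of both cases, the second being the usual awk idiom that keeps the first occurrence of every line while preserving order:
$ uniq your_file                 # removes consecutive duplicates only
$ awk '!seen[$0]++' your_file    # removes duplicates anywhere in the file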
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77543/" ] }
166,057
I need to understand the Linux versioning system and distribution concepts. what are public, stable or final terms for versions?
Kernel versioning is independent of distro versioning, except to the extent that distros include patches of their own. This is indicated by tacking an extra identifier onto the version, e.g.: 3.16.6-203 Here 3.16.6 is the version of the vanilla (unmodified) kernel, and -203 is appended by the distro to indicate the relative version of their modifications to it. It's not necessary for you to understand that in any depth, just that the Linux kernel is an independent entity. Distros combine it with userland software and (in most cases) everything is precompiled to work together on a specific computer architecture (the most common one today being X86-64; some distros call this AMD64 -- they refer to exactly the same thing). Most of the fundamental userland software actually comes from GNU , an organization distinct from that of the Linux kernel, hence the proper generic name for the OS is usually considered to be "GNU/Linux" . Software is bundled together in repositories managed by the distribution. When you install software, it comes from a default repository (e.g. "stable"), but you may configure the system to use a different one (e.g. "testing") to access different versions of individual things by default. You might also be able to specify a version and if it is not in the default repository, the installer will check a list of others for it. Distributions come in two different sorts with regard to versioning themselves. While individual software is always being updated, most distros are also upgraded as a whole occasionally, e.g., from Ubuntu 12.x to Ubuntu 13.x. This enables them to make sweeping changes that might be too difficult to do otherwise. However, some distros instead use a rolling release model where there are not distinct releases, there is just one continuously updated version. To summarize, the concept of versioning might apply to: The Linux kernel , an independent piece of software. Distribution releases, which often have nicknames (e.g. Ubuntu 14.10: "Utopic Unicorn", or Fedora 20: "Heisenbug"). The version of individual packages in a distro, which are usually separated into distinct repositories. Like the kernel, these have independent sources that may be modified slightly by the distribution packagers. The "target architecture" a package is compiled for (e.g. X86-64). These may also be separated into repos. An example of a full package version string, which combines several of these things: foobar-4.11.3-4.fc20.x86_64 This is for a Fedora .rpm but other distros use similar schemes. Foobar is the package name, 4.11.3 is the version of the source used, -4 indicates the version of the patches applied by Fedora, f20 is the distro version for which this package was compiled, and x86_64 is the architecture for which it was compiled.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90073/" ] }
166,064
Is there a way to modify a file without writing the contents to another file, without sed and awk ? For example: $ cat test.txtaaaaaabbbcccddd Replacing using sed with -i option, sed -i 's/aaa/NNN/g' test.txt will produce the following: NNNNNNbbbcccddd How to do that without awk and sed ?
You can use a vi script:
$ vi test.txt -c '%s/aaa/NNN/ | wq'
$ cat test.txt
NNN
NNN
bbb
ccc
ddd
You're simply automating what would normally be entered when using vi in command mode (accessed using Esc : usually):
% - carry out the following command on every line:
s/aaa/NNN/ - substitute aaa with NNN
| - command delimiter
w - write changes to file
q - quit
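ed can make the same in-place edit non-interactively, which is sometimes preferred in scripts; a sketch:
$ printf '%s\n' ',s/aaa/NNN/g' w q | ed -s test.txt
# ,s/aaa/NNN/g - substitute on every line, w - write the file back, q - quit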
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77543/" ] }
166,095
Jenkins has stopped building and I can see from df:that /var is maxed. Following this I can see the culprit is /var/lib/jenkins 52K alternatives...4.0K games2.3G jenkins <--this one4.0K logrotate.status...12M yum2.4G total However, when I enter the jenkins dir to see which folder is consuming most of this 2.3GB, there are some strange results: $ sudo du -csh *16K config.xml0 Connection Activity monitoring to slaves.log0 Download metadata.log0 Fingerprint cleanup.log7.3M fingerprints4.0K hudson.maven.MavenModuleSet.xml4.0K hudson.model.UpdateCenter.xml4.0K hudson.scm.CVSSCM.xml4.0K hudson.scm.SubversionSCM.xml4.0K hudson.tasks.Ant.xml4.0K hudson.tasks.Mailer.xml4.0K hudson.tasks.Maven.xml4.0K hudson.tasks.Shell.xml4.0K hudson.triggers.SCMTrigger.xml4.0K identity.key.enc4.0K jenkins.diagnostics.ooom.OutOfOrderBuildMonitor4.0K jenkins.model.ArtifactManagerConfiguration.xml4.0K jenkins.model.DownloadSettings.xml4.0K jenkins.model.JenkinsLocationConfiguration.xml4.0K jenkins.mvn.GlobalMavenConfig.xml4.0K jenkins.security.QueueItemAuthenticatorConfiguration.xml160M jobs4.0K nodeMonitors.xml12K Out of order build detection.log41M plugins4.0K proxy.xml4.0K queue.xml.bak4.0K secret.key0 secret.key.not-so-secret32K secrets1.1M updates8.0K userContent12K users4.0K Workspace clean-up.log209M total From 2.3GB to 209M. Can anyone explain this so I can create some space? I've deleted the Jenkins workspaces from the web frontend but still see this result.
With this command, sudo du -csh * you are missing hidden directories, i.e. * expands to all names starting with anything but a dot ( . ). That means all directory names starting with a dot are not passed to the du command and their size is not taken into account. In most situations, adding .??* to the parameters would fit the needs : sudo du -csh .??* * The extra .??* is expanding to all names starting with a dot and having more than two characters. The goal is to exclude .. , i.e. the parent directory from the arguments. Hypothetical files and directories like .a would still be missed, and those starting with a dash ( - ) would be at best ignored and at worst trigger an error. If you have files or directories names starting with a dash or names starting with a dot followed by a single character, this enhanced version will properly report all directory sizes: sudo du -csh -- .[^.] .??* * Here the -- is telling du to take arguments starting with a dash as names, not options and .[^.] is adding to the processed names all file and directories starting with a dot followed by any single character but a dot. This encompass all possible names for files and directories. Several shells provide custom workarounds to these naming issues: bash : shopt -s dotglobsudo du -csh -- * zsh : sudo du -csh -- *(D) ksh93 : FIGNORE='@(.|..)'sudo du -csh -- *
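A quick cross-check that sidesteps the globbing issue entirely is to ask du for the directory itself, which includes hidden entries automatically; a sketch (GNU du and sort assumed):
$ sudo du -xsh .                      # total for the current directory, hidden files included
$ sudo du -xah . | sort -rh | head    # the ten largest entries, hidden ones included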
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90371/" ] }
166,116
When I run gdbserver on a device (like gdbserver :2345 myapp ) , gdbserver completely blocks the terminal. Neither adding an ampersand & , nor pressing ^z makes it running in background. I also checked: it is reproducible on Kubuntu too. I really need to use shell commands, and since I've no idea how to execute these via gdbserver, after it's running I feel myself crippled.
This seems to have worked for the OP:
gdbserver :2345 ls > /dev/null 2>&1 &
I think the reason is that when a program is daemonized it closes all of the standard file descriptors 0, 1 and 2. The next file descriptor to be opened then gets number 0. If the program tries to use 0, 1 or 2 with things like printf or scanf, it will be acting on the wrong descriptor or a closed one. For example, if it is daemonized, the socket is opened on 0 where stdin was, and if printf is called it will be writing to a non-open or wrong descriptor, which would cause the program to crash.
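Another commonly used way to detach such a process from the terminal is setsid or nohup with all three standard streams redirected; a sketch (the log file name is arbitrary):
$ setsid gdbserver :2345 myapp < /dev/null > gdbserver.log 2>&1 &
$ nohup gdbserver :2345 myapp < /dev/null > gdbserver.log 2>&1 &   # if setsid is not available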
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59928/" ] }
166,146
How do I construct a list like this: 1 2 3 4 56 7 8 9 1011 12 13 14 15 Where I run command 15 or something. Or if I specify 100 it would make it with 100 numbers or 10000 and it would make it like this but 10000 numbers. It should be five numbers on each line (as seen above).
You simply do:
seq 1 n | xargs -n 5 echo
with n being the number you want to reach. If your OS has bash but not seq, here is an alternative (thanks to @cuonglm and @jimmyj for their remarks):
echo {1..n} | xargs -n5
(You may have to be careful when reaching very high numbers with that one, depending on the OS and bash version, and on whether bash actually tries to expand the whole range first or is clever enough to feed it little by little without building the entire 1..n string in memory for echo.) And thanks to cuonglm and StephaneChazelas, I add an alternative that is much less CPU heavy than my first xargs solution (in which xargs calls /bin/echo, instead of being able to use the shell's builtin, every 5 numbers); it is probably similar to the 2nd one, where xargs doesn't invoke echo:
printf '%s %s %s %s %s\n' {1..n}
The 2nd and 3rd solutions differ from the 1st in that the shell first has to expand 1..n before printf (or xargs) can start printing, if I'm not mistaken, so output starts later (especially if n is big), and they could hit limits (line length or memory, depending on the implementation and the OS) if n is very big.
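paste is another lightweight way to get five numbers per line without spawning echo at all; a sketch:
$ seq 1 15 | paste -d' ' - - - - -
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15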
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
166,159
I have a .zip created on a Windows machine (outside of my control). The zip file contains paths that I need to preserve when I unzip. However, when I unzip, all files end up like: unzip_dir/\window\path\separator\myfile.ext I've tried both, with and without -j option.My issue is that I need that path information under \window\path\separator\ . I need that file structure to be created when I unzip. I can mv the file and flip the \ to / easily enough in a script, but then there are errors that the destination path directories do not exist. My workaround for now is to mkdir -p the paths (after converting \ to / ) and then cp the files to those paths. But there are a lot of files, and these redundant mkdir -p statements for every file really slows things down. Is there any more elegant way to convert a zip file with Windows paths to Linux paths?
I think something went wrong with the creation of the zip file, because when I create a zip file on Windows it has (portable) forward slashes:
zip.exe -r pip pip
updating: pip/ (244 bytes security) (stored 0%)
 adding: pip/pip.log (164 bytes security) (deflated 66%)
But now that you have the files with file names that contain "paths" with backslashes, you can run the following program in unzip_dir :
#! /usr/bin/env python
import os

# already created directories, walk works topdown, so a child dir
# never creates a directory if there is a parent dir with a file.
made_dirs = set()
for root, dir_names, file_names in os.walk('.'):
    for file_name in file_names:
        if '\\' not in file_name:
            continue
        alt_file_name = file_name.replace('\\', '/')
        if alt_file_name.startswith('/'):
            alt_file_name = alt_file_name[1:]  # cut off starting dir separator
        alt_dir_name, alt_base_name = alt_file_name.rsplit('/', 1)
        print 'alt_dir', alt_dir_name
        full_dir_name = os.path.join(root, alt_dir_name)
        if full_dir_name not in made_dirs:
            os.makedirs(full_dir_name)  # only create if not done yet
            made_dirs.add(full_dir_name)
        os.rename(os.path.join(root, file_name), os.path.join(root, alt_file_name))
This handles files in any directory under the directory from where the program is started. Given the problem that you describe, the unzip_dir probably doesn't have any subdirectories to start with, and the program could just walk over the files in the current directory only.
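For completeness, the same repair can be sketched directly in the shell, close to the workaround described in the question; unlike the Python version this only looks at the current directory, and it assumes bash:
for f in *\\*; do
    new=${f//\\//}                                 # turn every backslash into a forward slash
    new=${new#/}                                   # drop a leading slash, if any
    [[ $new == */* ]] && mkdir -p -- "${new%/*}"   # create the target directory
    mv -- "$f" "$new"
done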
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90408/" ] }
166,207
I ran the program pstree -p 31872 which printed the following output: ruby(31872)─┬─{ruby}(31906) └─{ruby}(32372) The man page for pstree says: Child threads of a process are found under the parent process and are shown with the process name in curly braces, e.g. icecast2---13*[{icecast2}] (The above is displayed differently because of the missing -p option, which disables compaction.) Running pstree 31872 without -p gives: ruby───2*[{ruby}] When I try to observe those PIDS using ps , no results are found. However, the pids, exist in /proc. My question is, why would threads have different pids? I would expect them to be the same (31872) as the process. The same behavior is observed when running htop.
The mistake was to presume those numbers were PIDS, when in fact they are TIDS (thread IDs). See Linux function gettid(2). Reading up on clone(2) gives a lot of extra (and interesting) details.
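The thread IDs can be listed directly once you know they are TIDs; a sketch:
$ ps -T -p 31872                 # SPID column holds the thread IDs
$ ps -eLf | awk '$2 == 31872'    # LWP column view, filtered on the PID
$ ls /proc/31872/task            # one directory per thread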
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14074/" ] }
166,220
What command could print pi for me? I want to specify how many digits it prints, I couldn't find anything online. I just want to be able to print pi.
You can use this command: echo "scale=5; 4*a(1)" | bc -l3.14159 Where scale is the number of digits after decimal point. Reference: http://www.tux-planet.fr/calculer-le-chiffre-pi-en-ligne-de-commande-sous-linux/
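Wrapped in a small shell function the digit count becomes a parameter; the name pi is arbitrary, and because bc truncates, the last digit or two can be slightly off, so asking for a couple of extra digits is common:
pi() { echo "scale=${1:-10}; 4*a(1)" | bc -l; }
pi 40     # roughly 40 digits after the decimal point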
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/166220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
166,303
I'm moving a large number of files (400K+) from one directory to another and I have the following script to do so (too many files for the mv command to work directly): for file in *;do mv $file ..done If I run this script twice (or more) at the same time, will there be a race condition when/if the mv commands are trying to access the same file? I've looked around on the web but haven't found any definite answer. Thanks!
There is indeed a race condition (one that wouldn't cause harm, though). The * is expanded on entry to the loop. If you run a second instance of this script simultaneously then it will probably do nothing because all files it tries to move have already been moved. If no files are created in the source directory during the moving operation then the error messages should be your biggest problem. But in general this structure is a very bad idea. * expands to a sorted list. AFAIK it is not possible to deactivate that. Obviously, the sorting alone is a nightmare with 400K files. See man bash , section "Pathname Expansion": After word splitting, unless the -f option has been set, bash scans each word for the characters *, ?, and [. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of file names matching the pattern. Furthermore you should not run one mv instance per file as you can move several files at once. This is a better solution (in the GNU world): find . -mindepth 1 -maxdepth 1 -exec mv --target-directory=DIRECTORY {} +
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90485/" ] }
166,341
I have a file called file1 I want in a script, whenever there is a change in it, do something, a beep sound actually. How do I do that?
If you have inotify-tools installed (at least that's the package name on Debian) then you can do something like this:
while inotifywait -q -e modify filename >/dev/null; do
    echo "filename is changed"
    # do whatever else you need to do
done
This waits for the "modify" event to happen to the file named "filename". When that happens the inotifywait command outputs filename MODIFY (which we discard by sending the output to /dev/null) and then terminates, which causes the body of the loop to be entered. Read the manpage for inotifywait for more possibilities.
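For the beep the question asks about, the loop body can simply ring the terminal bell (or call beep, if that utility is installed); a sketch:
while inotifywait -qq -e modify file1; do
    printf '\a'       # terminal bell; alternatively: beep
done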
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88768/" ] }
166,343
I want to get a list of packages which depend on requested package. For example, I want to get all packages, which depend on telnet . I came up with this script: for i in `rpm -qa | sort`; do rpm -qR $i | grep telnet > /dev/null; if [ $? -eq 0 ]; then echo $i; fi;done Is there a better way to go? Thanks.
The command you need is: rpm -q --whatrequires <packagename> Therefore: rpm -q --whatrequires telnet From the man page --whatrequires CAPABILITY Query all packages that require CAPABILITY for proper functioning.
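repoquery answers the same question for packages in the configured repositories, not only the installed ones; a sketch (the first form needs yum-utils, the second a dnf-based system):
$ repoquery --whatrequires telnet
$ dnf repoquery --whatrequires telnet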
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55407/" ] }
166,359
I need to retrieve the expiry date of an SSL cert. The curl application does provide this information: $ curl -v https://google.com/* Hostname was NOT found in DNS cache* Trying 212.179.180.121...* Connected to google.com (212.179.180.121) port 443 (#0)* successfully set certificate verify locations:* CAfile: none CApath: /etc/ssl/certs* SSLv3, TLS handshake, Client hello (1):* SSLv3, TLS handshake, Server hello (2):* SSLv3, TLS handshake, CERT (11):* SSLv3, TLS handshake, Server key exchange (12):* SSLv3, TLS handshake, Server finished (14):* SSLv3, TLS handshake, Client key exchange (16):* SSLv3, TLS change cipher, Client hello (1):* SSLv3, TLS handshake, Finished (20):* SSLv3, TLS change cipher, Client hello (1):* SSLv3, TLS handshake, Finished (20):* SSL connection using ECDHE-ECDSA-AES128-GCM-SHA256* Server certificate:* subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com* start date: 2014-10-22 13:04:07 GMT* expire date: 2015-01-20 00:00:00 GMT* subjectAltName: google.com matched* issuer: C=US; O=Google Inc; CN=Google Internet Authority G2* SSL certificate verify ok.> GET / HTTP/1.1> User-Agent: curl/7.35.0> Host: google.com> Accept: */*> < HTTP/1.1 302 Found< Cache-Control: private< Content-Type: text/html; charset=UTF-8< Location: https://www.google.co.il/?gfe_rd=cr&ei=HkxbVMzCM-WkiAbU6YCoCg< Content-Length: 262< Date: Thu, 06 Nov 2014 10:23:26 GMT* Server GFE/2.0 is not blacklisted< Server: GFE/2.0< <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8"><TITLE>302 Moved</TITLE></HEAD><BODY><H1>302 Moved</H1>The document has moved<A HREF="https://www.google.co.il/?gfe_rd=cr&amp;ei=HkxbVMzCM-WkiAbU6YCoCg">here</A>.</BODY></HTML>* Connection #0 to host google.com left intact However, when piping the output via grep the result is not less information on the screen, but rather much more : $ curl -v https://google.com/ | grep expire* Hostname was NOT found in DNS cache % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 212.179.180.84...* Connected to google.com (212.179.180.84) port 443 (#0)* successfully set certificate verify locations:* CAfile: none CApath: /etc/ssl/certs* SSLv3, TLS handshake, Client hello (1):} [data not shown]* SSLv3, TLS handshake, Server hello (2):{ [data not shown] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* SSLv3, TLS handshake, CERT (11):{ [data not shown]* SSLv3, TLS handshake, Server key exchange (12):{ [data not shown]* SSLv3, TLS handshake, Server finished (14):{ [data not shown]* SSLv3, TLS handshake, Client key exchange (16):} [data not shown]* SSLv3, TLS change cipher, Client hello (1):} [data not shown]* SSLv3, TLS handshake, Finished (20):} [data not shown]* SSLv3, TLS change cipher, Client hello (1):{ [data not shown]* SSLv3, TLS handshake, Finished (20):{ [data not shown]* SSL connection using ECDHE-ECDSA-AES128-GCM-SHA256* Server certificate:* subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com* start date: 2014-10-22 13:04:07 GMT* expire date: 2015-01-20 00:00:00 GMT* subjectAltName: google.com matched* issuer: C=US; O=Google Inc; CN=Google Internet Authority G2* SSL certificate verify ok.> GET / HTTP/1.1> User-Agent: curl/7.35.0> Host: google.com> Accept: */*> < HTTP/1.1 302 Found< Cache-Control: private< Content-Type: text/html; charset=UTF-8< Location: https://www.google.co.il/?gfe_rd=cr&ei=IkxbVMy4K4OBbKuDgKgF< Content-Length: 260< Date: Thu, 06 Nov 2014 10:23:30 GMT* 
Server GFE/2.0 is not blacklisted< Server: GFE/2.0< { [data not shown]100 260 100 260 0 0 714 0 --:--:-- --:--:-- --:--:-- 714* Connection #0 to host google.com left intact I suspect that curl detects that it is not printing to a terminal and is thus gives different output, not all of which is recognized by grep as being stdout and is thus passed through to the terminal. However, the closest thing to this that I could find in man curl (don't ever google for that!) is this: PROGRESS METER curl normally displays a progress meter during operations, indicating the amount of transferred data, transfer speeds and estimated time left, etc. curl displays this data to the terminal by default, so if you invoke curl to do an operation and it is about to write data to the terminal, it disables the progress meter as otherwise it would mess up the output mixing progress meter and response data. If you want a progress meter for HTTP POST or PUT requests, you need to redirect the response output to a file, using shell redirect (>), -o [file] or similar. It is not the same case for FTP upload as that operation does not spit out any response data to the terminal. If you prefer a progress "bar" instead of the regular meter, -# is your friend. How can I get just the expiry line out of the curl output? Furthermore, what should I be reading to understand the situation better? Seems like this would be a good use case for a "stdmeta" file descriptor .
curl writes the verbose diagnostic output to stderr, so redirect that and also suppress the progress meter:
curl -v --silent https://google.com/ 2>&1 | grep expire
The reason why curl writes this information to stderr is so you can do:
curl <url> | someprogram
without that information clobbering the input of someprogram
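If only the expiry date is needed, openssl can fetch it without grepping curl's verbose output at all; a sketch, which for the certificate shown above would print its notAfter date:
$ echo | openssl s_client -connect google.com:443 -servername google.com 2>/dev/null | openssl x509 -noout -enddate
notAfter=Jan 20 00:00:00 2015 GMT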
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/166359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
166,370
Let say I have a text file in this format field1afield2afield3afield1bfield2bfield3b I want to club 3 (or in general case N) consecutive lines, how will I do it with sed or other command line utility in bash shell? expected output field1a:field2a:field3afield1b:field2b:field3b
paste -sd '::\n' file Or: awk '{ORS=NR%3?":":"\n";print}' < file (note the difference if the number of records in the input is not a multiple of 3 though).
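When reading from standard input, the same grouping can also be written with one dash per field; a sketch for groups of three joined with colons:
$ paste -d: - - - < file
field1a:field2a:field3a
field1b:field2b:field3b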
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50943/" ] }
166,371
The script executes on a Mac machine and creates the output file, although on an Ubuntu machine it generates error message. Bash shell is used on both instances.: 1 - /var2 - /etc : 1: bad variable name: read: wordfirst_part(1).sh: 6: first_part(1).sh: Syntax error: newline unexpected (expecting ")") - echo "To scan through the directories /var and /etc type 1 or 2: "echo "1 - /var"echo "2 - /etc : "read wordcase $word in 1) find /var -type d -follow -ls | awk '{print $3, $5, $6, $11}' > var.txt echo "Your file containing /var information has been created." ;; 2) find /etc -type d -follow -ls | awk '{print $3, $5, $6, $11}' > etc.txt echo "Your file containing /etc information has been created." ;; *) echo "Please insert a valid input" continue ;;esac
If you execute the file by using sh filename.sh , then one problem is that, on your Ubuntu system, this might not execute bash but some other shell. On my Ubuntu 12.04 system sh is /bin/sh and is soft linked to /bin/dash (with a d ; see "Dash as /bin/sh" ). You should use bash filename.sh , or use a shebang line and make the file executable ( chmod +x filename.sh ):
#!/bin/bash
echo "To scan through the directories /var and /etc type 1 or 2: "
echo "1 - /var"
..
One thing to check when moving files from Mac to Ubuntu is the newlines of the file (use od -c file_name ); if there are '\r' characters in the output, but no \n , you have to convert, e.g. using: tr '\r' '\n' < file_name > new_file_name .
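A quick way to check for and strip stray carriage returns in place; the -i form assumes GNU sed, and dos2unix is an optional package:
$ head -1 filename.sh | od -c          # a trailing \r \n means DOS line endings
$ sed -i 's/\r$//' filename.sh         # strip them in place
$ dos2unix filename.sh                 # equivalent, if installed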
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62033/" ] }
166,382
I have a dedicated box running minidlna on Debian sourcing media files from a shared cifs drive. When I add a new file to the share not using the midia box, minidlna does not recognize the new files as an inotify event is not created. I found a workaround to make it recognize new files running touch from the media box from time to time but it does not work for folders as minidlna only identify IN_CREATE & IN_MOVED_TO events for folders and touch does not create these events for folders. So I'd like to know if there is any other way to create this "fake" events or if you know how can I have one of these events for a folder without having to move it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90527/" ] }
166,393
I simply forgot to use sudo : usr@arch ~[0] $ iptables -Liptables v1.4.21: can't initialize iptables table `filter': Permissiondenied (you must be root)Perhaps iptables or your kernel needs to be upgraded.usr@arch ~[3] $ <--- My bash PS1 prompt echoes the last command exit status ($?). The iptables manpages doesn't refer to a return code of 3: Various error messages are printed to standard error. The exit codeis 0 for correct functioning. Errors which appear to be caused byinvalid or abused command line parameters cause an exit code of 2, andother errors cause an exit code of 1. The SUSv3/POSIX discusses exit status for commands . 1 A command such as mount - which has 7 different exit statuses for error conditions - executed without privileges returns 1; something that's documented in its manpages: incorrect invocation or permissions . Q. So why does iptables and mount differ in that respect - is it purely application specific ? Why is it that doing an strace on the former outputs things such as: socket(PF_INET, SOCK_RAW, IPPROTO_RAW) = -1 EPERM (Operation not permitted) - shouldn't it be EACCES instead? Why is it that tracing unprivileged mount calls does not reveal similar errors and do these have an impact on the exit status; or is -1 a failure whatever the reason? Where does that 3 come from? 1. Also: GNU Bash ; more generally ; random weirdness ; recent question with reference to codes being application specific + historical /usr/include/sysexits.h etc.
The documentation is incomplete. The code contains the following list of error codes used internally: enum xtables_exittype { OTHER_PROBLEM = 1, PARAMETER_PROBLEM, VERSION_PROBLEM, RESOURCE_PROBLEM, XTF_ONLY_ONCE, XTF_NO_INVERT, XTF_BAD_VALUE, XTF_ONE_ACTION,}; And when it tries to initialize, it does: if (!*handle) xtables_error(VERSION_PROBLEM, "can't initialize iptables table `%s': %s", *table, iptc_strerror(errno)); xtables_error prints the error message and exits with the given exit code. The code seems to be deficient, IMHO, in assuming that a failure here is due to a version problem, without checking the errno to see that it's actually EPERM .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
166,398
I looked at the stackexchange site but couldn't find anything. I looked at the wikipedia entry on Linux container https://en.wikipedia.org/wiki/LXC and as well as hypervisor https://en.wikipedia.org/wiki/Hypervisor but the explanation to both is beyond a person who has not worked on either will understand. I also saw http://www.linux.com/news/enterprise/cloud-computing/785769-containers-vs-hypervisors-the-battle-has-just-begun but that also doesn't explain it. I have played with VM's such as virtualbox. One of the starting ideas to my limited understanding might have been for Virtual Machines were perhaps to test software in a sandbox environment (Having a Solaris box when you cannot buy/afford to have the machine and still have some idea how the software you are developing for that target hardware is working.) While being limited had it uses. This is probably one of the ways it made the jump in cloud computing as well. The questions are broad so this is how I distill it - Can some people explain what a hypervisor and a *nix container is (with analogies if possible.)? Is a *nix hypervisor the same as virtual machine or is there a difference?
A Virtual Machine (VM) is quite a generic term for many virtualisation technologies. There are a many variations on virtualisation technologies, but the main ones are: Hardware Level Virtualisation Operating System Level Virtualisation qemu-kvm and VMWare are examples of the first. They employ a hypervisor to manage the virtual environments in which a full operating system runs. For example, on a qemu-kvm system you can have one VM running FreeBSD, another running Windows, and another running Linux. The virtual machines created by these technologies behave like isolated individual computers to the guest. These have a virtual CPU, RAM, NIC, graphics etc which the guest believes are the genuine article. Because of this, many different operating systems can be installed on the VMs and they work "out of the box" with no modification needed. While this is very convenient, in that many OSes will install without much effort, it has a drawback in that the hypervisor has to simulate all the hardware, which can slow things down. An alternative is para-virtualised hardware, in which a new virtual device and driver is developed for the guest that is designed for performance in a virtual environment. qemu-kvm provide the virtio range of devices and drivers for this. A downside to this is that the guest OS must be supported; but if supported, the performance benefits are great. lxc is an example of Operating System Level Virtualisation, or containers. Under this system, there is only one kernel installed - the host kernel. Each container is simply an isolation of the userland processes. For example, a web server (for instance apache ) is installed in a container. As far as that web-server is concerned, the only installed server is itself. Another container may be running a FTP server. That FTP server isn't aware of the web-server installation - only it's own. Another container can contain the full userland installation of a Linux distro (as long as that distro is capable of running with the host system's kernel). However, there are no separate operating system installations when using containers - only isolated instances of userland services. Because of this, you cannot install different platforms in a container - no Windows on Linux. Containers are usually created by using a chroot . This creates a separate private root ( / ) for a process to work with. By creating many individual private roots, processes (web-servers, or a Linux distro, etc) run in their own isolated filesystem. More advanced techniques, such as cgroups can isolate other resources such as network and RAM. There are pros and cons to both and many long running debates as to which is best. Containers are lighter, in that a full OS isn't installed for each; which is the case for hypervisors. They can therefore run on lower spec'd hardware. However, they can only run Linux guests (on Linux hosts). Also, because they share the kernel, there is the possibility that a compromised container may affect another. Hypervisors are more secure and can run different OSes because a full OS is installed in each VM and guests are not aware of other VMs. However, this utilises more resources on the host, which has to be relatively powerful.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
166,434
I have a tab delimited file like this: chr1 53736473 54175786chr1 56861276 56876438chr1 57512145 57512200 I want to concatenate the three fields result like this: chr1:53736473-54175786chr1:56861276-56876438chr1:57512145-57512200 I tried with paste -d ':-' file , which apparently didn't work. Could anyone help? Ideally could be with simple unix command, I know it is rather easy with higher language.
With sed: $ sed 's/\(.*\)\t\(.*\)\t/\1:\2-/' filechr1:53736473-54175786chr1:56861276-56876438chr1:57512145-57512200 printf: printf "%s:%s-%s\n" $(< file)chr1:53736473-54175786chr1:56861276-56876438chr1:57512145-57512200
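awk is another natural fit, since the three tab-separated fields only need to be re-joined with different separators; a sketch:
$ awk -F'\t' '{print $1 ":" $2 "-" $3}' file
chr1:53736473-54175786
chr1:56861276-56876438
chr1:57512145-57512200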
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86155/" ] }
166,447
Just a quick question really. I noticed that on either partition, Linux or Windows, that I may suspend either system and then boot into the other. Since suspending an OS does not write data to disk, the contents of RAM are preserved. As such, would it not be true then that booting into Linux would overwrite the contents of RAM and thus the context of Windows (or just the other OS that was suspended) prior to suspending? I have been able to, for example: Boot into Windows Suspend Windows Boot into Linux Suspend/Shutdown Linux Resume Windows without error How is this possible?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72608/" ] }
166,473
I am running Debian 7 Wheezy and I need to start some screens on startup as soon as there is a fully functional internet connection. However, not, if the internet connection broke and was connected again. So only on the first functional internet connection after boot. Could you please post a dummy script for this and tell me where to put it and make it be executed under the given conditions? The script only needs to start the screen and then terminate but the screen should continue. EDIT I have already heard of the /etc/network/if-up.d/ folder. But how can I make sure that the script is not executed again if the internet connection is lost and then re-established?
Put your script in /etc/network/if-up.d and make it executable. It will be automatically run each time a network interface comes up. To make it do work only the first time it is run on every boot, have it check for existence of a flag file which you create after the first time. Example: #!/bin/shFLAGFILE=/var/run/work-was-already-donecase "$IFACE" in lo) # The loopback interface does not count. # only run when some other interface comes up exit 0 ;; *) ;;esacif [ -e $FLAGFILE ]; then exit 0else touch $FLAGFILEfi: here, do the real work.
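Installing it could look like the sketch below; start-screens is just an example name, and note that the scripts in if-up.d are run by run-parts, which by default skips file names containing dots, so something like myscript.sh would silently never run:
$ sudo install -m 755 start-screens /etc/network/if-up.d/start-screens
$ run-parts --test /etc/network/if-up.d    # lists what would actually be executed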
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/166473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81949/" ] }
166,477
This is what I'm running: alexandma@ALEXANDMA-1-MBP ./command.sh &[2] 30374alexandma@ALEXANDMA-1-MBP[2] + suspended (tty output) ./command.sh I don't want it to start suspended, I want it to keep running in the background. I'm going to be running a bunch of these in a loop, so I need something that will work that way. How can I keep it running?
It stops because of the reason given: it tries to output to the tty. You can try to redirect the output if ./command.sh supports that, or run the command in a tmux or screen window of its own. E.g.
tmux new-window -n "window name" ./command.sh
and then view the list of windows created with tmux list-windows and attach to tmux with tmux attach . That way the program will still wait for input/output to happen, but you can easily provide input once you go to the appropriate window and the output will just be captured without any activity.
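Since the goal is to start a bunch of these in a loop, GNU screen's detached mode is another option; a sketch with arbitrary session names:
for i in 1 2 3; do
    screen -dmS "job$i" ./command.sh    # start each one in its own detached session
done
screen -ls                              # list the running sessions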
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3558/" ] }
166,482
I have a script that outputs various information and the text fields of an html form (method POST). When I attempt to cat <&0, it displays properly. However a few lines down, I try to cat <&0 again and nothing is printed. What am I doing wrong? ....cat <&0echo Content length is $CONTENT_LENGTHcat <&0 | sed -e 's/&/\n/g' | cut -d'=' -f2....
You are doing nothing wrong: this is what we should expect. The first cat <&0 consumes the entire contents of standard input because that's what cat does: it reads all of its input until the end. When the second cat <&0 runs, there is nothing left to consume on standard input: the end of file was already reached previously. If, in a shell script, you need to make 2 or more passes through your standard input, you have to dump it into a temp file, then process the temporary file as many times as you want. Securely creating temporary files in /tmp and making sure they are correctly disposed of when your script terminates or dies is left as an exercise for you :-) By the way, the <&0 is unnecessary and does nothing. Its function is to point standard input to file descriptor 0... which is standard input... which is by definition where standard input already points! You can just make that command cat alone instead.
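A sketch of the temp-file approach mentioned above, including cleanup on exit:
#!/bin/bash
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT                    # dispose of the temporary file on exit
cat > "$tmp"                                # drain standard input exactly once
cat "$tmp"                                  # first pass
echo Content length is $CONTENT_LENGTH
sed -e 's/&/\n/g' "$tmp" | cut -d'=' -f2    # second pass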
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86370/" ] }
166,541
I am running the following command on my ubuntu server root@slot13:~# lxc-stop --name pavan --logfile=test1.txt --logpriority=trace It seems to hang indefinitely. Whenever this happened on AIX, I simply used to get the PID of the offending process and say $ procstack <pid_of_stuck_process> and it used to show the whole callstack of the process. Is there any equivalent of procstack in linux/ubuntu?
My first step would be to run strace on the process, best strace -s 99 -ffp 12345 if your process ID is 12345. This will show you all syscalls the program is doing. How to strace a process tells you more. If you insist on getting a stacktrace, google tells me the equivalent is pstack. But as I do not have it installed I use gdb: tweedleburg:~ # sleep 3600 & [2] 2621 tweedleburg:~ # gdb (gdb) attach 2621 (gdb) bt #0 0x00007feda374e6b0 in __nanosleep_nocancel () from /lib64/libc.so.6 #1 0x0000000000403ee7 in ?? () #2 0x0000000000403d70 in ?? () #3 0x000000000040185d in ?? () #4 0x00007feda36b8b05 in __libc_start_main () from /lib64/libc.so.6 #5 0x0000000000401969 in ?? () (gdb)
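gdb can also do this non-interactively, which is handy in scripts; a sketch, reusing the process ID 2621 from above (the /proc entry needs root and a kernel that exposes it):
$ gdb -p 2621 -batch -ex 'bt'                    # attach, print the backtrace, detach
$ gdb -p 2621 -batch -ex 'thread apply all bt'   # same, for every thread
$ sudo cat /proc/2621/stack                      # kernel-side stack if the process is stuck in a syscall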
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/166541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17829/" ] }
166,546
I intend to pipe the output of a program into a while read VAR loop and break when a pattern is found, but it doesn't. Proof of concept: inotifywait -qm -e create . | while read line; do echo $line; break; done./ CREATE newfile .. tail -f /var/log/syslog | while read line; do echo $line; break; doneNov 6 22:44:05 section9 ntpdate[2381]: adjust time server 91.189.89.199 offset 0.272779 sec These never exit no matter what the source program outputs. Setting set -x beforehand suggests that the loop never iterates to the second read . $BASH_SUBSHELL is 1 in these examples. Shouldn't tail , inotifywait , etc. receive SIGPIPE and exit? Note that process substitution ( while read ... break; done < <(tail -f ...) ) works OK. $BASH_SUBSHELL is 0 in this case.
The key is that, under bash (other may shells differ), a pipeline is not terminated until all commands in the pipeline are finished. To understand, let's consider: inotifywait -qm -e create . | while read line; do echo $line; break; done When read reads a line, it is echoed and then break is executed and the last process terminates. The first process, however, continues until a write to stdout fails. Thus, the loop will continue at least until inotifywait tries to write its second line of output. Because of the vagaries of buffering, it may not happen even then. It may take several lines before the discovery occurs. When that attempted write fails, then SIGPIPE is issued. Now, consider the other case: while read line; do echo $line; break; done < <(inotifywait -qm -e create .) Here, there is no pipeline. When break is executed, the while loop is done. Documentation From the "Pipelines" section of man bash : The shell waits for all commands in the pipeline to terminate.... This behavior is a choice made by bash . It is not mandated by POSIX. POSIX states: If the pipeline is not in the background (see Asynchronous Lists), the shell shall wait for the last command specified in the pipeline to complete, and may also wait for all commands to complete.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67931/" ] }
166,558
When I check free in one of Prod server it showing 70% of memory is being used: total used free shared buffers cachedMem: 164923172 141171860 23751312 0 4555616 20648048-/+ buffers/cache: 115968196 48954976Swap: 8388600 0 8388600 But I didn’t find what process is using the memory, I tried the top command and it is showing process using memory only 1.1 and 5.4 % How can I find which process is using the memory? Below are the top command results: 15085 couchbas 25 0 2784m 2.4g 40m S 183.7 1.5 299597:00 beam.smp28248 tibco 18 0 124m 100m 3440 S 20.9 0.1 2721:45 tibemsd15334 couchbas 15 0 9114m 8.6g 3288 S 9.0 5.4 12996:28 memcached15335 couchbas 18 0 6024 600 468 S 2.0 0.0 1704:54 sigar_port15319 couchbas 15 0 775m 2516 944 S 0.7 0.0 269:13.41 i386-linux-godu12167 tibco 16 0 11284 1464 784 R 0.3 0.0 0:00.04 top12701 root 15 0 451m 427m 2140 S 0.3 0.3 18:25.02 controller13163 root 11 -5 0 0 0 S 0.3 0.0 289:58.58 vxglm_thread
This will show you the top 10 processes that are using the most memory:
ps aux --sort=-%mem | head
Using top : when you open top , pressing Shift + m (i.e. M ) will sort processes based on memory usage. But this alone may not solve your problem: in Linux everything is either a file or a process, so the files you have opened will be eating memory too. lsof will give you all opened files with the size of the file or the file offset in bytes.
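ps can also be told exactly which columns to print and how to sort them, which keeps the output compact; a sketch:
$ ps -eo pid,comm,rss,%mem --sort=-rss | head
$ watch -n 5 'ps -eo pid,comm,rss,%mem --sort=-rss | head'   # refresh every 5 seconds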
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/166558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90639/" ] }
166,562
How do I get just the link part in the http-source of a link? I have <a href="http://unix.stackexchange.com/users/20661/">Unix &amp; Linux and would like to get just http://unix.stackexchange.com/users/20661/ I tried sed 's/^.*(http.*)".*$/\1/g' but that gives an error: sed: -e expression #1, char 22: invalid reference \1 on `s' command's RHS
Try this: sed -r 's/.*(http[^"]*)".*/\1/g' On Mac OSX, try: sed -E 's/.*(http[^"]*)".*/\1/g' Notes There are several items to note about this sed command: sed 's/^.*(http.*)".*$/\1/g' The ^ is unnecessary. sed's regular expressions are always greedy . That means that, if a regex that begins with .* matches at all, it will always match from the beginning of the line. To make ( into a grouping character, it can either be escaped or extended regex can be turned on with the -r flag ( -E on OSX). This flag often greatly reduces the number of escapes that you will need. Also, because regex are greedy, (http.*)" will match to the last double quote on the line, not the first. The URL will, however, end with the first double-quote. Instead, use (http[^"]*)" and the match will never extend beyond the first " . The dollar sign in .*$ is also superfluous. Again, because regex are greedy, if a regular expression that ends with .* matches, it will match to the end of the line.
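grep -o is another common way to pull out only the matching part, which sidesteps the greediness pitfalls; a sketch against the line from the question:
$ grep -oE 'href="[^"]*"' file | sed 's/^href="//; s/"$//'
http://unix.stackexchange.com/users/20661/
$ grep -o 'http[^"]*' file    # even shorter when the attribute always holds a URL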
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166562", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
166,573
I've recently made 2 script that take care of mounting and dismounting a harddrive in linux. The thing that I could not figure out is the following. I am currently mounting /dev/sdc1 to /home/media/externalHardDrive . The thing is, when I insert another usb device (like usb stick) while the harddrive is not being inserted it will most likely put the usb-stick on /dev/sdc1 . I would like to learn a way to identify the device before mounting it, so I can make sure that only the harddrive is affected by this script. These are my scripts: unmount_script.sh: #!/bin/bashMOUNT="/home/media/externalHardDrive"if grep -qs "$MOUNT" /proc/mounts; then umount "$MOUNT" if [ $? -eq 0 ]; then echo "HardDrive kan veilig worden verwijderd :D" else echo "Er is iets mis gegaan, blijf overal vanaf :(" fielse echo "Er is geen HardDrive gemount op $MOUNT, deze kan daarom niet verwijderd worden!"fi mount_script.sh #!/bin/bashMOUNT="/home/media/externalHardDrive"if grep -qs "$MOUNT" /proc/mounts; then echo "HardDrive is al gemount op $MOUNT ;)"else mount /dev/sdc1 "$MOUNT" if [ $? -eq 0 ]; then echo "HardDrive is succesvol gemount :D" fifi Can somebody point me in the right direction?I am running these scripts on a debian server.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90571/" ] }
166,579
I am helping a friend to recover data from damage external hard drive . (USB 2, 120GB, WD, single partition, FAT32) Problem: When plug in this HDD into a windows PC, the HDD can be detected, but the drive did not show up.Checked using 'Disk Management', Found 'Disk 1' and shows that Disk 1 is not initialized. 1st Attempt: Tried the 'Freezer trick' twice. First time: manage to view the drive for 5 seconds, tried to copy all files, it stuck there after press Ctrl + C . Second time: No luck, back to Not Initialized condition. 2nd Attempt: After browsing through few article, I decided to try Ubuntu for the first time. I am running Ubuntu on a DVD disk.After boot up the system, the external drive is not mounted.Through some trial and error using Terminal , I manage to find out that: /dev/sda is my laptop HDD (750GB, with multiple drive) /dev/sdb is the external HDD (120GB, Damaged drive) At first, I try to use Testdisk 6.14 to recover data, but due to the external HDD is not mounted, Testdisk unable to detect it. So I tried to mount it using Command in Terminal: sudo mount /dev/sdb /media/ubuntu -t ext2 Result: mount: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Then I tried dmesg | tail Message (*) shows [33935.683953] sd 6:0:0:0: [sdb] [33935.683954] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE[33935.683955] sd 6:0:0:0: [sdb] [33935.683956] Sense Key : Medium Error [current] [33935.683958] sd 6:0:0:0: [sdb] [33935.683959] Add. Sense: Unrecovered read error[33935.683960] sd 6:0:0:0: [sdb] CDB: [33935.683961] Read(10): 28 00 00 00 00 02 00 00 02 00[33935.683965] end_request: critical medium error, dev sdb, sector 2[33935.683991] EXT4-fs (sdb): unable to read superblock Question: What is the meaning of Message (*)? Is the data in this HDD still recoverable? What should I do next?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90647/" ] }
166,584
I want to remove all files in a directory while leaving just some specified files, they don't have anything in common by name. How could I achieve that? For example, the file names I want to keep are: file_1.png , another_file.jpg , some_music.mp3
If you are using bash: shopt -s extglobrm -- !(file1|file2|file3) The first line just activates extended pattern matching, and after that we use one of them: !(pattern-list) matches anything except one of the given patterns and the pattern-list is a list of one or more patterns separated by a | . Or with zsh setopt extendedglobrm -- ^(file1|file2) Or, more portable, using find : find . -maxdepth 1 ! -name 'file1' ! -name 'file2' -type f -exec rm -v {} +
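With the file names from the question, a dry run with echo first is a cheap safety net before deleting anything (bash with extglob assumed):
shopt -s extglob
echo rm -- !(file_1.png|another_file.jpg|some_music.mp3)   # shows what would be removed
rm -- !(file_1.png|another_file.jpg|some_music.mp3)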
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
166,609
I am using zsh and I have defined few utility shell function in some shell scripts, few of them called from ~/.zshrc , so let's assume that we don't know the location of these functions. One function is: function k.pstree.n { if [ "$1" != "" ] then pstree -p | grep -C3 "$1" else printf " Please specify the name of the process you want to show!\n" fi} How can I print the code of that shell function? I can think of a search & grep like: find $(pwd) -name "*sh*" -type f -printf "\"%p\"\n" | xargs grep -C5 "k.pstree.n" but this assumes that I roughly know the location which is not true here.
There is a built-in command functions in zsh for this purpose:
functions k.pstree.n
For example, in the case of my preexec function:
$ functions preexec
preexec () {
    local cmd=${1:-}
    cmd=${cmd//\\/\\\\}
    [[ "$TERM" =~ screen* ]] && cmd="S $cmd"
    inf=$(print -Pn "%n@%m: %3~")
    print -n "\e]2;$cmd $inf\a"
    cmd_start=$SECONDS
}
Or use typeset -fp function_name which has the benefit of also working in ksh , bash and yash . In zsh , the function definition is also available in the $functions special associative array (the key is the function name, the value the function body).
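Two other zsh spellings of the same thing, for quick reference:
$ whence -f k.pstree.n     # prints the function body, like functions does
$ which k.pstree.n         # which is a builtin in zsh and also shows function definitions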
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166609", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25848/" ] }
166,675
I'm processing a log file and I need reorder each line (not sort). The log file looks like this: 11-06-2014 - 10:49:06PM lat = 41.858657; lon = -91.34514211-06-2014 - 10:49:49PM lat = 42.864653; lon = -92.34991411-06-2014 - 10:50:35PM lat = 43.874808; lon = -93.35036411-06-2014 - 10:51:21PM lat = 44.885047; lon = -94.35005811-06-2014 - 10:52:08PM lat = 45.895078; lon = -95.34992011-06-2014 - 10:53:30PM lat = 46.905178; lon = -96.34983711-06-2014 - 10:54:50PM lat = 47.910303; lon = -97.350606 and I want to move the date/time to the end of each line: lat = 41.858657; lon = -91.345142 11-06-2014 - 10:49:06PMlat = 42.864653; lon = -92.349914 11-06-2014 - 10:49:49PMlat = 43.874808; lon = -93.350364 11-06-2014 - 10:50:35PMlat = 44.885047; lon = -94.350058 11-06-2014 - 10:51:21PMlat = 45.895078; lon = -95.349920 11-06-2014 - 10:52:08PMlat = 46.905178; lon = -96.349837 11-06-2014 - 10:53:30PMlat = 47.910303; lon = -97.350606 11-06-2014 - 10:54:50PM sed ? awk ? how? Bonus question: my end goal is to turn this into gpx/xml, and it's probably just as easy to add the intermediate text as each line is processed, so that the out looks like this: <wpt lat="41.858657" lon="-91.345142"> <time>11-06-2014 - 10:49:06PM</time></wpt><wpt lat"="42.864653" lon="-92.349914"> <time>11-06-2014 - 10:49:49PM</time></wpt><wpt lat"="43.874808" lon="-93.350364"> <time>11-06-2014 - 10:50:35PM</time></wpt><wpt lat"="44.885047" lon="-94.350058"> <time>11-06-2014 - 10:51:21PM</time></wpt><wpt lat"="45.895078" lon="-95.349920"> <time>11-06-2014 - 10:52:08PM</time></wpt><wpt lat"="46.905178" lon="-96.349837"> <time>11-06-2014 - 10:53:30PM</time></wpt><wpt lat"="47.910303" lon="-97.350606"> <time>11-06-2014 - 10:54:50PM</time></wpt>
using awk : awk '{print $4,$5,$6,$7,$8,$9,$1,$2,$3}' log_file you can do it directly from you log file like this: awk '{printf("<wpt %s%s\"%s\" %s%s\"%s\">\n<time>%s %s %s</time>\n</wpt>\n",$4,$5,substr($6,0,length($6)),$7,$8,$9,$1,$2,$3)}' log_file output: <wpt lat="41.858657" lon="-91.345142"><time>11-06-2014 - 10:49:06PM</time></wpt><wpt lat="42.864653" lon="-92.349914"><time>11-06-2014 - 10:49:49PM</time></wpt><wpt lat="43.874808" lon="-93.350364"><time>11-06-2014 - 10:50:35PM</time></wpt><wpt lat="44.885047" lon="-94.350058"><time>11-06-2014 - 10:51:21PM</time></wpt><wpt lat="45.895078" lon="-95.349920"><time>11-06-2014 - 10:52:08PM</time></wpt><wpt lat="46.905178" lon="-96.349837"><time>11-06-2014 - 10:53:30PM</time></wpt><wpt lat="47.910303" lon="-97.350606"><time>11-06-2014 - 10:54:50PM</time></wpt>
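A sed alternative for the plain reordering, assuming the timestamp is always the first three space-separated fields (date, dash, time) as in the sample:

    sed -E 's/^([^ ]+ - [^ ]+) (.*)/\2 \1/' log_file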
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2998/" ] }
166,686
I understand that "Everything is a file" is one of the major concepts of Unix, but sockets use different APIs that are provided by the kernel (like socket, sendto, recv, etc.), not like normal file system interfaces. How does this "Everything is a file" apply here?
sockets use different APIs That's not entirely true. There are some additional functions for use with sockets, but you can use, e.g., normal read() and write() on a socket fd. how does this "Everything is a file" apply here? In the sense that a file descriptor is involved. If your definition of "file" is a discrete sequence of bytes stored in a filesystem, then not everything is a file. However, if your definition of file is more handle-like -- a conduit for information, i.e., an I/O connection -- then "everything is a file" starts to make more sense. These things inevitably do involve sequences of bytes, but where they come from or go to may differ contextually. It's not really intended literally, however. A daemon is not a file, a daemon is a process; but if you are doing IPC your method of relating to another process might well be mediated by file-style entities.
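You can even see this from an ordinary shell: bash can open a TCP connection as a plain file descriptor (a sketch, assuming bash was built with /dev/tcp support and the host is reachable):

    exec 3<>/dev/tcp/example.com/80                             # fd 3 is now a socket
    printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3   # ordinary redirection writes to it
    cat <&3                                                     # ordinary read from the same fd
    exec 3>&-                                                   # closed like any other fd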
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/166686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85790/" ] }
166,689
I'm looking for a UNIX system I can connect to to explore UNIX without installing it. Does anyone have good recommendations of what systems are available and how to connect?
Super Dimensional Fortress Public Access UNIX System The SDF.org Public Access UNIX system is a great resource, and an ethical social network. Users can chat to each other in communications mode . I will briefly walk you through how to connect with PuTTY. First, download the PuTTY Client : When you open the client, you will see a window that looks like this: Fill in the information. When you connect, type "new" as the user, and you will be connected to the mkacct server. Once you are connected, you can hang around in "com" mode until someone validates your account. Then you will have full shell access. NOTE: Com mode is accessible by typing com and hitting Enter after you are connected.
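If you are already on a Unix-like machine you don't need PuTTY at all; plain OpenSSH does the same thing (assuming the host name sdf.org):

    ssh [email protected]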
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90719/" ] }
166,704
Let's say I have 2 volumes of 2 GB each (kept as a single group), and I copy a 3 GB file there. Will it be split? I just hope the answer is "no", and that files are kept atomic. The reason is that with a guaranteed "no split", it is possible to mount a single volume and copy its files to another one.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5884/" ] }
166,707
Are there some conventions in naming the installation packages of a program? That would help to decide which package to download and install on a computer system. For example, from http://www.gtlib.gatech.edu/pub/apache/hadoop/common/hadoop-2.4.1/ [ ] hadoop-2.4.1-src.tar.gz 03-Nov-2014 11:54 15M [ ] hadoop-2.4.1.tar.gz 03-Nov-2014 11:54 132M hadoop-2.4.1-src.tar.gz seems to indicate it is for installation by compiling source code. Then what kind of installation is the one from hadoop-2.4.1.tar.gz ? It is much larger, and I wonder if it is a cross-platform binary installation? Can I install it on Ubuntu 12.04?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
166,755
Trying to run an executable file on terminal (I am using Tails live OS), but I keep receiving an error message. I have set permissions already. The command I wrote: sudo ./home/amnesia/myfile I receive "Command not found"? I tried running it with or without sudo: $ sudo /home/amnesia/myfilesudo: unable to execute /home/amnesia/myfile: No such file or directory$ /home/amnesia/myfilebash: /home/amnesia/myfile: No such file or directory Information about the file (it's a binary, not a script): $ ls -l /home/amnesia/myfile-rwxrwxrwx 1 amnesia amnesia 15327 Sep 3 2013 /home/amnesia/myfile$ file /home/amnesia/myfileELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared lies), for GNU/Linux 2.6.9, not stripped Information about my system: $ uname -aLinux amnesia 3.16-3-amd64 #1 SMP Debian 3.16.5-1 (2014-10-10) x86_64 GNU/Linux$ file /bin/ls/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0xd3280633faaabf56a14a26693d2f810a32222e51, stripped
$ /home/amnesia/myfilebash: /home/amnesia/myfile: No such file or directory$ file /home/amnesia/myfileELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared lies), for GNU/Linux 2.6.9, not stripped So myfile exists, but running it gives the message “No such file or directory”. This happens in the following circumstance: The file depends on a loader — it's a dynamically linked executable, and these need a loader program to load the dynamically linked libraries. (The loader can also be the interpreter designated by a shebang line, but bash detects this case and gives a different error message.) The loader file is not present. The message “No such file or directory” is really about the loader, but the shell doesn't know that the loader is involved, so it reports the name of the original file. I explain this in more detail in “No such file or directory” lies on Optware installed binaries . Why can't you run this program? Because you don't have the dynamic loader for 64-bit executables. $ uname -aLinux amnesia 3.16-3-amd64 #1 SMP Debian 3.16.5-1 (2014-10-10) x86_64 GNU/Linux$ file /bin/ls/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[sha1]=0xd3280633faaabf56a14a26693d2f810a32222e51, stripped Your system has a 64-bit kernel, but the rest of the system is 32-bit. Linux supports this configuration (a 64-bit kernel can run both 64-bit programs and 32-bit programs, but a 32-bit kernel can only run 32-bit programs). The kernel can load the program just fine; you would be able to run a statically-linked amd64 executable. However, you don't have the 64-bit loader ( /lib64/ld-linux-x86-64.so.2 ), nor presumably any 64-bit library. So you can't run dynamically-linked amd64 executables. Why would you run a 64-bit kernel with a 32-bit userland? To use more than about 3GB of physical memory. (This isn't the only way — another possibility is to run a 32-bit kernel that supports PAE.) To be able to run 64-bit binaries, e.g. by booting on the live OS and then chrooting into an installed 64-bit system somewhere. To reduce maintenance effort for the distribution: provide a single kernel for recent hardware, and make it 64-bit. To run 64-bit virtual machines (some VM engines require a 64-bit kernel to run a 64-bit VM). I don't think Tails provides a 64-bit system. You should get a 32-bit version of the executable. If you can't, use some other distribution (possibly in a virtual machine).
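You can confirm which loader a dynamically linked binary expects, and whether that loader actually exists, with standard tools:

    readelf -l /home/amnesia/myfile | grep 'program interpreter'
    ls -l /lib64/ld-linux-x86-64.so.2

If the second command reports "No such file or directory", that missing loader is what the confusing error message is really about.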
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90745/" ] }
166,757
As you may know, flashing the BeagleBone Black eMMC with a new Linux Image from the SD card takes quite a long time - up to 45 minutes. Is there a reason why, and is there a way to monitor the progress to make sure it is not stalled out? Writing the image to the SD card took less than 5 minutes on my PC, and my understanding was that eMMC memory is several times faster than SD memory. Why does it take so long then?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60734/" ] }
166,758
Indeed, I would like to know the exact format of the '-I' parameter (string, variant, etc.) for a script like this: seq 15 | xargs -I num bash -c "echo num" will work. The 'num' here I regard as a parameter for the execution of the script in bash -c "", but I'm not sure what form num takes when it is introduced into bash. Trying seq 15 | xargs -I num bash -c "name=num; echo name" treated it as a plain string and failed. Trying seq 15 | xargs -I num bash -c "name=num; echo $name" also didn't work. I just want to try multithreading with --max-procs to limit the number of processes, but I'm not quite sure about this problem; I guess maybe it's something about the '=' assignment. How can I get this to work as I want?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90762/" ] }
166,798
I have two files with two different values. I want to run a command in a loop which needs input from both files. Let me give an example to make it simple. File1 contents: google yahoo File2 contents: mail messenger I need output like the below: google is good in mail yahoo is good in messenger How can I use a for/while loop to achieve this? I need a script where $File1 is replaced by the corresponding entry from File1 and $File2 by the corresponding entry from File2: /usr/local/psa/bin/domain --create domain $File1 -mail_service true -service-plan 'Default Domain' -ip 1.2.3.4 -login $File2 -passwd "abcghth"
The standard procedure (in Bash) is to read from different file descriptors with the -u switch of read : while IFS= read -r -u3 l1 && IFS= read -r -u4 l2; do printf '%s is good in %s\n' "$l1" "$l2"done 3<file1 4<file2
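Applied to the command from the question, the same pattern looks like this (a sketch; the options are copied verbatim from the question):

    while IFS= read -r -u3 domain && IFS= read -r -u4 login; do
        /usr/local/psa/bin/domain --create domain "$domain" -mail_service true \
            -service-plan 'Default Domain' -ip 1.2.3.4 -login "$login" -passwd "abcghth"
    done 3<File1 4<File2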
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85155/" ] }
166,804
I'm using Nix to install packages under my home (so no binary packages) on a shared host with limited resources. I'm trying to install git-annex. When building one of its dependencies, haskell-lens, the unit tests consume so much memory that they get killed and the installation fails. Is there a way to skip the unit tests to get the package installed? I looked at the Cabal builder and haskell-packages.nix and it seems to me that you could disable the tests by setting enableCheckPhase to false. I tried the following in ~/.nixpkgs/config.nix , but the tests are still run: { packageOverrides = pkgs: with pkgs; { # ...other customizations... haskellPackages = haskellPackages.override { extension = self : super : { self.lens = self.disableTest self.lens; }; }; };}
nixpkgs reorganized things since the accepted answer was posted and there is a new function for disabling tests. You now wrap any Haskell package with the pkgs.haskell.lib.dontCheck function to disable tests. Here is an example Nix expression from one of my Haskell projects where I had to disable tests for the shared-memory dependency when building on OS X: { pkgs ? import <nixpkgs> {}, compiler ? "ghc7103" }:pkgs.haskell.packages.${compiler}.callPackage ./my-project.nix { shared-memory = let shared-memory = pkgs.haskell.packages.${compiler}.shared-memory; in if pkgs.stdenv.isDarwin then pkgs.haskell.lib.dontCheck shared-memory else shared-memory; }
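Translated back to the ~/.nixpkgs/config.nix override from the question, a sketch (it assumes a nixpkgs new enough to have the reorganized Haskell infrastructure and its overrides mechanism):

    { packageOverrides = pkgs: {
        haskellPackages = pkgs.haskellPackages.override {
          overrides = self: super: {
            lens = pkgs.haskell.lib.dontCheck super.lens;
          };
        };
      };
    }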
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5908/" ] }
166,817
I have a process that needs root privileges when run by a normal user. Apparently I can use the "setuid bit" to accomplish this. What is the proper way of doing this on a POSIX system? Also, how can I do this with a script that uses an interpreter (bash, perl, python, php, etc)?
The setuid bit can be set on an executable file so that when run, the program will have the privileges of the owner of the file instead of the real user, if they are different. This is the difference between effective uid (user id) and real uid. Some common utilities, such as passwd , are owned root and configured this way out of necessity ( passwd needs to access /etc/shadow which can only be read by root). The best strategy when doing this is to do whatever you need to do as superuser right away then lower privileges so that bugs or misuse are less likely to happen while running root. To do this, you set the process's effective uid to its real uid. In POSIX C: #define _POSIX_C_SOURCE 200112L // Needed with glibc (e.g., linux).#include <stdio.h>#include <sys/types.h>#include <unistd.h>void report (uid_t real) { printf ( "Real UID: %d Effective UID: %d\n", real, geteuid() );}int main (void) { uid_t real = getuid(); report(real); seteuid(real); report(real); return 0;} The relevant functions, which should have an equivalent in most higher level languages if they are used commonly on POSIX systems: getuid() : Get the real uid. geteuid() : Get the effective uid. seteuid() : Set the effective uid. You can't do anything with the last one inappropriate to the real uid except in so far as the setuid bit was set on the executable . So to try this, compile gcc test.c -o testuid . You then need to, with privileges: chown root testuidchmod u+s testuid The last one sets the setuid bit. If you now run ./testuid as a normal user you'll see the process by default runs with effective uid 0, root. What about scripts? This varies from platform to platform , but on Linux, things that require an interpreter, including bytecode, can't make use of the setuid bit unless it is set on the interpreter (which would be very very stupid). Here's a simple perl script that mimics the C code above: #!/usr/bin/perluse strict;use warnings FATAL => qw(all);print "Real UID: $< Effective UID: $>\n";$> = $<; # Not an ASCII art greedy face, but genuine perl...print "Real UID: $< Effective UID: $>\n"; True to it's *nixy roots, perl has build in special variables for effective uid ( $> ) and real uid ( $< ). But if you try the same chown and chmod used with the compiled (from C, previous example) executable, it won't make any difference. The script can't get privileges. The answer to this is to use a setuid binary to execute the script: #include <stdio.h>#include <unistd.h> int main (int argc, char *argv[]) { if (argc < 2) { puts("Path to perl script required."); return 1; } const char *perl = "perl"; argv[0] = (char*)perl; return execv("/usr/bin/perl", argv);} Compile this gcc --std=c99 whatever.c -o perlsuid , then chown root perlsuid && chmod u+s perlsuid . You can now execute any perl script with with an effective uid of 0, regardless of who owns it. A similar strategy will work with php, python, etc. But... # Think hard, very important:>_< # Genuine ASCII art "Oh tish!" face PLEASE PLEASE DO NOT leave this kind of thing lying around . Most likely, you actually want to compile in the name of the script as an absolute path , i.e., replace all the code in main() with: const char *args[] = { "perl", "/opt/suid_scripts/whatever.pl" } return execv("/usr/bin/perl", (char * const*)args); Them make sure /opt/suid_scripts and everything in it are read-only for non-root users. Otherwise, someone could swap in anything for whatever.pl . 
In addition, beware that many scripting languages allow environment variables to change the way they execute a script . For example, an environment variable might cause a library supplied by the caller to be loaded, allowing the caller to execute arbitrary code as root. Thus, unless you know that both the interpreter and the script itself are robust against all possible environment variables, DON'T DO THIS . So what should I do instead? A safer way to allow a non-root user to run a script as root is to add a sudo rule and have the user run sudo /path/to/script . Sudo strips most environment variables, and also allows the administrator to finely select who can run the command and with what arguments. See How to run a specific program as root without a password prompt? for an example.
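A sketch of such a sudo rule (edit it with visudo; the user name and script path here are placeholders):

    someuser ALL=(root) NOPASSWD: /usr/local/bin/myscript

The user then runs sudo /usr/local/bin/myscript and no setuid binary needs to exist at all.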
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/166817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
166,825
What rename command can I use which will delete the present filenames in a directory and replace them with alphanumeric filenames? Such as gg0001 , gg0002 , et cetera? And I'd like the command to work on files with any type of extension. And I'd like the extension to be retained. I'd prefer that the Perl rename be used. I've tried this command, but it just prepends the "gg" to the filenames whereas I want the original filenames replaced: rename 's/^/gg_/' *
You don't really need the rename command for this, you can do it directly in the shell: c=0; for i in *; do let c++; mv "$i" "gg_$(printf "%03d" $c)${i#"${i%.*}"}"; done The printf "%03d" $c will print the $c variable, padding it with 3 leading 0s as needed. The let c++ increments the counter ( $c ). The ${i#"${i%.*}"} extracts the extension. More on that here . I would not use rename for this. I don't think it can do calculations. Valid Perl constructs like s/.*/$c++/e fail and I don't see any other way to have it do calculations. That means you would still need to run it in a loop and have something else increment the counter for you. Something like: c=0;for i in *; do let c++; rename -n 's/.*\.(.*)/'$c'.$1/' "$i"; done But there's no advantage over using the simpler mv approach.
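If you want exactly the gg0001 style from the question (four digits, no underscore), only the prefix and the format string change; a sketch, best tried on a copy of the directory first:

    c=0; for i in *; do let c++; mv -- "$i" "gg$(printf "%04d" "$c")${i#"${i%.*}"}"; done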
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
166,832
I have a Gentoo server with LVM running on top of a RAID array that I have been using for a number of years. Recently I upgraded LVM to 2.02.109 (don't recall what version it was before) and received a message while upgrading: * Make sure to enable lvmetad in /etc/lvm/lvm.conf if you want* to enable lvm autoactivation and metadata caching. I understand that I can enable it by setting use_lvmetad = 1 in /etc/lvm/lvm.conf . But why would I need such a feature? My understanding is that it works with udev rules to keep LVM state in a cache so that LVM tools don't need to scan volumes to obtain that information. Is it just that my little array can't benefit from this kind of feature? Under what circumstances might I want/need to use it?
From this link : Normally, each LVM command issues a disk scan to find all relevant physical volumes and to read volume group metadata. However, if the metadata daemon is running and enabled, this expensive scan can be skipped ... This can save a significant amount of I/O and reduce the time required to complete LVM operations, particularly on systems with many disks. So you would run it for increased performance of LVM management and status operations, at the cost of startup performance and increased complexity. The level of performance increase is larger when there are more disks in the system.
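If you do decide to enable it, the change is small (a sketch; the daemon's service name varies by init system, so check your distribution): set use_lvmetad = 1 in /etc/lvm/lvm.conf and make sure the daemon is running before LVM commands are issued, e.g.

    rc-update add lvmetad boot && rc-service lvmetad start   # OpenRC
    systemctl enable --now lvm2-lvmetad.socket               # systemd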
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39030/" ] }
166,837
One thing that I frequently do is edit the most recently modified files, so instead of typing "ls -lr" and then "vim lastfile", I thought I would make some shortcuts in my ~/.bash_profile file: alias via="vim `ls -rt | tail -1`"alias vib="vim `ls -rt | tail -2 | head -1`"alias vic="vim `ls -rt | tail -3 | head -1`"alias vid="vim `ls -rt | tail -4 | head -1`"alias vie="vim `ls -rt | tail -5 | head -1`" The problem is that, weirdly enough, these commands don't work. They open some file that isn't one of the last, or even a file was deleted from the current directory (I wonder if there's some kind of file cache updating issue in the directory. This occurs on both my local machine and the cluster I work on). However, if I type vim `ls -rt | tail -1` directly, without using the alias, it works every time.
The problem is you need to quote the backticks in your alias definition. Double quotes ( " ) do not quote command substitution. You will need single quotes ( ' ). Use alias via='vim `ls -rt | tail -1`' Though you'd actually want: alias via='vim -- "$(ls -t | head -n 1)"' That is: use the modern form of command substitution ( $(...) ) while we are at it. quote it to disable the split+glob operator (otherwise it wouldn't work properly if the file name had IFS characters or wildcards (it still doesn't work if it has newline characters)). Use -- to mark the end of options for vim (otherwise, it wouldn't work for filenames starting with - or + ). Use ls -t | head instead of ls -rt | tail to get the result sooner. Do not use alias via="vim `ls -rt | tail -1`" If you do that the command substitution happens when you define the alias, not when you run it. Try typing alias via to see that the output is not actually alias via='vim `ls -rt | tail -1`' but rather alias via='vim <prematurely expanded output>' .
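A shell function avoids the alias quoting pitfall entirely, because its body is evaluated when you call it, not when you define it:

    via() { vim -- "$(ls -t | head -n 1)"; }
    vib() { vim -- "$(ls -t | head -n 2 | tail -n 1)"; }

The same caveat about newlines in file names applies.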
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21752/" ] }
166,853
General problem I want to write a script that interacts with the user even though it is in the middle of a chain of pipes. Concrete example Concretely, it takes a file or stdin , displays lines (with line numbers), asks the user to input a selection or line numbers, and then prints the corresponding lines to stdout . Let's call this script selector . Then basically, I want to be able to do grep abc foo | selector > myfile.tmp If foo contains blabcblafoo abc barquuxxyzzy abc then selector presents me (on the terminal, not in myfile.tmp !) with options 1) blabcbla2) foo abc bar3) xyzzy abcSelect options: after which I type in 2-3 and end up with foo abc barxyzzy abc as contents of myfile.tmp . I've got a selector script up and running, and basically it is working perfectly if I don't redirect input and output. So selector foo behaves like I want. However, when piping things together as in the above example, selector prints the presented options to myfile.tmp and tries to read a selection from the grepped input. My approach I've tried to use the -u flag of read , as in exec 4< /proc/$PPID/fd/0exec 4> /proc/$PPID/fd/1nl $INPUT >4read -u4 -p"Select options: " but this doesn't do what I hoped it would. Q: How do I get actual user interaction?
Using /proc/$PPID/fd/0 is unreliable: the parent of the selector process may not have the terminal as its input. There is a standard path that always refers to the current process's terminal: /dev/tty . nl "$INPUT" >/dev/ttyread -p"Select options: " </dev/tty or exec </dev/tty >/dev/ttynl "$INPUT"read -p"Select options: "
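Putting it together, a minimal selector sketch (it only handles space-separated line numbers, not ranges like 2-3):

    #!/bin/bash
    input=$(cat)                               # whatever was piped in
    nl <<<"$input" > /dev/tty                  # menu goes to the terminal, not stdout
    read -p "Select options: " sel < /dev/tty
    for n in $sel; do
        sed -n "${n}p" <<<"$input"             # chosen lines go to stdout
    done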
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20530/" ] }
166,860
I want to remove Nginx from my Amazon Linux EC2 server. How can I find it and remove/disable it? Nginx is listening on port 80, and I need access to that port. It would be preferable to not have to stop the process every server reboot. By the way, I tried this, but it didn't work: sudo rm -f -R /usr/local/nginx && rm -f /usr/local/sbin/nginx
If it's Amazon AMI Linux, first you need to stop the nginx service: sudo service nginx stop then you should disable it so it does not come back at boot: sudo chkconfig nginx off and, if you like, uninstall it: sudo yum remove nginx HTH
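Afterwards you can check that nothing is listening on port 80 any more, for example:

    sudo netstat -tlnp | grep ':80 '

If that prints nothing, the port is free for whatever you want to run there.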
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90826/" ] }
166,873
Example: This is {the multilinetext file }that wants{ to bechanged} anyway. Should become: This is that wants anyway. I have found some similar threads in the forum, but they don't seem to work with multi-line curly brackets. If possible, I would prefer some one-line method, like solutions based on grep, sed, awk... etc. EDIT: Solutions seem to be OK, but I have noticed that my original files include curly brackets nesting. So I am opening a new question. Thanks you everybody: How can I delete all text between nested curly brackets in a multiline text file?
$ sed ':again;$!N;$!b again; s/{[^}]*}//g' fileThis is that wants anyway. Explanation: :again;$!N;$!b again; This reads the whole file into the pattern space. :again is a label. N reads in the next line. $!b again branches back to the again label on the condition that this is not the last line. s/{[^}]*}//g This removes all expressions in braces. On Mac OSX, try: sed -e ':again' -e N -e '$!b again' -e 's/{[^}]*}//g' file Nested Braces Let's take this as a test file with lots of nested braces: a{b{c}d}e1{2}3{}5 Here is a modification to handle nested braces: $ sed ':again;$!N;$!b again; :b; s/{[^{}]*}//g; t b' file2ae135 Explanation: :again;$!N;$!b again This is the same as before: it reads in the whole file. :b This defines a label b . s/{[^{}]*}//g This removes text in braces as long as the text contains no inner braces. t b If the above substitute command resulted in a change, jump back to label b . In this way, the substitute command is repeated until all brace-groups are removed.
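If Perl is available, the same multi-line removal can be written as a one-liner that slurps the whole file (-0777 reads it as a single record, so a brace group may span lines):

    perl -0777 -pe 's/\{[^}]*\}//g' file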
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166873", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
166,879
This question comes from How can I delete all text between curly brackets in a multiline text file? (just the same, but without the requirements for nesting). Example: This is {{the multilinetext} file }that wants{ to {bechanged}} anyway. Should become: This is that wants anyway. Is it possible to do this with some sort of one-line bash command (awk, sed, perl, grep, cut, tr... etc)?
$ sed ':again;$!N;$!b again; :b; s/{[^{}]*}//g; t b' file3This is that wants anyway. Explanation: :again;$!N;$!b again This reads in the whole file. :again is a label. N reads in the next line and $!N reads in the next line on the condition that we are not already at the last line. $!b again branches back to the again label on the condition that this is not the last line. :b This defines a label b . s/{[^{}]*}//g This removes text in braces as long as the text contains no inner braces. t b If the above substitute command resulted in a change, jump back to label b . In this way, the substitute command is repeated until all brace-groups are removed.
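A Perl equivalent of the same repeat-until-stable idea, for comparison (slurp the file, then keep deleting innermost brace groups until a pass removes nothing):

    perl -0777 -pe '1 while s/\{[^{}]*\}//g' file3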
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166879", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
166,899
So a paragraph is defined by having an empty line at the end. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc et nisi tristique, facilisis orci nec, pellentesque tortor. Suspendisse mattis, would end up as Nunc et nisi tristique, facilisis orci nec, pellentesque tortor. Suspendisse mattis,
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16814/" ] }
166,908
COMPREPLY by default returns a space separated list of words, but I'd like to have it return a single word per line. I've tried putting newlines at the ends of the words and have looked through the docs for both compgen and complete but can't find anything. Is this possible? EDIT: Sorry I really explained that poorly. I have a script that is bound to an autocomplete function via complete -F. When the user runs the script, hitting tab twice will show a list of possible options via compgen. Right now the function has lines of code like this: COMPREPLY=( $(compgen -W '$( ls ~/work/dev/jobs/ | cat )' -- $curword ) ) When the user hits tab though, these directories are displayed like: directory0 directory1 directory2 directory3 but i would like them displayed like: directory0directory1directory2directory3 I posted a similar thread to /r/bash and someone suggested doing bind 'set completion-display-width 0' which works, and then I can unset it with bind 'set completion-display-width -1' The issue now is that if I unset it before the complete function returns, it has no effect, so I unset it in the script after the user has pressed enter. This works fine, but if the user starts using the autocomplete, and then changes their mind and delets what they had entered and were to return to the shell, completion-display-width would still be set to 0. Is there another way to go about this?
I'm the one that suggested changing the completion-display-width readline variable at /r/bash , but then you didn't specify you only wanted it to work on this one completion function. Anyway, in a completion function, you can detect whether it's triggered by TAB (COMP_TYPE == 9) or by TAB TAB (COMP_TYPE == 63), and if the latter is the case, you could pad the results with spaces so they fill the entire width of the terminal. It's the least hackish thing I can think of. It would look something like this: _foo_complete() { local i file files files=( ~/work/dev/jobs/"$2"* ) [[ -e ${files[0]} || -L ${files[0]} ]] || return 0 if (( COMP_TYPE == 63 )); then for file in "${files[@]}"; do printf -v 'COMPREPLY[i++]' '%*s' "-$COLUMNS" "${file##*/}" done else COMPREPLY=( "${files[@]##*/}" ) fi}complete -F _foo_complete foo On a side note, you really shouldn't parse ls output .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28194/" ] }
166,920
What is the difference between the following? Host foo ProxyCommand ssh example.com -- /usr/bin/nc %h %p 2> /dev/null and Host foo ProxyCommand ssh -W %h:%p example.com Which one should I prefer when? Is either of them faster or more efficient in some way?
The two settings do the same thing. The -W option was added in 2010 and is described as a “netcat mode”. Use ssh -W if you don't need compatibility with versions of OpenBSD prior to 4.7 or with portable OpenSSH prior to 5.5 (I think). Use nc if you do need to support older versions of OpenSSH. ssh -W is preferable if available because it's marginally more efficient and doesn't require a separate utility to be installed.
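On OpenSSH 7.3 and newer there is an even shorter spelling of the -W form, ProxyJump (ssh -J on the command line):

    Host foo
        ProxyJump example.com

All three variants set up the same kind of forwarded connection; ProxyJump is simply the most convenient where it is available.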
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6252/" ] }
166,976
What command can I use to create zip s with a file number limit? I have a folder (no subfolders) of, say, 5000 files, so I would want a command that could divide that number and create 10 individual zip archives, each consisting of no more than 500 files. I also don't want the resulting 10 zip files to be connected with each other, so that I can open them individually and won't need to open all 10 at the same time.
You can use GNU parallel to do that as it can limit the number of elements to a job as well as provide a job number (for a unique zip archive name): $ touch $(seq 20)$ find . ! -name "*.zip" -type f -print0 | parallel -0 -N 5 zip arch{#} {} adding: 1 (stored 0%) adding: 10 (stored 0%) adding: 11 (stored 0%) adding: 12 (stored 0%) adding: 13 (stored 0%) adding: 14 (stored 0%) adding: 15 (stored 0%) adding: 16 (stored 0%) adding: 17 (stored 0%) adding: 18 (stored 0%) adding: 19 (stored 0%) adding: 2 (stored 0%) adding: 20 (stored 0%) adding: 3 (stored 0%) adding: 4 (stored 0%) adding: 5 (stored 0%) adding: 6 (stored 0%) adding: 7 (stored 0%) adding: 8 (stored 0%) adding: 9 (stored 0%)$ ls1 11 13 15 17 19 20 4 6 8 arch1.zip arch3.zip10 12 14 16 18 2 3 5 7 9 arch2.zip arch4.zip The option -N 5 limits the number of files to 5 per archive and is presented to zip in place of {} The {#} (verbatim, not to be replaced by you during the invocation), is replaced by the job number, resulting in arch1.zip , arch2.zip etc. The -print0 option to find and -0 option to parallel in tandem make sure that filenames with special characters are correctly handled.
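If GNU parallel is not installed, the same batching can be done with split plus zip's -@ option, which reads the list of files to add from stdin (a sketch; it assumes no newlines in file names):

    find . -maxdepth 1 -type f ! -name '*.zip' -print | split -l 500 - batch_
    n=0
    for b in batch_*; do
        n=$((n+1))
        zip "arch$n.zip" -@ < "$b"
        rm -- "$b"
    done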
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/166976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77038/" ] }
166,985
While there seem to be a lot of questions regarding dual booting Windows and Linux, I did not see one that seemed to cover this problem. That said, I believe a lot of the problems may be fixed in a similar manner. In Fedora fc20 with latest patches as of 11/9/2014, grub gives the following two errors when selecting the automatically generated Windows Bootloader entry. > error: file '/EFI/Microsoft/Boot/bootmgfw.efi' not found > error: you need to load the kernel first Why is this happening and how do I fix it?
The obvious answer is this is happening because grub can not find the windows boot loader. The less obvious answer is because the grub configuration file does not properly specify the root for the windows bootloader. The default operation seems to leave that line out. While it would be somewhat complicated to fix the default Windows Bootloader, the following instructions will allow you to have the system create a second one that works properly. If you are using Fedora fc20, or another similarly configured system that is running grub2, the following steps should fix your problem provided you have not damaged your windows bootloader partition. 1) Find out which partition your windows bootloader is on. [root@localhost]# fdisk -lDisk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: gptDisk identifier: D733242D3-33B9-4C33-B33F-2C333DC52333Device Start End Size Type/dev/sda1 2048 206847 100M EFI System/dev/sda2 206848 2050047 900M Windows recovery environment/dev/sda3 2050048 2312191 128M Microsoft reserved/dev/sda4 2312192 988518399 470.3G Microsoft basic data/dev/sda5 1911560192 1953523711 20G Windows recovery environment/dev/sda6 988518400 989337599 400M EFI System/dev/sda7 989337600 991385599 1000M Microsoft basic data/dev/sda8 991385600 1911560191 438.8G Linux LVMDisk /dev/mapper/fedora-swap: 7.8 GiB, 8396996608 bytes, 16400384 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisk /dev/mapper/fedora-root: 431 GiB, 462728200192 bytes, 903766016 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytes 2) Find out the UUID of that partition. [root@localhost]# blkid /dev/sda1/dev/sda1: LABEL="SYSTEM" UUID="1234-567A" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="0c33e3ab-d3dc-3af3-333d-a33eee3c333c" Note: Fedora will automatically generate a new configuration file when you do things like update the kernel so while you can manually edit the grub.cfg file, it is less work in the long term to edit the configuration stub files that are used to generate the grub.cfg file. 3) Add the menuentry text to the end of the /etc/grub.d/40_custom file. Use a text editor of your choice but you must be root to do so. I used vi. Make sure you substitute the UUID from step 2 for the 1234-567A shown here. [root@localhost]# vi /etc/grub.d/40_custommenuentry 'My Working Windows Bootloader' { search --no-floppy --fs-uuid --set=root '1234-567A' chainloader /EFI/Microsoft/Boot/bootmgfw.efi boot} 4) Now generate the actual config file using the grub2-mkconfig command. [root@localhost]# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfgGenerating grub.cfg ...Found linux image: /boot/vmlinuz-3.16.7-200.fc20.x86_64Found initrd image: /boot/initramfs-3.16.7-200.fc20.x86_64.imgFound linux image: /boot/vmlinuz-0-rescue-0b156afaadc545779646d809437ed977Found initrd image: /boot/initramfs-0-rescue-0b156afaadc545779646d809437ed977.imgFound Windows Boot Manager on Microsoft/Boot/bootmgfw.efidone NOTE: Running this command by specifying /etc/grub2-efi.cfg as the output file deletes the symbolic link that is normally there and creates a new file instead of updating the actual config file. 5) You are done. 
When you reboot, you should now have access to your Windows and GNU/Linux operating systems.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90805/" ] }
166,999
I need to create a script that outputs the internal IP address that is configured on the default interface (the interface used by the default route).
Here's another slightly terser method using procfs (assumes you're using Linux): default_iface=$(awk '$2 == 00000000 { print $1 }' /proc/net/route)ip addr show dev "$default_iface" | awk '$1 ~ /^inet/ { sub("/.*", "", $2); print $2 }' This returns both the IPv4 and (if available) the IPv6 address of the interface. You can change the test if you only want one or the other (look for inet for IPv4, and inet6 for IPv6). $ default_iface=$(awk '$2 == 00000000 { print $1 }' /proc/net/route)$ ip addr show dev "$default_iface" | awk '$1 ~ /^inet/ { sub("/.*", "", $2); print $2 }'10.0.2.15fe80::a00:27ff:fe45:b085 $ ip addr show dev "$default_iface" | awk '$1 == "inet" { sub("/.*", "", $2); print $2 }'10.0.2.15 $ ip addr show dev "$default_iface" | awk '$1 == "inet6" { sub("/.*", "", $2); print $2 }'fe80::a00:27ff:fe45:b085
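The same result can be had from iproute2 instead of reading /proc directly (a sketch, IPv4 only, assuming a normal default route with a gateway):

    default_iface=$(ip -4 route show default | awk '{print $5; exit}')
    ip -4 -o addr show dev "$default_iface" | awk '{sub("/.*", "", $4); print $4}'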
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/166999", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84528/" ] }
167,004
All shell builtins share the same manual page: BUILTIN(1) BSD General Commands Manual BUILTIN(1)NAME builtin, ! etc. Then there is a little text describing what shell builtins are, and then a list that looks something like this: Command External csh(1) sh(1) ! No No Yes % No Yes No But if we do man grep we get sections such as Bugs History See also Standards Description etc. Don't shell builtins have their own history, description and arguments like -A or -r ? Why isn't that provided in the manual pages and how would I learn to use them correctly and efficiently?
Because builtins are part of the shell. Any bugs or history they have are bugs and history of the shell itself. They are not independent commands and don't exist outside the shell they are built into. The equivalent, for bash at least, is the help command. For example: $ help whilewhile: while COMMANDS; do COMMANDS; done Execute commands as long as a test succeeds. Expand and execute COMMANDS as long as the final command in the `while' COMMANDS has an exit status of zero. Exit Status: Returns the status of the last command executed. All bash builtins have help pages. Even help itself: $ help helphelp: help [-dms] [pattern ...] Display information about builtin commands. Displays brief summaries of builtin commands. If PATTERN is specified, gives detailed help on all commands matching PATTERN, otherwise the list of help topics is printed. Options: -d output short description for each topic -m display usage in pseudo-manpage format -s output only a short usage synopsis for each topic matching PATTERN Arguments: PATTERN Pattern specifiying a help topic Exit Status: Returns success unless PATTERN is not found or an invalid option is given. Inspired by @mikeserv's sed script, here's a little function that will print the relevant section of a man page using Perl. Add this line to your shell's initialization file ( ~/.bashrc for bash): manperl(){ man "$1" | perl -00ne "print if /^\s*$2\b/"; } Then, you run it by giving it a man page and the name of a section: $ manperl bash while while list-1; do list-2; done until list-1; do list-2; done The while command continuously executes the list list-2 as long as the last command in the list list-1 returns an exit status of zero. The until command is identical to the while command, except that the test is negated; list-2 is exe‐ cuted as long as the last command in list-1 returns a non-zero exit status. The exit status of the while and until commands is the exit status of the last command executed in list-2, or zero if none was executed.$ manperl grep SYNOPSISSYNOPSIS grep [OPTIONS] PATTERN [FILE...] grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...]$ manperl rsync "-r" -r, --recursive This tells rsync to copy directories recursively. See also --dirs (-d).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
167,013
If I have a file with following content: 00010002000300040132013701380141 How can I get a random permutation of them in bash?
shuf is the command you are looking for. From man shuf , -n, --head-count=COUNT output at most COUNT lines So, for example to get 4 random lines from the file, you could use the command as, shuf -n 4 file You could even use the below approach. head -$((${RANDOM} % `wc -l < file` + 1)) file | tail -1 Where, the final pipe to tail will specify the number of lines you need in the output. References https://stackoverflow.com/questions/448005/whats-an-easy-way-to-read-random-line-from-a-file-in-unix-command-line
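Note that -n picks a random subset; for a random permutation of all the lines, which is what the question literally asks for, run shuf without a count:

    shuf file
    shuf -o file file     # shuffle the file in place (GNU shuf reads all input before writing)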
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
167,038
Is there any way to know the size of L1, L2, L3 caches and RAM in Linux?
If you have lshw installed: $ sudo lshw -C memory Example $ sudo lshw -C memory... *-cache:0 description: L1 cache physical id: a slot: Internal L1 Cache size: 32KiB capacity: 32KiB capabilities: asynchronous internal write-through data *-cache:1 description: L2 cache physical id: b slot: Internal L2 Cache size: 256KiB capacity: 256KiB capabilities: burst internal write-through unified *-cache:2 description: L3 cache physical id: c slot: Internal L3 Cache size: 3MiB capacity: 8MiB capabilities: burst internal write-back *-memory description: System Memory physical id: 2a slot: System board or motherboard size: 8GiB *-bank:0 description: SODIMM DDR3 Synchronous 1334 MHz (0.7 ns) product: M471B5273CH0-CH9 vendor: Samsung physical id: 0 serial: 67010644 slot: DIMM 1 size: 4GiB width: 64 bits clock: 1334MHz (0.7ns) *-bank:1 description: SODIMM DDR3 Synchronous 1334 MHz (0.7 ns) product: 16JTF51264HZ-1G4H1 vendor: Micron Technology physical id: 1 serial: 3749C127 slot: DIMM 2 size: 4GiB width: 64 bits clock: 1334MHz (0.7ns)
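A few other quick ways to read the same numbers without installing anything extra (each assumes a reasonably recent Linux):

    lscpu | grep -i cache                                     # L1/L2/L3 sizes
    grep . /sys/devices/system/cpu/cpu0/cache/index*/size     # per-level sizes from sysfs
    getconf -a | grep -i CACHE                                # cache geometry known to libc
    free -h                                                   # total RAM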
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90929/" ] }
167,042
I have the following command to delete files in a folder that are 15 days or older: find /var/www/App/app/var/sessions* -mtime +15 -exec rm {} \; what is the best way to speed this up and run it on the background? I heard rm is a pretty slow operation
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73847/" ] }
167,058
How do I correctly round IEEE 754 floating point numbers on the command line? I want to specify the precision of the output number - the count of fractional digits. Rounding 6.66 to precision 1 should give 6.7 , for example. More in the table below: Value Precision Rounded6.66 0 76.66 1 6.76.66 2 6.666.66 3 6.6606.666 3 6.6666.6666 3 6.667 It should be usable in an interactive shell, but ideally robust enough for using it in production shell scripts.
Rounding floating point numbers What does "rounding a floating point number" mean? That's easy, obviously... Where's my math book from school... No, we already know nothing related to floating point numbers is easy: For a start, there are multiple rounding modes: Rounding upwards? Rounding downwards? Rounding to zero? Rounding to nearest - ties to even? Rounding to nearest - ties away from zero? How to handle the corner cases? How to find out which are the corner cases? OK, looks like we better use an implementation of the IEEE 754 standard, and let our system take care of that. To round a floating point number in the shell, based on standard floating point arithmetic, we need three steps: Convert the input text from a command line argument to a standard floating point number. Round the floating point number using the normal IEEE 754 implementation. Format the number as a string for output. Turns out that the shell command printf can do all of this. It can be used to print numbers according to a format specification as described in man 3 printf . The numbers are rounded implicitly in the standard way if it is required for the output format: The command Round x to p digits precision with input as command line arguments: printf "%.*f\n" "$p" "$x" Or in a shell pipeline, with input of x on standard input, and p as argument: echo "$x" | xargs printf "%.*f\n" "$p" Examples: $ printf '%.*f\n' 0 6.667$ printf '%.*f\n' 1 6.666.7$ printf '%.*f\n' 2 6.666.66$ printf '%.*f\n' 3 6.666.660$ printf '%.*f\n' 3 6.6666.666$ printf '%.*f\n' 3 6.66666.667 Bad traps Beware the locale! It specifies the separator between the integral and fraction part - the . , as you may expect. But see yourself what happens in a German locale, for example: $ LC_ALL=de_DE.UTF-8 printf '%.*f\n' 3 6.66666,667 Yes, that's right 6,667 - six comma six six seven. That would mess up your script for sure. (But only for the two customers in Germany. Except for the developer's machines currently debugging for these customers.) More robust To make it more robust, use: LC_ALL=C /usr/bin/printf "%.*f\n" "$p" "$x" or echo "$x" | LC_ALL=C xargs /usr/bin/printf "%.*f\n" "$p" This also uses /usr/bin/printf instead of the shell builtin of bash or zsh to work around minor inconsistencies in implementation of the printf variants, and prevent a very dirty effect when, in a German locale, LC_ALL is set, but not exported. Then, the builtin uses , , and /usr/bin/printf uses . ... See also %g for rounding to a specified number of significant digits.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63775/" ] }
167,071
I have the following path : $ vim /path/to/some/where If I press Ctrl + w , It removes entire text to first space. The result would be : $ vim How do I delete just the word next of last slash with comination keys?
Try Alt + Backspace . From bash documentation : backward-kill-word (M-DEL) Kill the word behind point. Word boundaries are the same as backward-word.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52995/" ] }
167,077
Shouldn't it be possible? Let's assume I don't need a response, I just want to send a request. Shouldn't we be able to alter tcp/ip headers, because our computer sends it? I am probably missing something, just really curious, learning about it in the uni.
You can using the -H/--header argument: You could spoof your ip address: curl --header "X-Forwarded-For: 192.168.0.2" http://example.com Example: client $ curl http://webhost.co.uk web host $ tailf access.log | grep 192.168.0.54 192.168.0.54 - - [10/Nov/2014:15:56:09 +0000] "GET / HTTP/1.1" 200 14328 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" client with ip address changed $ curl --header "X-Forwarded-For: 192.168.0.99" http://webhost.co.uk web host $ tailf access.log | grep 192.168.0.99 192.168.0.99 - - [10/Nov/2014:15:56:43 +0000] "GET / HTTP/1.1" 200 14328 "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" man curl -H/--header <header> (HTTP) Extra header to use when getting a web page. You may specify any number of extra headers. Note that if you should add a custom header that has the same name as one of the internal ones curl would use, your externally set header will be used instead of the internal one. This allows you to make even trickier stuff than curl would normally do. You should not replace internally set headers without knowing perfectly well what you’re doing. Remove an internal header by giving a replacement without content on the right side of the colon, as in: -H "Host:". References: Modify_method_and_headers
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90963/" ] }
167,084
I have hundreds of pdf files and html files in a directory, and I want to know the total size of the pdf files. With the command du -ch /var/foo I can see the total file size, but I only need the last line, the total. If the directory contained only pdf files I could use the -s option, but that can't be used this time. How can I get only the total size of a particular file type?
With GNU du (i.e. on non-embedded Linux or Cygwin), you can use the --exclude option to exclude the files you don't want to match.
du -s --exclude='*.html' /var/foo
If you want to positively match *.pdf files, you'll need to use some other method to list the files, and du will at least display one output line per argument, plus a grand total with the option -c . You can call tail to remove all but the last line, or sed to remove the word “total” as well. To enumerate the files in that one directory, use wildcards in the shell.
du -sc /var/foo/*.pdf | tail -n1
du -sc /var/foo/*.pdf | sed -n '$s/\t.*//p'
If you need to traverse files in subdirectories as well, use find , or use a **/ pattern if your shell supports that. For **/ , in bash, first run shopt -s globstar , and note that bash versions up to 4.2 will traverse symbolic links to directories; in zsh, this works out of the box.
du -sc /var/foo/**/*.pdf | tail -n1
An added complication with the find version is that if there are too many files, find will run du more than once, to keep under the command line length limit. With the wildcard method, you'll get an error if that happens (“command line length limit exceeded”). The following code assumes that you don't have any matching file name containing a newline.
find /var/foo -name '*.pdf' -exec du -sc {} + | awk '$2 == "total" {total += $1} END {print total}'
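If the apparent file size (rather than disk usage) is good enough, a GNU-specific alternative - a sketch, it assumes GNU find and coreutils - lets find print each pdf's size in bytes and awk sum them, with no per-file output to trim:
find /var/foo -type f -name '*.pdf' -printf '%s\n' |
    awk '{ total += $1 } END { print total+0 }'
Pipe the result through numfmt --to=iec if you want it human-readable, e.g. 1.3G instead of a raw byte count.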
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44001/" ] }
167,103
I want to use two ethernet cards on my machine. So I've physically inserted the new ethernet card in to my machine. However when enter the ifconfig -a command Ubuntu cannot find or detect the new device. So I have tried to bring up the new device in the usual manner and I received the error: ifconfig eth1 upeth1: error fetching interface information: Device not found This is when i enter ifconfig -a command eth0 Link encap:Ethernet HWaddr 00:17:31:53:b7:9c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:17 Memory:cffe0000-d0000000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2249 errors:0 dropped:0 overruns:0 frame:0 TX packets:2249 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:362973 (362.9 KB) TX bytes:362973 (362.9 KB)wlan0 Link encap:Ethernet HWaddr 64:70:02:25:6d:e4 inet addr:192.168.150.5 Bcast:192.168.150.255 Mask:255.255.255.0 inet6 addr: fe80::6670:2ff:fe25:6de4/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:14219 errors:0 dropped:0 overruns:0 frame:0 TX packets:18637 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8525631 (8.5 MB) TX bytes:3548272 (3.5 MB) Ubuntu cannot find/or detect the newly installed device. I'm using a Silan SC92031 PCI Fast Ethernet Adapter . I've checked for a module that correspond to it and I found `sc92031.ko'. I tried to load that module using modprobe command `modprobe -v sc92031` Afterwards I checked whether that module had been successfully loaded using this command: #cat /proc/modules |grep sc92031 sc92031 18108 0 - Live 0xf8436000 I rebooted my machine. After booting Ubuntu still cannot find or detect the new network adapter. It appears that the module has disappeared or has been unloaded automatically. lspci00:00.0 Host bridge: Intel Corporation 82945G/GZ/P/PL Memory Controller Hub (rev 02)00:02.0 VGA compatible controller: Intel Corporation 82945G/GZ Integrated Graphics Controller (rev 02)00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 01)00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 01)00:1c.1 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 01)00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 01)00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 01)00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 01)00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 01)00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 01)00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)00:1f.2 IDE interface: Intel Corporation NM10/ICH7 Family SATA Controller [IDE mode] (rev 01)00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 01)02:00.0 Ethernet controller: Intel Corporation 82573L Does anyone have any ideas what I can do next to diagnose this issue?
lspci doesn't show your network adapter. That means it isn't connected to the PCI bus. All PCI peripherals appear in the lspci output, regardless of whether you have a driver for them: it's a PCI feature. The text descriptions (like “Intel Corporation 82573L”) are from a database, but the controller would at least appear as “Ethernet controller: Device 1904:2031” or some such. So there's a hardware problem, either your adapter doesn't work, or the PCI slot doesn't work, or there's a bad connection or an incompatibility or something. Once you solve the hardware problem, the right driver should be loaded automatically and the interface will appear in ifconfig -a . If the driver isn't loaded automatically but the hardware is present, that would mean that the driver doesn't recognize your hardware.
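To gather a little more evidence before reseating or replacing the card, a few generic checks may help; this is only a sketch of standard tools, nothing here is specific to the Silan adapter, and the 1904:2031 ID is the example used above:
sudo update-pciids               # refresh the PCI ID database so lspci can name unknown devices
lspci -nn                        # numeric [vendor:device] IDs; the card should appear even without a driver
dmesg | grep -i -e pci -e eth    # kernel messages about PCI enumeration and network interfaces
If the card still does not appear in lspci -nn after trying another slot, it is almost certainly a hardware problem rather than a driver one.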
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66786/" ] }
167,104
It is easy to redirect stdout or stderr to the same output (a file or one of the 2 std outputs) with a 1>&1 , >&2 and/or >file . Is there a way to send the same output to both std outputs in KSH (like | tee File but with &2 as the file)? I am trying to have the stream go to stdout AND to stderr at the same time (duplicate the output, one copy on each channel) but without using a temporary object (variable or file). For now I use a temporary variable but want to avoid this: Tempo="$( #all my code with output to stdout and stderr )" echo "${Tempo}" echo "${Tempo}" >&2 Question asked on StackOverflow with the suggestion to ask it here.
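One way to duplicate a stream onto both stdout and stderr without a temporary variable or file is tee. This is only a sketch: some_command stands for whatever produces the output, and it assumes a system where /dev/stderr (or /proc/self/fd/2) is available, as on Linux:
{
  # all the code whose output should go to both channels
  some_command
} | tee /dev/stderr
In ksh93 a process substitution works too, where /dev/stderr is not usable:
{ some_command; } | tee >(cat >&2)
Note that the copy goes through a pipe, so the two streams may interleave slightly differently than the original output would.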
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50738/" ] }
167,112
On my Fedora 20 system I use scp a lot, and this is the second time I have encountered this. When I run this command: scp -r -P PORT user@host:/home/user/something/{file1,folder1,folder2,folder3,folder4} folder/folder2/ it asks me for the password for each file/directory it transfers: user@host's password: "password here" Question: What is happening here? Is this normal? I would think this is very peculiar behavior.
Your local shell (probably bash) is expanding user@host:/home/user/something/{file1,folder1,folder2,folder3,folder4} into: user@host:/home/user/something/file1 user@host:/home/user/something/folder1 user@host:/home/user/something/folder2 user@host:/home/user/something/folder3 user@host:/home/user/something/folder4 Instead, you can do: scp -r -P PORT user@host:"/home/user/something/file1 /home/user/something/folder1 /home/user/something/folder2 /home/user/something/folder3 /home/user/something/folder4" folder/folder2/ or, if you know user's login shell on the remote end is bash, you can use brace expansion too: scp -r -P PORT user@host:"/home/user/something/{file1,folder1,folder2,folder3,folder4}" folder/folder2/ to have the remote shell split the string into arguments instead of the local shell.
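If you do end up running several scp invocations, repeated password prompts can be avoided with OpenSSH connection multiplexing; a sketch, where the ControlPath location is just an example:
# open a master connection once - asks for the password a single time
ssh -fNM -o ControlPath=~/.ssh/cm-%r@%h:%p -p PORT user@host
# later scp/ssh commands reuse that connection without prompting
scp -o ControlPath=~/.ssh/cm-%r@%h:%p -P PORT -r user@host:/home/user/something/folder1 folder/folder2/
# close the master when done
ssh -o ControlPath=~/.ssh/cm-%r@%h:%p -O exit user@host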
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
167,148
If I have a string "1 2 3 2 1" - or an array [1,2,3,2,1] - how can I select the unique values, i.e. "1 2 3 2 1" produces "1 2 3" or [1,2,3,2,1] produces [1,2,3] Similar to uniq but uniq seems to work on whole lines, not patterns within a line...
If you are using zsh:
$ array=(1 2 3 2 1)
$ echo ${(u)array[@]}
1 2 3
or (if the KSH_ARRAYS option is not set) even
$ echo ${(u)array}
1 2 3
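In bash there is no (u) flag, but a similar effect - offered here as a sketch - is to print the elements one per line and let awk keep only the first occurrence, which also preserves the original order:
array=(1 2 3 2 1)
printf '%s\n' "${array[@]}" | awk '!seen[$0]++' | paste -sd' ' -
# -> 1 2 3
For the string form "1 2 3 2 1", replace the printf with: echo "1 2 3 2 1" | tr ' ' '\n'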
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
167,151
I need to subtract two timestamps in the shell. The time format is hh:mm:ss. I used the code below to get the times: cat /var/log/kern.log |grep usb |tail -2| awk '{print $3}' The output of the above code is 18:23:24 and 18:20:20. How can I find the difference in seconds?
I would take it a step further (inspired by this post) and convert each hh:mm:ss to seconds first:
# 18:23:24 --> 66204
grep usb /var/log/kern.log|tail -2|awk '{print $3}'|awk -F: '{print ($1 * 3600) + ($2 * 60) + $3 }'
So after that I had:
66204
66020
You could then do:
echo $((66204-66020)) # => 184
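The two steps can also be combined into one pipeline; a sketch that assumes exactly two matching lines and that both timestamps fall on the same day:
grep usb /var/log/kern.log | tail -2 | awk '{print $3}' |
    awk -F: 'NR==1 { a = $1*3600 + $2*60 + $3 }
             NR==2 { b = $1*3600 + $2*60 + $3; d = a - b; if (d < 0) d = -d; print d }'
# => 184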
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91015/" ] }
167,158
I connect to RHEL 7 via Putty. I want to have multitab functionality inside. But when I press "Ctrl-Shift-T" nothing happens. How can I get multitab functionality?
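PuTTY itself does not have tabs, so Ctrl-Shift-T (the new-tab shortcut of tabbed terminal emulators such as GNOME Terminal) has nothing to act on there. One workaround - described here only as a sketch, since it changes the workflow rather than adding tabs to PuTTY - is to run a terminal multiplexer inside the SSH session; tmux is packaged for RHEL 7, and screen works as well:
tmux                 # start a session on the RHEL 7 host
# inside tmux (default prefix is Ctrl-b):
#   Ctrl-b c         create a new window, the closest thing to a new tab
#   Ctrl-b n / p     switch to the next / previous window
#   Ctrl-b d         detach; "tmux attach" later resumes where you left off
Alternatively, simply open a second PuTTY window, or use a tabbed wrapper application around PuTTY on the Windows side.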
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167158", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86978/" ] }
167,165
I configured Qemu's grub the following way: GRUB_TERMINAL="serial console" GRUB_SERIAL_COMMAND="serial" GRUB_CMDLINE_LINUX="..console=ttyS0" and run the qemu process with the -nographic command line option. This lets me use the current terminal for the serial console and the qemu monitor console. However, now, anytime I press Ctrl + C inside the running VM, it is intercepted by qemu and shuts the process down. How am I supposed to pass Ctrl + C , or any other keystroke involving Ctrl, through to the VM?
In your shell, before you run qemu, run "stty intr ^]" to change the interrupt key from ^c to ctrl-] . That way, ctrl-c will be passed through to qemu, but you can still interrupt qemu itself by pressing ctrl-]
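To avoid being left with a nonstandard interrupt key after qemu exits, a sketch that saves and restores the terminal settings around the run (the qemu command line is abbreviated to whatever you normally use):
saved=$(stty -g)                     # remember the current terminal settings
stty intr ^]                         # Ctrl-] interrupts; Ctrl-C now reaches the guest
qemu-system-x86_64 -nographic ...    # your usual qemu invocation
stty "$saved"                        # restore the original settings (interrupt back to Ctrl-C)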
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167165", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86215/" ] }
167,181
I have a few directories of ~10,000 files. What's the quickest way to search each file and return the filename if the second line contains a specific string? Edited for clarity
awk 'FNR==2 {if (/some string/) print FILENAME; nextfile}' ./* Some awks don't have "nextfile".
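If the files are spread across subdirectories, or there are too many of them for a single glob, a variant with the same FNR==2 logic can be fed by find; a sketch:
find . -type f -exec awk 'FNR==2 { if (/some string/) print FILENAME; nextfile }' {} +
As noted above, nextfile needs a reasonably modern awk; dropping it still gives the correct output, it just reads every file to the end instead of stopping after the second line.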
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/167181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91027/" ] }
167,201
Might be a very silly question to many folks out there, but I'm dense! Ex: Applying predefined layouts: C-a M-1 switch to even-horizontal layoutC-a M-2 switch to even-vertical layoutC-a M-3 switch to main-horizontal layoutC-a M-4 switch to main-vertical layoutC-a M-5 switch to tiled layoutC-a space switch to the next layout What is M? If it's just shift+m then please take away my neckbeard right now. I thought it might be alt + key but that doesn't seem to be it.
It's the meta key . So M-1 is meta-1. (Just like how C-1 is control-1). Now, when you look at your keyboard, you probably notice the distinct lack of any key actually labeled meta, at least if you have a normal PC keyboard. Depending on how your keyboard layout is set up, meta is typically either the alt key or the logo (Windows) key. In short, C-a M-1 is telling you to press and hold Control and press A ; then release both; then press and hold Alt (or Windows ) and press 1 . Then release them, of course.
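Two practical asides, offered as a sketch since terminal setups vary: if your terminal does not pass Alt through as Meta, tapping Esc and then the key is usually treated as the same Meta combination. And if those C-a shortcuts come from a tmux cheat sheet, note that stock tmux uses C-b as its prefix; C-a usually means someone has rebound it screen-style, for example in ~/.tmux.conf:
set -g prefix C-a
unbind C-b
bind C-a send-prefix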
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/167201", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37656/" ] }
167,204
I'd like to take a program P that reads from stdin & writes to stdout , but connect it to nc or whatever such that it reads from a certain port and outputs to another port. # The reading is easy, here P reads from port 50505 nc -l 50505 | P How do I get it to write back to, say, port 60606?
If you mean that someone may open 2 TCP connections to your machine, one to port 50505 and another to port 60606, send data on the first one intended to be fed to P, and expect to read the output of P from the second TCP connection, then that would be: < /dev/null nc -q -1 -l 50505 | P | nc -l 60606 > /dev/null Or with socat : socat -u tcp-listen:50505,reuseaddr - | P | socat -u - tcp-listen:60606,reuseaddr For P to send its output back on the same connection: socat tcp-listen:50505,reuseaddr exec:P
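A quick way to sanity-check the single-connection variant, with cat standing in for P (purely an illustration - cat just echoes back whatever it receives):
# server: every client connecting to port 50505 talks to its own cat
socat tcp-listen:50505,reuseaddr,fork exec:cat &
# client: whatever you type comes straight back
nc localhost 50505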
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/167204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40428/" ] }