output | input | instruction
---|---|---
You don't need a VM. If your second computer also runs Linux, you just need to install LVM (the package is usually called lvm2) if you don't have it already installed, connect both drives (I don't have personal experience with USB NVMe enclosures, but these look fine if you don't have the option of an internal NVMe slot or a PCIe NVMe adapter), and you'll be able to access the logical volumes. You might need to run vgscan and vgchange -ay <vgname> to activate the volume group, but that should be all; after that you can simply mount the logical volumes and access the data.
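A minimal sketch of those steps, assuming the volume group turns out to be called vg0 and contains a logical volume named data (adjust the names to whatever vgscan and lvs report):
$ sudo apt install lvm2                     # or your distribution's equivalent package manager command
$ sudo vgscan                               # detect volume groups on the attached drives
$ sudo vgchange -ay vg0                     # activate the volume group
$ sudo lvs                                  # list its logical volumes
$ sudo mount -o ro /dev/vg0/data /mnt       # mount one read-only
$ cp -a /mnt/. /path/to/external-drive/     # copy the data off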
The only reason to use a VM would be if you also use LVM on your computer and you use the same name for your VG on both (the VG name is used as a unique identifier in LVM, so it won't allow you to activate two VGs with the same name).
Note: I assumed the reason for sending the laptop for repairs was not related to the storage. If you believe the data might be corrupted, then using TestDisk to recover the LVM and partitions and PhotoRec to recover the data would be the way to go (ideally from an image of the drives, not from the drives directly).
|
I have two 1TB NVMe SSDs that were set up using LVM in a manner where the two drives were combined to make up (roughly) one single 2TB logical volume.
The laptop that booted from these drives has been sent off for repairs and whether it can be fixed or not is uncertain.
In the meantime, I'd like to recover the data and transfer it to something more easily accessible (like an external hard drive).
How can I achieve that?
My current idea is to purchase these two NVMe drive enclosures, and connect both of the drives to the same computer. Then, use Virtual Machine Manager to share both of those USB devices to a newly created virtual machine. Maybe I'll be able to boot that virtual machine from these drives (as though the virtual machine was the laptop they previously booted). Or, maybe I can install some ISO image onto the virtual machine that can at least see the LVM volumes and all its sub-partitions.
Does this sound feasible? I've never had to do this before, so I'd appreciate any suggestions that may help me avoid some trial and error.
| Recovering Data from 2 NVMe SSDs that Were Setup Using LVM |
I don't have self-encrypting NVMe disks that I could test these commands with.
But based on how SAN LUN partitions can be rescanned, at least one of these ways might work:
echo 1 > /sys/class/nvme/nvme0/rescan_controller
or
partprobe /dev/nvme0n1 |
I've been spending some time working with self encrypting SSDs recently, and I am stuck on how to access drive contents after I've unlocked it.
Normally with this drive, you would load a Preboot Authentication image on startup that would unlock a partition containing the OS, and you would see the unlocked partition in /dev when the OS boots. However, I'm using the drive for secondary storage, and would like to be able to unlock it after the OS boots. Here's the behavior I'm looking to achieve:
1. /dev/nvme0 is present in the /dev directory, but you can't see any partitions because it's locked. Exactly what I'd expect!
2. Issue TCG Opal commands to unlock the drive. Confirm that the drive is unlocked using the TCG Identify command. Success!
3. ??? <----- This is where I'm stuck
4. /dev/nvme0n1p* for each partition on the drive is present in /dev
What do I need to do for step 3 in order to force a reread of the device so that I can see the partitions after the drive is unlocked? And is this something I can do programmatically, or would I have to invoke a script of some sort?
| How do I get a self encrypting NVMe SSD partition to show up in /dev after unlocking it? |
Critical Warning is a bit field read directly from the device itself; smartmontools only displays it to you, so you're looking for an interpretation that smartmontools itself doesn't make. Technically, smartctl does not display this warning because of any reason X or Y of its own; the drive firmware sets the failure bit internally based on its own considerations.
See NVM Express® Base Specification, Figure 208, Page 200 where this particular critical warning bit is described as such:Critical Warning: This field indicates critical warnings for the state of the controller. Each bit
corresponds to a critical warning type; multiple bits may be set to ‘1’. If a bit is cleared to ‘0’,
then that critical warning does not apply. Critical warnings may result in an asynchronous
event notification to the host. Bits in this field represent the state at the time the Get Log Page
command is processed and may not reflect the state at the time a related asynchronous event
notification, if any, occurs or occurred.
Bits: 2 | Definition: If set to ‘1’, then the NVM subsystem reliability has been degraded due to significant media related errors or any internal error that degrades NVM subsystem reliability.
(Bits are counted from 0 here, so Critical Warning (0x04) is bit 2.)
Is Percentage Used enough to set this bit? It's possible. I did a Google search for smartctl outputs of Samsung EVO SSDs, and the few I could find with Percentage Used >100% all had it set.
You still shouldn't get failed segments in a self-test, though. Perhaps run a long self-test, as well as a read-only test using badblocks (don't use -n or -w) or dd?
If in doubt: replace the drive.
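A sketch of those read-only checks, using the device names from the question (these only read from the drive; badblocks must not be given -n or -w):
$ sudo smartctl -t long /dev/nvme0                   # start the drive's long self-test
$ sudo smartctl -a /dev/nvme0                        # later: look at the Self-test Log section
$ sudo badblocks -sv /dev/nvme0n1                    # read-only surface scan
$ sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M status=progress   # alternative full read test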
| Having bought a used PC and now installing smartd on it, I'm getting smartd "Critical Warning (0x04): Reliability" emails about it (full pastebin). The Percentage Used: 112% is concerning. Is that enough for smartd to declare "Critical Warning (0x04): Reliability"?
This message was generated by the smartd daemon running on:
host name: kosh
DNS domain: [Empty]
The following warning/error was logged by the smartd daemon:
Device: /dev/nvme0, Critical Warning (0x04): Reliability
Device info:
Samsung SSD 970 EVO Plus 1TB, S/N:S4EWNM0R328374F, FW:2B2QEXM7, 1.00 TB
<snip>
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
- NVM subsystem reliability has been degraded
SMART/Health Information (NVMe Log 0x02)
<snip>
Percentage Used: 112%
<snip>
Error Information (NVMe Log 0x01, 16 of 64 entries)
Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message
0 4357 0 0x0010 0x4004 - 0 0 - Invalid Field in Command
Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
No Self-tests Logged
It looks to me like the "Invalid Field in Command" errors are red herrings since I'm running smartmontools version 7.4, where https://www.smartmontools.org/ticket/1222 has been fixed, so that should not cause tests to fail.
I then ran:
$ sudo smartctl -t short /dev/nvme0n1
and now sudo smartctl --all /dev/nvme0n1 ends with:
Self-test Log (NVMe Log 0x06)
Self-test status: No self-test in progress
Num Test_Description Status Power_on_Hours Failing_LBA NSID Seg SCT Code
0 Short Completed: failed segments 3535 - 1 2 - -
1 Short Completed: failed segments 3535 - 1 2 - -
But I don't know how to get more information about the "failed segments".
Is this enough for me to conclude that the disk is bad and needs replacement, or is there still hope for it?
| Is this drive dead?: Samsung SSD 970 EVO Plus 1TB |
Consider increasing the test size or simply removing the limit. Using --size=1024m means you're targeting a specific range of NAND flash, which can limit the bandwidth.
Opt for a time-based option and a smaller block size. By specifying --bs=1024m and the same size, you're essentially completing each loop with a single I/O from fio (though it will be divided into many smaller I/Os at the block layer), leading to potentially biased results.
Increase the I/O depth. It's unlikely that any SSD's spec numbers are based on a single job and a depth of 1.
Consider trying the following command:
sudo fio --time_based --runtime=300 --filename=/dev/nvme0n2 --ioengine=libaio --direct=1 --bs=1024k --iodepth=128 --numjobs=1 --rw=read --name=read --group_reporting |
TL;DR
For a very simple sequential read, the bandwidth FIO reports is much slower than the NVMe SSD's sequential read capability.
Main Text
Hello everyone,
I have been facing an issue while trying to achieve the maximum read bandwidth reported by the vendor for my Samsung 980 Pro 1T NVMe SSD. According to the Samsung product description, the SSD is capable of reaching read bandwidths of around 7 GB/s. However, despite my efforts, I have been unable to achieve this maximum read bandwidth.
Current Setup:SSD: Samsung 980 Pro 1T NVMe SSD
Connection: PCIe 4.0 port
Operating System: Linux Ubuntu
Current FIO Script and Results:
To test the read performance of the SSD, I have been using the FIO benchmarking tool with the following script:
$ sudo fio --loops=5 --size=1024m --filename=/dev/nvme0n2 --stonewall --ioengine=libaio --direct=1 --zero_buffers=1 --name=Seqread --bs=1024m --iodepth=1 --numjobs=1 --rw=read
Here are the results obtained from running the FIO script:
Seqread: (g=0): rw=read, bs=(R) 1024MiB-1024MiB, (W) 1024MiB-1024MiB, (T) 1024MiB-1024MiB, ioengine=libaio, iodepth=1
fio-3.28
Starting 1 process
Jobs: 1 (f=1)
Seqread: (groupid=0, jobs=1): err= 0: pid=1504682: Mon Oct 16 09:28:48 2023
read: IOPS=3, BW=3368MiB/s (3532MB/s)(5120MiB/1520msec)
slat (msec): min=151, max=314, avg=184.19, stdev=72.71
clat (msec): min=2, max=149, avg=119.59, stdev=65.39
lat (msec): min=300, max=316, avg=303.77, stdev= 7.33
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
| 30.00th=[ 148], 40.00th=[ 148], 50.00th=[ 148], 60.00th=[ 148],
| 70.00th=[ 150], 80.00th=[ 150], 90.00th=[ 150], 95.00th=[ 150],
| 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150],
| 99.99th=[ 150]
bw ( MiB/s): min= 2048, max= 4096, per=81.07%, avg=2730.67, stdev=1182.41, samples=3
iops : min= 2, max= 4, avg= 2.67, stdev= 1.15, samples=3
lat (msec) : 4=20.00%, 250=80.00%
cpu : usr=0.00%, sys=31.47%, ctx=405, majf=0, minf=262156
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=5,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=3368MiB/s (3532MB/s), 3368MiB/s-3368MiB/s (3532MB/s-3532MB/s), io=5120MiB (5369MB), run=1520-1520msec
Disk stats (read/write):
nvme0n2: ios=9391/0, merge=0/0, ticks=757218/0, in_queue=757218, util=93.39%
I would greatly appreciate any guidance or suggestions on how to optimize my FIO script to achieve the expected read bandwidth of around 7 GB/s. If there are any improvements or modifications that can be made to the script, please let me know. Thank you in advance for your assistance!
Please feel free to provide any additional information or insights that may be relevant to the issue at hand.
Note:
It should be PCIe 4.0 x4:
$ lspci -vv -s 5e:00.0
5e:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 32 bytes
Interrupt: pin A routed to IRQ 39
NUMA node: 0
IOMMU group: 84
Region 0: Memory at c5e00000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: nvme
Kernel modules: nvme
$ cat /sys/class/pci_bus/0000\:5e/device/0000\:5e\:00.0/max_link_width
4
$ cat /sys/class/pci_bus/0000\:5e/device/0000\:5e\:00.0/max_link_speed
16.0 GT/s PCIe | FIO reports slower sequential read than the advertised NVMe SSD read bandwidth |
The problem was caused by the enabled but unused BIOS option Storage Controller for Intel Optane (there is no Optane memory in the system in question).
Disabling this BIOS setting shows a warning about possible data loss; accepting the warning allows the NVMe SSDs to be recognized, in fact without any data loss.
As long as that Optane option was enabled, the Linux kernel produced a misleading warning in dmesg about active hardware RAID (which the machine supports, but which was never enabled). A second line suggested disabling this hardware RAID to get access to the NVMe SSDs.
|
On an HP ZBook G6 with two Kingston M.2 NVMe SSDs, Windows runs without problems. But when booting the Debian 12 installer ISO from a USB hard disk, the Linux 6.0 kernel does not detect these NVMe disks; it detects only the USB boot disk in /proc/partitions.
I tried modprobe nvme-core nvme nvmet, but re-running the Debian installer's "Detect disks" step still does not detect them. (The installer ISO initrd does not contain a partprobe executable, so I did not try that.) lspci does not show anything about NVMe.
Same result with the default graphical install and the text mode experts installer.
What can I do to detect the NVMe disks?
Edit: Updating the BIOS and disabling Secure Boot did not help. Hardware RAID is disabled.
| Debian 12 installer does not detect NVMe disks |
Regarding SSDs/NVMe, I strongly suggest you read the ArchWiki pages [1] and [2].
They are the starting point, and much more.
As for which filesystem to use, take a look at [3], for example.
For Thunderbolt, I have no experience with this hardware, so my only advice is to start from the Archwiki as well.
One piece of personal advice: since your hardware is really new, always use the latest kernel.
[1] https://wiki.archlinux.org/index.php/Solid_state_drive
[2] https://wiki.archlinux.org/index.php/Solid_state_drive/NVMe
[3] https://www.phoronix.com/scan.php?page=article&item=linux-50-filesystems
|
I recently picked up this lovely laptop (HP Spectre X360 13), and want to install Manjaro on it. It has a Toshiba NVMe SSD (KXG50ZNV1T02).
Seeing that this will be my first install of Linux onto an SSD of any sort, I would just like the community's feedback regarding any special precaution I should be taking when installing to an NVMe drive, to maintain longevity of the drive? TRIM? Discards?
The closest machine I could find information about on the ArchWiki is a similarly-specced model, but there is no special mention of SSD considerations.
I will not be running a dual-boot setup ... Manjaro will be used exclusively.
Lastly, is there any special configuration/package needed to get full-speed Thunderbolt 3 support in Linux? Or is this not needed?
| Installing Manjaro 18 on HP Spectre NVMe SSD |
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource - especially on embedded systems.
As the root user you can change the maximum open-file count per process (via ulimit -n) and per system (e.g. echo 800000 > /proc/sys/fs/file-max).
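A quick sketch of inspecting and changing these limits from a shell (the numbers are just examples):
$ ulimit -n                             # per-process limit for the current shell
$ cat /proc/sys/fs/file-nr              # allocated handles, unused handles, system-wide maximum
$ ulimit -n 4096                        # raise the per-process limit (up to the hard limit)
# echo 800000 > /proc/sys/fs/file-max   # as root: raise the system-wide maximum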
|
Right now, I know how to:
find the open files limit per process: ulimit -n
count all files opened by all processes: lsof | wc -l
get the maximum allowed number of open files: cat /proc/sys/fs/file-max
My question is: Why is there a limit of open files in Linux?
| Why is number of open files limited in Linux? |
If you can't kill your application, you can truncate instead of deleting the log file to reclaim the space. If the file was not open in append mode (with O_APPEND), then the file will appear as big as before the next time the application writes to it (though with the leading part sparse and looking as if it contained NUL bytes), but the space will have been reclaimed (that does not apply to HFS+ file systems on Apple OS/X that don't support sparse files though).
To truncate it:
: > /path/to/the/file.log
If it was already deleted, on Linux, you can still truncate it by doing:
: > "/proc/$pid/fd/$fd"
where $pid is the process id of the process that has the file opened, and $fd a file descriptor it has it opened under (which you can check with lsof -p "$pid").
If you don't know the pid, and are looking for deleted files, you can do:
lsof -nP | grep '(deleted)'
lsof -nP +L1, as mentioned by @user75021, is an even better (more reliable and more portable) option (it lists files that have fewer than 1 link).
Or (on Linux):
find /proc/*/fd -ls | grep '(deleted)'
Or, to find the large ones with zsh:
ls -ld /proc/*/fd/*(-.LM+1l0)
An alternative, if the application is dynamically linked, is to attach a debugger to it and make it call close(fd) followed by a new open("the-file", ....).
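A short worked example tying these together (the PID and fd number are hypothetical, taken from the lsof output above):
$ pid=4242 fd=7                     # from: lsof -nP +L1 | grep '(deleted)'
$ ls -l "/proc/$pid/fd/$fd"         # confirm which (deleted) file the descriptor points to
$ : > "/proc/$pid/fd/$fd"           # truncate it; the space is freed immediately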
|
How does one find large files that have been deleted but are still open in an application? How can one remove such a file, even though a process has it open?
The situation is that we are running a process that is filling up a log file at a terrific rate. I know the reason, and I can fix it. Until then, I would like to rm or empty the log file without shutting down the process.
Simply doing rm output.log removes only references to the file, but it continues to occupy space on disk until the process is terminated. Worse: after rming I now have no way to find where the file is or how big it is! Is there any way to find the file, and possibly empty it, even though it is still open in another process?
I specifically refer to Linux-based operating systems such as Debian or RHEL.
| Find and remove large files that are open but have been deleted |
Use lsof | grep /media/whatever to find out what is using the mount.
Also, consider umount -l (lazy umount) to prevent new processes from using the drive while you clean up.
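A sketch of the workflow, assuming the mount point is /run/media/theDrive (fuser is an alternative to lsof here):
$ lsof /run/media/theDrive            # list processes with files open on that filesystem
$ fuser -vm /run/media/theDrive       # alternative: every process using the filesystem, with access type
$ umount -l /run/media/theDrive       # lazy unmount: detach now, finish once the last user exits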
|
Sometimes, I would like to unmount a usb device with umount /run/media/theDrive, but I get a drive is busy error.
How do I find out which processes or programs are accessing the device?
| How do I find out which processes are preventing unmounting of a device? |
A hard limit can only be raised by root (any process can lower it). So it is useful for security: a non-root process cannot overstep a hard limit. But it's inconvenient in that a non-root process can't have a lower limit than its children.
A soft limit can be changed by the process at any time. So it's convenient as long as processes cooperate, but no good for security.
A typical use case for soft limits is to disable core dumps (ulimit -Sc 0) while keeping the option of enabling them for a specific process you're debugging ((ulimit -Sc unlimited; myprocess)).
The ulimit shell command is a wrapper around the setrlimit system call, so that's where you'll find the definitive documentation.
Note that some systems may not implement all limits. Specifically, some systems don't support per-process limits on file descriptors (Linux does); if yours doesn't, the shell command may be a no-op.
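A small sketch of viewing and adjusting both limits in bash (the numbers match the question's example):
$ ulimit -Sn            # soft limit on open files, e.g. 1024
$ ulimit -Hn            # hard limit, e.g. 10240
$ ulimit -Sn 4096       # a process may raise its own soft limit up to the hard limit
$ ulimit -Hn 8192       # it may lower the hard limit, but only root can raise it again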
|
What is the difference between hard and soft limits in ulimit?
For number of open files, I have a soft limit of 1024 and a hard limit of 10240.
It is possible to run programs opening more than 1024 files. What is the soft limit for?
| ulimit: difference between hard and soft limits |
Yes, this will list all open file descriptors:
$ ls -l /proc/$$/fd
total 0
lrwx------ 1 isaac isaac 64 Dec 28 00:56 0 -> /dev/pts/6
lrwx------ 1 isaac isaac 64 Dec 28 00:56 1 -> /dev/pts/6
lrwx------ 1 isaac isaac 64 Dec 28 00:56 2 -> /dev/pts/6
lrwx------ 1 isaac isaac 64 Dec 28 00:56 255 -> /dev/pts/6
l-wx------ 1 isaac isaac 64 Dec 28 00:56 4 -> /home/isaac/testfile.txt
Of course, as usual: 0 is stdin, 1 is stdout and 2 is stderr.
The 4th is an open file (to write) in this case.
|
I am running in an interactive bash session. I have created some file descriptors, using exec, and I would like to list what is the current status of my bash session.
Is there a way to list the currently open file descriptors?
| How to list the open file descriptors (and the files they refer to) in my current bash session |
Since kernel 3.3, it is possible using ss or lsof-4.89 or above — see Stéphane Chazelas's answer.
In older versions, according to the author of lsof, it was impossible to find this out: the Linux kernel does not expose this information. Source: 2003 thread on comp.unix.admin.
The number shown in /proc/$pid/fd/$fd is the socket's inode number in the virtual socket filesystem. When you create a pipe or socket pair, each end successively receives an inode number. The numbers are attributed sequentially, so there is a high probability that the numbers differ by 1, but this is not guaranteed (either because the first socket was N and N+1 was already in use due to wrapping, or because some other thread was scheduled between the two inode allocations and that thread created some inodes too).
I checked the definition of socketpair in kernel 2.6.39, and the two ends of the socket are not correlated except by the type-specific socketpair method. For unix sockets, that's unix_socketpair in net/unix/af_unix.c.
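A sketch of the modern (kernel 3.3+) approach with ss: read the socket inode of the parent's descriptor, then look it up (the fd number and inode are hypothetical):
$ readlink "/proc/$parent_pid/fd/3"     # prints e.g. socket:[48078752], the inode of one end
$ ss -xp | grep 48078752                # the line listing this inode as its *peer* belongs to the
                                        # other end; its users:(...) field names that process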
|
I want to determine which process has the other end of a UNIX socket.
Specifically, I'm asking about one that was created with socketpair(), though the problem is the same for any UNIX socket.
I have a program parent which creates a socketpair(AF_UNIX, SOCK_STREAM, 0, fds), and fork()s. The parent process closes fds[1] and keeps fds[0] to communicate. The child does the opposite, close(fds[0]); s=fds[1]. Then the child exec()s another program, child1. The two can communicate back and forth via this socketpair.
Now, let's say I know who parent is, but I want to figure out who child1 is. How do I do this?
There are several tools at my disposal, but none can tell me which process is on the other end of the socket. I have tried:
lsof -c progname
lsof -c parent -c child1
ls -l /proc/$(pidof server)/fd
cat /proc/net/unix
Basically, I can see the two sockets, and everything about them, but cannot tell that they are connected. I am trying to determine which FD in the parent is communicating with which child process.
| Who's got the other end of this unix socketpair? |
Running it with
strace -e trace=open,openat,close,read,write,connect,accept your-command-here
would probably be sufficient.
You'll need to use the -o option to put strace's output somewhere other than the console, if the process can print to stderr. If your process forks, you'll also need -f or -ff.
Oh, and you might want -t as well, so you can see when the calls happened.
Note, you may need to tweak the function call list depending on what your process does - I needed to add getdents, for example, to get a better sample using ls:
$ strace -t -e trace=open,openat,close,read,getdents,write,connect,accept ls >/dev/null
...
09:34:48 open("/etc/ld.so.cache", O_RDONLY) = 3
09:34:48 close(3) = 0
09:34:48 open("/lib64/libselinux.so.1", O_RDONLY) = 3
09:34:48 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0@V\0\0\0\0\0\0"..., 832) = 832
09:34:48 close(3) = 0
...
09:34:48 open("/proc/filesystems", O_RDONLY) = 3
09:34:48 read(3, "nodev\tsysfs\nnodev\trootfs\nnodev\tb"..., 1024) = 366
09:34:48 read(3, "", 1024) = 0
09:34:48 close(3) = 0
09:34:48 open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
09:34:48 close(3) = 0
09:34:48 open(".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
09:34:48 getdents(3, /* 5 entries */, 32768) = 144
09:34:48 getdents(3, /* 0 entries */, 32768) = 0
09:34:48 close(3) = 0
09:34:48 write(1, "file-A\nfile-B\nfile-C\n", 21) = 21
09:34:48 close(1) = 0
09:34:48 close(2) = 0 |
I know I can view the open files of a process using lsof at that moment in time on my Linux machine. However, a process can open, alter and close a file so quickly that I won't be able to see it when monitoring it using standard shell scripting (e.g. watch) as explained in "monitor open process files on linux (real-time)".
So, I think I'm looking for a simple way of auditing a process and see what it has done over the time passed. It would be great if it's also possible to see what network connections it (tried to) make and to have the audit start before the process got time to run without the audit being started.
Ideally, I would like to do this:
sh $ audit-lsof /path/to/executable
4530.848254 OPEN read /etc/myconfig
4530.848260 OPEN write /var/log/mylog.log
4540.345986 OPEN read /home/gert/.ssh/id_rsa <-- suspicious
4540.650345 OPEN socket TCP ::1:34895 -> 1.2.3.4:80 |
[...]
4541.023485 CLOSE /home/gert/.ssh/id_rsa <-- would have missed
4541.023485 CLOSE socket TCP ::1:34895 -> 1.2.3.4:80 | this when polling
Would this be possible using strace and some flags to not see every system call?
| How do I monitor opened files of a process in realtime? |
When you do <(some_command), your shell executes the command inside the parentheses and replaces the whole thing with a file descriptor that is connected to the command's stdout. So /dev/fd/63 is a pipe containing the output of your ls call.
When you do <(ls -l) you get a Permission denied error, because the whole line is replaced with the pipe, effectively trying to call /dev/fd/63 as a command, which is not executable.
In your second example, cat <(ls -l) becomes cat /dev/fd/63. As cat reads from the files given as parameters you get the content. echo on the other hand just outputs its parameters "as-is".
The last case you have, <(), is simply replaced by nothing, as there is no command. But this is not consistent between shells; in zsh you still get a pipe (although an empty one).
Summary:
<(command) lets you use the output of a command where you would normally need a file.
Edit: as Gilles points out, this is not a named pipe, but an anonymous pipe. The main difference is, that it only exists, as long as the process is running, while a named pipe (created e.g. with mkfifo) will stay without processes attached to it.
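For example, a common use is feeding the output of two commands to a program that expects file arguments (the directory names here are arbitrary):
$ diff <(ls /etc) <(ls /mnt/backup/etc)    # compare two listings without creating temporary files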
|
I am trying to understand named pipes in the context of this particular example.
I type <(ls -l) in my terminal and get the output: bash: /dev/fd/63: Permission denied.
If I type cat <(ls -l), I could see the directory contents. If I replace the cat with echo, I think I get the terminal name (or is it?).
echo <(ls -l) gives the output as /dev/fd/63.
Also, this example output is unclear to me.
ls -l <(echo "Whatever")
lr-x------ 1 root root 64 Sep 17 13:18 /dev/fd/63 -> pipe:[48078752]
However, if I give ls -l <(), it lists the directory contents.
What is happening in case of the named pipe?
| Why does process substitution result in a file called /dev/fd/63 which is a pipe? |
tail +1f file
I tested it on Ubuntu with the LibreOffice source tarball while wget was downloading it:
tail +1f libreoffice-4.2.5.2.tar.xz | tar -tvJf -It also works on Solaris 10, RHEL3, AIX 5 and Busybox 1.22.1 in my Android phone (use tail +1 -f file with Busybox).
|
A file is being sequentially downloaded by wget.
If I start unpacking it with cat myfile.tar.bz2 | tar -xj, it may unpack correctly or fail with "Unexpected EOF", depending on what is faster.
How to "cat and follow" a file, i.e. output content of the file to stdout, but don't exit on EOF, instead keep subsribed to that file and continue outputting new portions of the data, exiting only if the file is closed by writer and not re-opened within N seconds.I've created a script cat_and_follow based on @arielCo's answer that also terminates the tail when the file is not being opened for writing anymore.
| How do I "cat and follow" a file? |
The best test to see if a server is accepting connections is to actually try connecting. Use a regular client for whatever protocol your server speaks and try a no-op command.
If you want a lightweight TCP or UDP client you can drive simply from the shell, use netcat. How to program a conversation depends on the protocol; many protocols have the server close the connection on a certain input, and netcat will then exit.
while ! echo exit | nc localhost 13000; do sleep 10; done
You can also tell netcat to exit after establishing the connection. It returns 1 if there's no connection and 0 if there is, so we negate its output. Depending on your version of netcat, it may support one or both of the following commands:
while ! nc -z localhost 13000 </dev/null; do sleep 10; done
while ! nc -q 1 localhost 13000 </dev/null; do sleep 10; done
An alternative approach is to wait for the server process to open a listening socket.
while netstat -lnt | awk '$4 ~ /:13000$/ {exit 1}'; do sleep 10; done
If you are on Mac OS, netstat uses a slightly different output format, so you would want the following instead:
while netstat -lnt | awk '$4 ~ /\.13000$/ {exit 1}'; do sleep 10; done
Or you might want to target a specific process ID:
while ! lsof -n -Fn -p $pid | grep -q '^n.*:13000$'; do sleep 10; done
I can't think of any way to react to the process starting to listen to the socket (which would avoid a polling approach) short of using ptrace.
|
I need a command that will wait for a process to start accepting requests on a specific port.
Is there something in linux that does that?
while (checkAlive -host localhost -port 13000 == false)
do some waiting... | How do I tell a script to wait for a process to start accepting requests on a port? |
That's the inode number for the pipe or socket in question.
A pipe is a unidirectional channel, with a write end and a read end. In your example, it looks like FD 5 and FD 6 are talking to each other, since the inode numbers are the same. (Maybe not, though. See below.)
More common than seeing a program talking to itself over a pipe is a pair of separate programs talking to each other, typically because you set up a pipe between them with a shell:
shell-1$ ls -lR / | less
Then in another terminal window:
shell-2$ ...find the ls and less PIDs with ps; say 4242 and 4243 for this example...
shell-2$ ls -l /proc/4242/fd | grep pipe
l-wx------ 1 user user 64 Mar 24 12:18 1 -> pipe:[222536390]
shell-2$ ls -l /proc/4243/fd | grep pipe
l-wx------ 1 user user 64 Mar 24 12:18 0 -> pipe:[222536390]
This says that PID 4242's standard output (FD 1, by convention) is connected to a pipe with inode number 222536390, and that PID 4243's standard input (FD 0) is connected to the same pipe.
All of which is a long way of saying that ls's output is being sent to less's input.
Getting back to your example, FD 1 and FD 2 are almost certainly not talking to each other. Most likely this is the result of tying stdout (FD 1) and stderr (FD 2) together, so they both go to the same destination. You can do that with a Bourne shell like this:
$ some-program 2>&1 | some-other-program
So, if you poked around in /proc/$PID_OF_SOME_OTHER_PROGRAM/fd, you'd find a third FD attached to a pipe with the same inode number as is attached to FDs 1 and 2 for the some-program instance. This may also be what's happening with FDs 5 and 6 in your example, but I have no ready theory how these two FDs got tied together. You'd have to know what the program is doing internally to figure that out.
|
In Linux, in /proc/PID/fd/X, the links for file descriptors that are pipes or sockets have a number, like:
l-wx------ 1 user user 64 Mar 24 00:05 1 -> pipe:[6839]
l-wx------ 1 user user 64 Mar 24 00:05 2 -> pipe:[6839]
lrwx------ 1 user user 64 Mar 24 00:05 3 -> socket:[3142925]
lrwx------ 1 user user 64 Mar 24 00:05 4 -> socket:[3142926]
lr-x------ 1 user user 64 Mar 24 00:05 5 -> pipe:[3142927]
l-wx------ 1 user user 64 Mar 24 00:05 6 -> pipe:[3142927]
lrwx------ 1 user user 64 Mar 24 00:05 7 -> socket:[3142930]
lrwx------ 1 user user 64 Mar 24 00:05 8 -> socket:[3142932]
lr-x------ 1 user user 64 Mar 24 00:05 9 -> pipe:[9837788]
Like on the first line: 6839. What does that number represent?
| /proc/PID/fd/X link number |
On unices, filenames are just pointers (to inodes) that point to the storage where the file resides (which can be a hard drive or even a RAM-backed filesystem). Each file records the number of links to it: the links can be filenames (plural, if there are multiple hard links to the same file), and every time a file is opened, the process also holds a "link" to the same space.
The space is physically freed only if there are no links left (therefore, it's impossible to get to it). That's the only sensible choice: while the file is being used, it's not important if someone else can no longer access it: you are using it and until you close it, you still have control over it - you won't even notice the filename is gone or moved or whatever. That's even used for tempfiles: some implementations create a file and immediately unlink it, so it's not visible in the filesystem, but the process that created it is using it normally. Flash plugin is especially fond of this method: all the downloaded video files are held open, but the filesystem doesn't show them.
So, the answer is, while the processes have the files still opened, you shouldn't expect to get the space back. It's not freed, it's being actively used. This is also one of the reasons that applications should really close the files when they finish using them. In normal usage, you shouldn't think of that space as free, and this also shouldn't be very common at all - with the exception of temporary files that are unlinked on purpose, there shouldn't really be any files that you would consider being unused, but still open. Try to review if there is a process that does this a lot and consider how you use it, or just find more space.
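A small shell demonstration of this behaviour (the file name is arbitrary):
$ exec 3> /tmp/demo.log            # open a file on fd 3 of the current shell
$ echo "some data" >&3
$ rm /tmp/demo.log                 # the name is gone...
$ lsof +L1 | grep demo.log         # ...but the open descriptor still pins the inode (link count 0)
$ exec 3>&-                        # closing the last descriptor finally frees the space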
|
Hi, I have many files that have been deleted, but for some reason the disk space associated with the deleted files cannot be reclaimed until I explicitly kill the process holding the file open.
$ lsof /tmp/
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cron 1623 root 5u REG 0,21 0 395919638 /tmp/tmpfPagTZ4 (deleted)
The disk space taken up by the deleted file above causes problems such as when trying to use the tab key to autocomplete a file path I get the error bash: cannot create temp file for here-document: No space left on device
But after I run kill -9 1623 the space for that PID is freed and I no longer get the error.
My questions are:
why is this space not immediately freed when the file is first deleted?
what is the best way to get back the file space associated with the deleted files?
and please let me know any incorrect terminology I have used or any other relevant and pertinent info regarding this situation.
| Best way to free disk space from deleted files that are held open |
On linux, you can find the position of the file descriptor number N of process PID in /proc/$PID/fdinfo/$N. Example:
$ cat /proc/687705/fdinfo/36
pos: 26088
flags: 0100001The same file can be opened several times with different positions using several file descriptors, so you'll have to choose the relevant one in the case there are more than one. Use:
$ readlink /proc/$PID/fd/$Nto know what is the file to which the corresponding file descriptor is attached (it might not be a file, in this case the symlink is dangling).
|
My problem is that with
lsof -p pid
I can find out the list of open files of a process whose process id is pid. But is there a way to find out the file offset of each accessed file?
Could you please give me some suggestions?
| How to find out the file offset of an opened file? |
The file descriptor, i.e. the 4 in your example, is the index into the process-specific file descriptor table, not the open file table. The file descriptor entry itself contains an index to an entry in the kernel's global open file table, as well as file descriptor flags.
|
Say I have process 1 and process 2. Both have a file descriptor corresponding to the integer 4.
In each process however the file descriptor 4 points to a totally different file in the Open File Table of the kernel:
How is that possible? Isn't a file descriptor supposed to be the index to a record in the Open File Table?
| How can same fd in different processes point to the same file? |
How about :e .? This opens the current directory in Vim, i.e. it opens the file explorer. Because I have the autochdir setting set, this shows the directory that the currently edited file is in.
|
I have opened a directory with vim some/dir. I can navigate within the tree, yet once I have opened a file I wonder: how do I close the file view in order to go back to the directory listing and navigate to another file? :wq is not an option, as it closes the whole vim session. I guess there is a mode for that, yet I do not know what it is called nor how to start it.
How to close the file to file navigation view?
| How to switch to the directory listing from file view in vim? |
"I can read the /proc/$PID/net/tcp file for example and get information about TCP ports opened by the process."
That file is not a list of TCP ports opened by the process. It is a list of all open TCP ports in the current network namespace, and for processes running in the same network namespace it is identical to the contents of /proc/net/tcp.
To find ports opened by your process, you would need to get a list of socket descriptors from /proc/<pid>/fd, and then match those descriptors to the inode field of /proc/net/tcp.
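A rough sketch of that matching in shell (the PID is hypothetical; the local port comes out as hex in field 2 of /proc/net/tcp):
pid=1234                                   # hypothetical PID of interest
for fd in /proc/"$pid"/fd/*; do readlink "$fd"; done |
  sed -n 's/^socket:\[\([0-9]*\)\]$/\1/p' |          # keep only the socket inode numbers
  while read -r inode; do
    # field 10 of /proc/net/tcp is the inode; field 2 is local_address as hex ip:port
    awk -v ino="$inode" '$10 == ino { print "local address (hex):", $2 }' /proc/net/tcp
  done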
|
I need to know if a process with a given PID has opened a port without using external commands.
I must then use the /proc filesystem. I can read the /proc/$PID/net/tcp file, for example, and get information about TCP ports opened by the process. However, for a multithreaded process, the /proc/$PID/task/$TID directories will also contain a net/tcp file. My question is:
do I need to go over all the threads' net/tcp files, or will ports opened by threads also appear in the process's net/tcp file?
| Read "/proc" to know if a process has opened a port |
I know of fuser; see if it's available on your system.
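For example, a sketch using the directory from the question:
$ fuser -v /home/username        # processes using that directory itself (e.g. as their working directory)
$ fuser -vm /home/username       # -m: every process accessing the filesystem that contains it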
|
In many cases lsof is not installed on the machines that I have to work with, but the "function" of lsof would be needed very much (for example on AIX). :\
Are there any lsof like applications in the non-Windows world?
For example, I need to know which processes use the /home/username directory?
| Alternatives for "lsof" command? |
In 2017 Linux got a new feature which can simplify this process a bit (commit d01c3289e7d, available in Linux 4.14 and newer)
After getting the list of processes with /dev/ptmx open:
$ fuser /dev/ptmx
/dev/ptmx: 1330334 1507443The pts number can be received like this:
for pid in $(fuser /dev/ptmx 2>/dev/null); do grep -r tty-index /proc/$pid/fdinfo; done
/proc/1330334/fdinfo/13:tty-index: 0
/proc/1330334/fdinfo/14:tty-index: 1
/proc/1330334/fdinfo/27:tty-index: 2
/proc/1330334/fdinfo/28:tty-index: 4
/proc/1507443/fdinfo/3:tty-index: 3
The result is a mapping from a <pid>:<ptmx fd> to the corresponding /dev/pts/<index>.
Since version 4.90, lsof can use that API to report on the other ends of /dev/ptmx and /dev/pts/x open files with -E/+E:
$ lsof -E -ad 0 -p $$
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
zsh 14335 user 0u CHR 136,8 0t0 11 /dev/pts/8 14333,xterm,5u
$ lsof +E -ad 0 -p $$
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
xterm 14333 user 5u CHR 5,2 0t0 87 /dev/ptmx ->/dev/pts/8 14335,zsh,0u 14335,zsh,1u 14335,zsh,2u 14335,zsh,10u 14391,lsof,0u 14391,lsof,1u 14391,lsof,2u
zsh 14335 user 0u CHR 136,8 0t0 11 /dev/pts/8 14333,xterm,5u |
If I do a:
echo foo > /dev/pts/12Some process will read that foo\n from its file descriptor to the master side.
Is there a way to find out what that(those) process(es) is(are)?
Or in other words, how could I find out which xterm/sshd/script/screen/tmux/expect/socat... is at the other end of /dev/pts/12?
lsof /dev/ptmx will tell me the processes that have file descriptors on the master side of any pty. A process itself can use ptsname() (TIOCGPTN ioctl) to find out the slave device based on its own fd to the master side, so I could use:
gdb --batch --pid "$the_pid" -ex "print ptsname($the_fd)"
for each of the pid/fd returned by lsof to build up that mapping, but is there a more direct, reliable and less intrusive way to get that information?
| How can we know who's at the other end of a pseudo-terminal device? |
Check out file descriptor #1 (STDOUT) in /proc/$PID/fd/. The kernel represents this file descriptor as a symbolic link to the file it is redirected to.
$ readlink -f /proc/20361/fd/1
/tmp/file |
If I start an app with this command:
/path/to/my/command >> /var/log/command.logAnd the command doesn't return, is there a way, from another prompt, to see what the STDOUT redirect is set to?
I'm looking for something like either
cat /proc/PID/redirectsor
ps -??? | grep PIDbut any method will do.
| See the STDOUT redirect of a running process |
As long as you don't move the file across file-system borders, the operation should be safe. This is due to the mechanism, how »moving« actually is done.
If you mv a file on the same file-system, the file isn't actually touched, but only the file-system entry is changed.
$ mv foo bar
actually does something like
$ ln foo bar
$ rm foo
This would create a hard link (a second directory entry) for the file (actually the inode pointed to by the file-system entry) foo, named bar, and remove the foo entry. Since there is now a second file-system entry pointing to foo's inode, removing the old entry foo doesn't actually remove any blocks belonging to the inode.
Your program would happily append to the file anyways, since its open file-handle points to the inode of the file, not the file-system entry.
Note: If your program closes and reopens the file between writes, you would end up having a new file created with the old file-system entry!
Cross file-system moves:
If you move the file across file-system borders, things get ugly. In this case you can't guarantee that your file stays consistent, since mv would actually:
create a new file on the target file-system
copy the contents of the old file to the new file
remove the old file
or
$ cp /path/to/foo /path/to/bar
$ rm /path/to/foo
resp.
$ touch /path/to/bar
$ cat < /path/to/foo > /path/to/bar
$ rm /path/to/foo
Depending on whether the copying reaches end-of-file during a write of your application, it could happen that you have only half of a line in the new file.
Additionally, if your application does not close and reopen the old file, it would continue writing to the old file, even if it seems to be deleted: the kernel knows which files are open and although it would delete the file-system entry, it won't delete old file's inode and associated blocks until your application closes its open file-handle.
|
I have a node.js process that uses fs.appendFile to add lines to file.log. Only complete lines of about 40 chars per line are appended, e.g. calls are like fs.appendFile("start-end"), not 2 calls like fs.appendFile("start-") and fs.appendFile("end"). If I move this file to file2.log, can I be sure that no lines are lost or copied partially?
| Is it safe to move a file that's being appended to? |
The /proc/PID/fd/NUM symlinks are quasi-universal on Linux, but they don't exist anywhere else (except on Cygwin which emulates them). /proc/PID/fd/NUM also exist on AIX and Solaris, but they aren't symlinks. Portably, to get information about open files, install lsof.
Unices with /proc/PID/fd
Linux
Under Linux, /proc/PID/fd/NUM is a slightly magic symbolic link to the file that the process with the ID PID has open on the file descriptor NUM. This link is magic in that, for example, it can be used to access the file even if the file is removed. The link will track the file through renames, too. /proc/self is a magic symbolic link which points to /proc/PID where PID is the process that accesses the link.
This feature is present on virtually all Linux systems. It's provided by the driver for the proc filesystem, which is technically optional but used for so many things (including making the ps work — it reads from /proc/PID) that it's almost never left out even on embedded systems.
Cygwin
Cygwin emulates Linux's /proc/PID/fd/NUM (for Cygwin processes) and /proc/self.
Solaris (since version 2.6), AIX
There are /proc/PID/fd entries for each file descriptor, but they appear as the same type as the opened file, so they provide no information about the path of the file. They do however report the same stat information as fstat would report to the process that has the file open, so it's possible to determine on which filesystem the file is located and its inode number. Directories appear as symbolic links, however they are magic symlinks which can only be followed, and readlink returns an empty string.
Under AIX, the procfiles command displays some information about a process's open files. Under Solaris, the pfiles command displays some information about a process's open files. This does not include the path to the file (on Solaris, it does since Solaris 10, see below).
Solaris (since version 10)
In addition to /proc/PID/fd/NUM, modern Solaris versions have /proc/PID/path/NUM which contains symbolic links similar to Linux's symlinks in /proc/PID/fd/NUM. The pfiles command shows information about a process's open files, including paths.
Plan9
/proc/PID/fd is a text file which contains one record (line) per file descriptor opened by the process. The file name is not tracked there.
QNX
/proc/PID/ is a directory, but it doesn't contain any information about file descriptors.
Unices with /proc but no direct access to file descriptors
(Note: sometimes it's possible to obtain information about a process's open files by riffling through its memory image which is accessible under /proc. I don't count that as “direct access”.)
Unices where /proc/PID is a file
The proc filesystem itself started out in UNIX 8th edition, but with a different structure, and went through Plan 9 and back to some unices. I think that all operating systems with a /proc have an entry for each PID, but on many systems, it's a regular file, not a directory. The following systems have a /proc/PID which needs to be read with ioctl:
Solaris up to 2.5
OSF/1 now known as Tru64
IRIX (?)
SCO (?)
MINIX 3
MINIX 3 has a procfs server which provides several Linux-like components including /proc/PID/ directories. However there is no /proc/PID/fd.
FreeBSD
FreeBSD has /proc/PID/ directories, but they do not provide information about open file descriptors. (There is however /proc/PID/file which is similar to Linux's /proc/PID/exe, giving access to the executable through a symbolic link.)
FreeBSD's procfs is deprecated.
Unices without /proc
HP-UX
OpenBSD
NetBSD
Mac OSX
File descriptor information through other channels
Fuser
The fuser command lists the processes that have a specified file open, or a file open on the specified mount point. This command is standard (available on all XSI-compliant systems, i.e. POSIX with the X/Open System Interface Extension).
You can't go from a process to file names with this utility.
Lsof
Lsof stands for “list open files”. It is a third-party tool, available (but usually not part of the default installation) for most unix variants. Obtaining information about open files is very system-dependent, as the analysis above might have made you suspect. The lsof maintainer has done the work of combining it all under a single interface.
You can read the FAQ to see what kinds of difficulties lsof has to put up with. On most unices, obtaining information about the names of open files requires parsing kernel data structures. Quoting from FAQ 3.3 "Why doesn't lsof report full path names?":
Lsof can't obtain path name components from the kernel name caches of the following dialects: AIX
Only the Linux kernel records full path names in the structures it maintains about open files; instead, most kernels convert path names to device and node number doublets and use them for subsequent file references once files have been opened.
If you need to parse information from lsof's output, be sure to use the -F mode (one field per line), preferably the -F0 mode (null-delimited fields). To get information about a specific file descriptor of a specific process, use the -a option with -p PID and -d NUM, e.g. lsof -a -p 123 -d 0 -F0n.
/dev/fd/NUM for file descriptors of the current process
Many unix variants provide a way for a process to access its open files via a file name: opening /dev/fd/NUM is equivalent to calling dup(NUM). These names are useful when a program wants a file name but you want to pass an already-open file (e.g. a pipe or socket); for example the shells that implement process substitution use them where available (using a temporary named pipe where /dev/fd is unavailable).
Where /dev/fd exists, there are also usually (always?) synonyms (sometimes symbolic links, sometimes hard links, sometimes magic files with equivalent properties): /dev/stdin = /dev/fd/0, /dev/stdout = /dev/fd/1, /dev/stderr = /dev/fd/2.
Under Linux, /dev/fd is a symbolic link to /proc/self/fd.
Under most unices (IRIX, OpenBSD, NetBSD, SCO, Solaris, …), the entries in /dev/fd are character devices. They usually appear whether the file descriptor is open or not, and entries may not be available for file descriptors above a certain number.
Under FreeBSD and OSX, the fdescfs filesystem provides a dynamic /dev/fd directory which follows the open descriptors of the calling process. A static /dev/fd is available if /dev/fd is not mounted.
Under OSF/1 (Tru64), /dev/fd is provided via fdfs.
There is no /dev/fd on AIX or HP-UX. |
I've always wondered this but never took the time to find out, so I'll do so now - how portable is the usage shown here of either /proc/$$/fd/$N or /dev/fd/$N? I understand POSIX guarantees /dev/null, /dev/tty, and /dev/console (though I only found that out the other day after reading the comments on this answer) but what about these others?
As far as I can tell they're pretty common, but in what systems can I not expect to find them? Why not? Is it more likely to find one than the other? Will they always exhibit like attributes?
I tend to use these devices pretty extensively in all manner of ways, and I'd like to know if there's a chance I'd come up short just trying.
Also the above questions should be understood to be only what I think I'd like to know, but, since I obviously have to ask in the first place, I may not know best in this regard and they should not be considered stringent requirements for an answer. Just clue me in if you can, please.
| Portability of file descriptor links |
On Linux, assuming you want to know what is writing to the same resource as your shell's stdout is connected to, you could do:
strace -fe write $(lsof -t "/proc/$$/fd/1" | sed 's/^/-p/')That would report the write() system calls (on any file descriptor) of every process that have at least one file descriptor open on the same file as fd 1 of your shell.
|
I have two instances of a process running. One of them is "frEAkIng oUT!" and printing errors non stop to STDOUT.
I want to kill the broken process but I have to make sure I don't terminate the wrong one. They were both started about at the same time and using top I can see they both use about the same amount of memory and CPU. I can't seem to find anything that points to which process is behaving badly.
The safest thing would be to figure out which process/pid is writing to STDOUT.
Is there any way to do that?
| How to find out what process is writing to STDOUT? |
Making /dev/null a named pipe is probably the easiest way. Be warned that some programs (sshd, for example) will act abnormally or fail to execute when they find out that it isn't a special file (or they may read from /dev/null, expecting it to return EOF).
# Remove special file, create FIFO and read from it
rm /dev/null && mkfifo -m622 /dev/null && tail -f /dev/null
# Remove FIFO, recreate special file
rm /dev/null && mknod -m666 /dev/null c 1 3This should work under all Linux distributions, and all major BSDs.
|
Just for fun:
Is there a way to monitor/capture/dump whatever is being written to /dev/null?
On Debian, or FreeBSD, if it matters, any other OS specific solutions are also welcome.
| Monitor what is being sent to /dev/null? |
I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources.
But given how absurdly much RAM modern systems have compared to systems 10 years ago, I think the defaults today are quite low.
In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096.
Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I've used an rlimit_nofile of 300,000 for certain applications.
As long as you keep the soft limit at the default (1024), it's probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit.
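A sketch of checking the current values and, on systems using pam_limits, raising them persistently (the user name and numbers are only examples):
$ ulimit -Hn                          # current hard limit for this shell, e.g. 4096
$ cat /proc/sys/fs/file-max           # system-wide maximum
# In /etc/security/limits.conf, e.g. for a service account:
#   myservice  soft  nofile  8192
#   myservice  hard  nofile  300000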
See also some related questions:
https://serverfault.com/questions/356962/where-are-the-default-ulimit-values-set-linux-centos
https://serverfault.com/questions/773609/how-do-ulimit-settings-impact-linux |
Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)?
I'm thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat memory and be slow, but I'm interested in adverse effects if the limit is configured much larger than necessary (e.g. memory consumed by just the configuration).
| Largest allowed maximum number of open files in Linux |
You can do
lsof -n | grep -i "TCP\|UDP" | grep -v "ESTABLISHED\|CLOSE_WAIT"to see all of your listening ports, but dollars to donuts that ntpd is running:
service ntpd statusAnd as for "What does socket in use" mean? If I can be forgiven for smoothing over some wrinkles (and for the very basic explanation, apologies of most of this is remedial for you)...TCP/IP (the language of the internet) specifies that each computer has an IP address, which uniquely identifies that computer on the internet. In addition, there are 65,000 numbered ports on each IP address that can be connected to.
When you want to connect to a web server, you open the site in your browser, but the machinery underneath is actually connecting you to port 80 on the web server's IP. The web server's daemon (the program listening for connections to port 80) uses a "socket" to hold open that port, reserving it for itself. Only one program can use the same port at a time.
Since you had ntpd running, it was using that port. 'ntpdate' tried to access that port, but since it was already held open, you got the 'socket already in use' error.
Edit
Changed to account for UDP as well
|
I'm trying to use NTP to update the time on my machine. However, it gives me an error:
host # ntpdate ntp1.example.org
10 Aug 12:38:50 ntpdate[7696]: the NTP socket is in use, exiting
What does the error "socket is in use" mean? How can I see what is using this socket?
This happens on my CentOS 4.x system, but I also see it on FreeBSD 7.x, Ubuntu 10.04 and Solaris 10.
| What is using this network socket? |
Programs connect to files through a number maintained by the filesystem (called an inode on traditional unix filesystems), to which the name is just a reference (and possibly not a unique reference at that).
So several things to be aware of:
Moving a file using mv does not change that underlying number unless you move it across filesystems (which is equivalent to using cp then rm on the original).
Because more than one name can connect to a single file (i.e. we have hard links), the data in "deleted" files doesn't go away until all references to the underlying file go away.
Perhaps most important: when a program opens a file it makes a reference to it that is (for the purposes of when the data will be deleted) equivalent to having a file name connected to it.
This gives rise to several behaviors like:
A program can open a file for reading, but not actually read it until after the user has rmed it at the command line, and the program will still have access to the data.
The one you encountered: mving a file does not disconnect the relationship between the file and any programs that have it open (unless you move across filesystem boundaries, in which case the program still has a version of the original to work on).
If a program has opened a file for writing, and the user rms its last filename at the command line, the program can keep right on putting stuff into the file, but as soon as it closes the file there will be no more reference to that data and it will go away.
Two programs that communicate through one or more files can obtain a crude, partial security by removing the file(s) after they are finished opening them. (This is not actual security, mind; it just transforms a gaping hole into a race condition.)
I just renamed a log file to "foo.log.old", and assumed that the application would start writing a new logfile at "foo.log". I was surprised to discover that it tracked the logfile to its new name, and kept appending lines to "foo.log.old".
In Windows, I'm not familiar with this kind of behavior - I don't know if it's even possible to implement it. How exactly is this behavior implemented in linux? Where can I learn more about it?
| How do open files behave on linux systems? |
The inodes still persist on disk, although no more hard links to the inodes exist. They will be deleted when the file descriptor is closed. Until then, the file can be modified as normal, barring operations that require a filename/hard link.
debugfs and similar tools can be used to recover the contents of the inodes.
|
What happens to the files that are deleted while they have a file handle open to them?
I have been wondering this ever since I figured out I could delete a video file while it was playing in MPlayer and it would still play through to the end. Where is it pulling the data from? Is it still coming from the hard drive? Did it get copied to RAM once I deleted the file?
If it's still on the hard drive, what happens if I fill up the file system while the program is running reading from what is essentially unallocated space? If it's buffered in RAM, what happens if I flush the buffers?
What happens if the file was on an NFS share–is it stored on the server? (Isn't that a security risk–DoS by tons of open remote file handles?)
Doing an lsof -n |grep '(deleted)' sometimes yields interesting results; if I'm upgrading packages that swap out shared library files, then running programs that had been using those libraries will still be able to use them as if nothing changed.
Bonus question: Is there some way to get the data back from the dead in this situation?
| Where do open file handles go when they die? |
When you delete a file you really remove a link to the file (to the inode). If someone already has that file open, they get to keep the file descriptor they have. The file remains on disk, taking up space, and can be written to and read from if you have access to it.
The unlink function is defined with this behaviour by POSIX:
When the file's link count becomes 0 and no process has the file open, the space occupied by the file shall be freed and the file shall no longer be accessible. If one or more processes have the file open when the last link is removed, the link shall be removed before unlink() returns, but the removal of the file contents shall be postponed until all references to the file are closed.
This piece of advice exists because of that behaviour. The daemon will have the file open, and won't notice that it has been deleted (unless it was monitoring it specifically, which is uncommon). It will keep blithely writing to the existing file descriptor it has: you'll keep taking up (more) space on disk, but you won't be able to see any of the messages it writes, so you're really in the worst of both worlds. If you truncate the file to zero length instead then the space is freed up immediately, and any new messages will be appended at the new end of the file where you can see them.
Eventually, when the daemon terminates or closes the file, the space will be freed up. Nobody new can open the file in the mean time (other than through system-specific reflective interfaces like Linux's /proc/x/fd/...). It's also guaranteed that:
If the link count of the file is 0, when all file descriptors associated with the file are closed, the space occupied by the file shall be freed and the file shall no longer be accessible.
So you don't lose your disk space permanently, but you don't gain anything by deleting the file and you lose access to new messages.
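As a concrete sketch (the path is only a placeholder), the usual way to empty such a file in place rather than delete it is:
# truncate in place; the daemon keeps its file descriptor and the space is freed immediately
: > /var/log/myapp.log
# equivalently, if you prefer an explicit command
truncate -s 0 /var/log/myapp.log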
|
From the Unix Power Tools, 3rd Edition: Instead of Removing a File, Empty It section:
If an active process has the file open (not uncommon for log files),
removing the file and creating a new one will not affect the logging
program; those messages will just keep going to the file that’s no
longer linked. Emptying the file doesn’t break the association, and so
it clears the file without affecting the logging program.
(emphasis mine)
I don't understand why a program will continue to log to a deleted file. Is it because the file descriptor entry not getting removed from the process table?
| How can a log program continue to log to a deleted file? |
This one-liner should help:
ls -l /proc/[0-9]*/fd/* | grep /dev/ttyS0
replace ttyS0 with actual port name
example output:
lrwx------ 1 root dialout 64 Sep 12 10:30 /proc/14683/fd/3 -> /dev/ttyUSB0
That means the pid 14683 has the /dev/ttyUSB0 open as file descriptor 3
|
I'm using uclinux and I want to find out which processes are using the serial port. The problem is that I have no lsof or fuser.
Is there any other way I can get this information?
| How to find processes using serial port |
It's all done with MIME types in various databases. xdg-mime can be used to query and set user values.
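For example (a sketch; the .desktop file name depends on what is actually installed on your system):
# which application currently handles PDFs for this user?
xdg-mime query default application/pdf
# what MIME type does a given file have? (report.pdf is a placeholder)
xdg-mime query filetype report.pdf
# set the per-user default handler for PDFs
xdg-mime default org.gnome.Evince.desktop application/pdf
System-wide defaults are typically configured in a shared mimeapps.list (e.g. under /etc/xdg/) rather than with xdg-mime, which only writes the per-user settings.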
|
How to change the applications associated with certain file-types for gnome-open, exo-open, xdg-open, gvfs-open and kde-open? Is there a way by editing config files or by a command-line command?
Is there a way to do this using a GUI?
For both questions: How to do it on a per-user basis, and how to do it system-wide?
| Change default applications used by gnome-open, exo-open, xdg-open, gvfs-open and kde-open |
A directory (like any file) is not defined by its name. Think of the name as the directory's address. When you move the directory, it's still the same directory, just like if you move to a different house, you're still the same person. If you remove a directory and create a new one by the same name, it's a new directory, just like someone who moves into the house where you used to live isn't you.
Each process has a working directory. The cd command in the shell changes the shell's current working directory. The pwd command prints the¹ path to the current working directory.
When you removed the directory A, what this did was to remove the entry for A in its parent directory. The directory A itself remained in the filesystem, but in a detached state, with no name. It was not deleted yet because it was in use by a process, namely the first shell. When you changed the directory in the first shell, the directory was finally deleted. The same thing happens when a file is deleted while a process still has it open: the file's directory entry is removed immediately, and the file itself is removed when it stops being in use.
Similarly, observe what happens when you move directories around.
mkdir one two
touch one/1 two/2
cd one
ls
In another shell:
mv one tmp
mv two one
mv tmp two
In the first shell:
ls
The file 1 is in the directory that was originally called one and is now called two. The file 2 is in the directory that was originally called two and is now called one.
¹ More precisely, a path, which may not be unique if symbolic links or other subtleties are involved.
|
I have two shells open. The first is in directory A. In the second, I remove directory A, and then recreate it. When I go back to the first shell, and type ls, the output is:
ls: cannot open directory .: Stale file handle
Why? I thought the first shell (the one that remained open inside a non-existent directory) would "freeze" while waiting for the next command, and wouldn't have "realized" that the directory was deleted and recreated. Does the shell hold a "deeper" reference to its current working directory other than the string $PWD?
| `ls` error when directory is deleted |
Do they point to some property of the resource?
Yes. They're a unique identifier that allows you to identify the resource.
Also why are some of the links broken?
Because they're links to things that don't live in the filesystem, you can't follow the link the normal way. Essentially, links are being abused as a way to return the resource type and unique identifier.
What is a pipe?
As the name suggests, a pipe is a connection between two points such that anything put in one end comes out the other end.
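For example, the 2744160313 in socket:[2744160313] is the socket's inode number, which you can match against the kernel's connection tables (a sketch, reusing the numbers from the listing above):
# find the connection whose inode matches the number in the brackets
grep 2744160313 /proc/net/tcp /proc/net/tcp6 /proc/net/udp /proc/net/unix
# or let lsof print the process's sockets and search for the same number
lsof -p 1234 | grep 2744160313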
| Possible Duplicate:
/proc/PID/fd/X link number
I have a question regarding the file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc with ls -la /proc/1234/fd I get the following output:
lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log
I get the meaning of a file descriptor and I understand from my example the file descriptors 0 1 2 and 6, they are tied to physical resources on my computer, and also I guess 5 is connected to some resource on the network (because of the socket), but what I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also why are some of the links broken? And lastly, as long as I asked a question already :) what is a pipe?
| File descriptor linked to socket or pipe in proc [duplicate] |
If you attach strace to the process just when it's hung (you can get the pid and queue the command up in advance, in a spare terminal), it'll show the file descriptor of the blocking write.
Trivial example:
$ mkfifo tmp
$ cat /dev/urandom > tmp &
[1] 636226
# this will block on open until someone opens for reading
$ exec 4<tmp
# now it should be blocked trying to write
$ strace -p 636226
Process 636226 attached - interrupt to quit
write(1, "L!\f\335\330\27\374\360\212\244c\326\0\356j\374`\310C\30Z\362W\307\365Rv\244?o\225N"..., 4096 <unfinished ...>
^C
Process 636226 detached |
My situation is that from time to time a specific process (in this case, it's Thunderbird) doesn't react to user input for a minute or so. I found out using iotop that during this time, it writes quite a lot to the disk, and now I want to find out which file it writes to, but unfortunately iotop gives only stats per process and not per open file(-descriptor).
I know that I can use lsof to find out which files the process has currently open, but of course Thunderbird has a lot of them open, so this is not that helpful. iostat only shows statistics per device.
The problem occurs only randomly and it might take quite some time for it to appear, so I hope I don't have to strace Thunderbird and wade through long logs to find out which file has the most writes.
| How to find out which file is currently written by a process |
On Linux you can configure it via limits.conf, e.g. via
# cd /etc/security
# echo debian-transmission - nofile 8192 > limits.d/transmission.conf
(which sets both the hard and soft limit for processes started under the user debian-transmission to 8192)
You can verify the change via:
# sudo -u debian-transmission bash -c "ulimit -a"
[..]
open files (-n) 8192
[..]
If a daemon is already running, it has to be restarted such that the new limit is picked up. In case the daemon is manually started from a user session, the user has to re-login to get the new limit.
Alternatively, you can also specify additional limits directly in /etc/security/limits.conf, of course - but I prefer the .d directory approach for better maintainability.
For enforcing different soft/hard limits use two entries, e.g.
debian-transmission soft nofile 4096
debian-transmission hard nofile 8192
(rationale behind this: the soft value is set after the user logs in, but a user's process is allowed to increase the limit up to the hard limit)
The limits.conf/limits.d configuration is used by pam_limits.so, which is enabled by default on current Linux distributions.
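To double-check what a running daemon actually ended up with, you can read its limits straight out of /proc (a sketch; the process name is only an example):
# show the effective open-file limits of the running daemon
cat /proc/$(pidof transmission-daemon)/limits | grep -i 'open files'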
Related
There is also a system-wide limit on Linux, /proc/sys/fs/file-max:
This file defines a system-wide limit on the number of open files for all processes.
For example the default on Ubuntu 10.04:
$ cat /proc/sys/fs/file-max
786046
The pseudo file /proc/sys/fs/file-nr provides more information, e.g.
$ cat /proc/sys/fs/file-nr
1408 0 786046
the number of allocated file handles (i.e., the number of files presently opened); the number of free file handles; and the maximum number of file handles
To change file-max on the command line (no reboot necessary):
# sysctl -w fs.file-max=786046
For permanent changes add fs.file-max=786046 to /etc/sysctl.conf or /etc/sysctl.d.
The upper limit on fs.file-max is recorded in fs.nr_open. For example, (again) on Ubuntu 10.04:
$ sysctl -n fs.nr_open
1048576
(which is 1024*1024)
This sysctl is also configurable.
|
The default open file limit per process is 1024 on - say - Linux. For certain daemons this is not enough. Thus, the question: How to change the open file limit for a specific user?
| How to configure the process open file limit of a user? |
From the lsof man page:
Lsof returns a one (1) if any error was detected, including the failure
to locate command names, file names, Internet addresses or files, login
names, NFS files, PIDs, PGIDs, or UIDs it was asked to list. If the -V
option is specified, lsof will indicate the search items it failed to
list.So that would suggest that your lsof failed for some other reason clause would never be executed.
Have you tried just moving the file while your external process still has it open? If the destination directory is on the same filesystem, then there should be no problems with doing that unless you need to access it under the original path from a third process as the underlying inode will remain the same. Otherwise I think mv will fail anyway.
If you really need to wait until your external process is finished with the file, it is better to use a command that blocks instead of repeatedly polling. On Linux, you can use inotifywait for this. Eg:
inotifywait -e close_write /path/to/file
If you must use lsof (maybe for portability), you could try something like:
until err_str=$(lsof /path/to/file 2>&1 >/dev/null); do
    if [ -n "$err_str" ]; then
        # lsof printed an error string, file may or may not be open
        echo "lsof: $err_str" >&2
        # tricky to decide what to do here, you may want to retry a number of times,
        # but for this example just break
        break
    fi
    # lsof returned 1 but didn't print an error string, assume the file is open
    sleep 1
done
if [ -z "$err_str" ]; then
    # file has been closed, move it
    mv /path/to/file /destination/path
fi
Update
As noted by @JohnWHSmith below, the safest design would always use an lsof loop as above as it is possible that more than one process would have the file open for writing (an example case may be a poorly written indexing daemon that opens files with the read/write flag when it should really be read only). inotifywait can still be used instead of sleep though, just replace the sleep line with inotifywait -e close /path/to/file.
|
I want to move large file created by external process as soon as it's closed.
Is this test command correct?
if lsof "/file/name"
then
# file is open, don't touch it!
else
if [ 1 -eq $? ]
then
# file is closed
mv /file/name /other/file/name
else
# lsof failed for some other reason
fi
fi
EDIT: the file represents a dataset and I have to wait until it's complete to move it so another program can act on it. That's why I need to know if the external process is done with the file.
| Move file but only if it's closed |
!!!!! I don't know if this will work with other distros than Linux Lite !!!!!
Here is what happens when you install VSCode (it can be other editors too): there is something in the package that tells your system that VSCode can open files and directories, so your system puts VSCode in front of your file manager (Linux Lite 4.8 == Thunar). You can see this if you go to /usr/share/applications/, where you will find mimeinfo.cache; if you look into that file and search for inode/directory you will see inode/directory=code.desktop;Thunar-folder-handler.desktop;, which means that code (VSCode) is your default. You can change this by leaving that file alone and instead opening MIME Type Editor in the applications folder, searching for directory in the Filter search field, and changing the Default Application to Open Folder with Thunar.
I know all of this is probably faster or easier in the terminal, but everything I found on the web for the terminal did not work for me. |
Hi everybody, I want to start by saying thank you for your time!
I have a problem and don't really know what to do to solve it. When I download something and I click on the arrow in Firefox to see my downloads and then click on the folder next to the application name, it should open the folder where it is saved (I think something like moz/.tmp). Anyway, when I click on the folder it opens VSCode. What did I do wrong?
Even after "extraction completed successfully", when I click Show the Files it opens VSCode.
Running Linux Lite 4.8 x86_64 | when clicking open folder the system launches VSCode |
When you use vi/vim to edit a file you aren't actually holding ~/<filename> open; you are reading the file into ~/.<filename>.swp and then holding that temp file open.
If you run lsof ~/.<filename>.swp it will show you the information you are looking for.
NOTE: If you have multiple people editing the same file you will need to lsof ~/.<filename>.s* as each vi/vim session will create its own swap file but will name it differently
|
Consider this simple scenario:
I open a text file ~/textfile.txt with vim in one terminal (tried with both edit and read-only modes).
In a different terminal, I run
/usr/sbin/lsof ~/textfile.txt
Get no results
Why?
| lsof doesn't return files open by the same user |
I had that problem too some months ago, and I remember I had to delete some .desktop files that were inside the $HOME/.local/share/applications folder.
I think you should delete any file that has notepad as part of its name, and also you should try to delete (or move somewhere else) the files wine-extension-*.
|
After installing Wine, Notepad has become the default application for opening unknown text files by double click. I'd like to eliminate this behaviour and remove Notepad from the list of applications offered to open a file of unknown type. I've deleted /usr/share/applications/wine-notepad.desktop, but this didn't help. How can I disable Notepad correctly?
I use XUbuntu 11.10 (XFCE 4.8) and Wine 1.3.
| How to remove Notepad from the applications list? |
Well on Linux you could use inotify to track changes to your files. Inotify is in-kernel and has bindings to many different languages allowing you to quickly script such functionality if the app you are working with does not support inotify yet.
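As a quick shell-level sketch (assuming the inotify-tools package is installed; the path is a placeholder):
# print an event line every time something under ~/projects is created, deleted or moved/renamed
inotifywait -m -r -e create -e delete -e move ~/projects
A program that wants Xcode-like tracking would react to the move events and update its stored path accordingly.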
|
I've had a mac at work lately, and was amazed to see that Xcode would still find my latest project after I renamed its folder and moved it someplace else.
Now I understand that this is the result of a heavy infrastructure at work, but I was wondering if it would be possible to somehow come up with similar functionality for the rest of the Unix world ?
| Strategies for maintaining a reference to a file after it was moved or renamed? |
It could only be in memory and not recoverable, in which case you'd have to try to recover it from the filesystem using one of those filesystem recovery tools (or from memory, maybe). However!
$ cat hamlet.c
#include <unistd.h>
int main(void) { while (1) { sleep(9999); } }
$ gcc -o hamlet hamlet.c
$ md5sum hamlet
30558ea86c0eb864e25f5411f2480129 hamlet
$ ./hamlet &
[1] 2137
$ rm hamlet
$ cat /proc/2137/exe > newhamlet
$ md5sum newhamlet
30558ea86c0eb864e25f5411f2480129 newhamlet
$ 
With interpreted programs, obtaining the script file may be somewhere between tricky and impossible, as /proc/$$/exe will point to perl or whatever, and the input file may already have been closed:
$ echo sleep 9999 > x
$ perl x &
[1] 16439
$ rm x
$ readlink /proc/16439/exe
/usr/bin/perl
$ ls /proc/16439/fd
0 1 2
Only the standard file descriptors are open, so x is already gone (though may for some time still exist on the filesystem, and who knows what the interpreter has in memory).
|
I have a process that has been running for a very long time.
I accidentally deleted the binary executable file of the process.
Since the process is still running and isn't affected, the original binary file must still exist somewhere else....
How can I get recover it? (I use CentOS 7, the running process is written in C++)
| How to recover the deleted binary executable file of a running process |
The file can be accessed through the /proc filesystem: you already know the PID and the FD from the lsof output.
cat /proc/21742/fd/5 |
I have an hourly hour-long crontab job running with some mtr (traceroute) output every 10 minutes (that is going to go for over an hour prior to it being emailed back to me), and I want to see the current progress thus far.
On Linux, I have used lsof -n | fgrep cron (lsof is similar to BSD's fstat), and it seems like I might have found the file, but it is annotated as having been deleted (a standard practice for temporary files is to be deleted right after opening):
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
...
cron 21742 root 5u REG 202,0 7255 66310 /tmp/tmpfSuELzy (deleted)And cannot be accesses by its prior name anymore:
# stat /tmp/tmpfSuELzy
stat: cannot stat `/tmp/tmpfSuELzy': No such file or directory
How do I access such a deleted file that is still open?
| How can I access a deleted open file on Linux (output of a running crontab task)? |
Using tail in follow mode should allow you to do what you want.
tail -n +0 -f /proc/<pid>/fd/<fd> > abc.deleted
I just did a quick test and it seems to work here. You did not mention whether your file was a binary file or not. My main concern is that it may not copy from the start of the file, but the -n +0 argument should do that even for binary files.
The tail command may not terminate at the end of the download so you will need to terminate it yourself.
|
I started downloading a big file and accidentally deleted it a while ago. I know how to get its current contents by cping /proc/<pid>/fd/<fd> but since the download is still in progress it'll be incomplete at the time I copy it someplace else.
Can I somehow salvage the file right at the moment the download finishes but before the downloader closes the file and I lose it for good?
| Recover deleted file that is currently being written to |
If you have fuser installed and have the permission to use sudo:
for i in $(sudo fuser /dev/pts/0); do
ps -o pid= -o command= -p $i
done
eg:
24622 /usr/bin/python /usr/bin/terminator
24633 ksh93 -o vi |
On Linux: Normally pseudo terminals are allocated one after the other.
Today I realized that even after a reboot of my laptop the first opened
terminal window (which was always pts/0 earlier) suddenly became pts/5.
This was weird and made me curious. I wanted to find out which process is occupying the device /dev/pts/0 and had no luck using common tools like who and lsof or even ps as suggested in the comment:
pf@pfmaster-P170EM:pts/6 /var/log 1115> ps auxww | grep pts/0
pf 7042 0.0 0.0 17208 964 pts/6 S+ 12:32 0:00 grep --color=auto pts/0
What am I missing here? Possibly infected by a rootkit?
| Which process is occupying a certain pseudo terminal pts/X? |
A file manager is responsible for invoking applications to open a file. It has no control over what the application does with the file, and in particular whether the application will open a new window if you open the same file twice.
Having the same file open in multiple windows can be useful, for example when you want to see different sections from the same document. So systematically refusing to open more than one window on the same document would be bad. Which behavior is the default is mostly a matter of taste. Some applications default to opening a new window, others default to focusing the existing window.
Okular defaults to opening a new window. If you start all instances with okular --unique, then the second time you run that command, it doesn't open a new window (though it doesn't focus the existing window, at least if you aren't running KDE). Note that the first instance must have been started with --unique as well as the second one.
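For example, launching it both times like this reuses the already-running instance (the file name is a placeholder):
okular --unique ~/papers/A.pdf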
Evince, the Gnome PDF viewer, defaults to the behavior you want: if you open the same document a second time, it focuses the existing window. It doesn't have a command line option to open a separate window, you have to do this through the GUI (menu “File” → “Open a Copy” or Ctrl+N).
|
I have always been confused about why the file manager in Linux cannot stop applications from opening a single file twice at the same time.
Specifically, I want to stop the PDF file reader Okular from opening the file A.pdf again when I have already opened it. I need to get a warning or just be shown the already-opened copy of the file A.pdf.
More generally, I would like this to happen with any application, not just Okular. I want to make the document management behavior in Linux the same as in Windows.
| If I open the same file twice in Okular, switch to the existing window |
This is described on Emacs Beginner's Howto.
With the line
(setq auto-mode-alist (cons '("README" . text-mode) auto-mode-alist))
you tell emacs to enter "text-mode" if you open a file which is named README.
with
(setq auto-mode-alist (cons '("\\.html$" . html-helper-mode) auto-mode-alist))
(setq auto-mode-alist (cons '("\\.htm$" . html-helper-mode) auto-mode-alist))
you tell emacs to enter "html-helper-mode" if the file is named *.html or *.htm
On Stack Overflow there is an example that highlights *.emacs files as Lisp code:
(setq auto-mode-alist
(append '((".*\\.emacs\\'" . lisp-mode))
auto-mode-alist)) |
How do I get Emacs to recognize new file extensions? For example if I have a .c file and I open it in Emacs, I get the correct syntax highlighting for C, but if I have a .bob file format (which I know to be C), how do I tell Emacs to interpret it in the same way as a .c file?
| Emacs' file extension recognition |
Linux normally doesn't do any locking (contrary to Windows). This has many advantages, but if you must lock a file, you have several options. I suggest flock:
apply or remove an advisory lock on an open file.
This utility manages flock(2) locks from within shell scripts or from the command line.
For a single command (or entire script), you can use
flock --exclusive /var/lock/mylockfile -c command
If you want to execute more commands in your script under the lock, use
#!/bin/bash
....
(
    flock --nonblock 200 || exit 1
    # ... commands executed under lock ...
) 200>/var/lock/mylockfile
All operations following the flock call inside the sub-shell (...) are executed only if no other process currently holds a flock on /var/lock/mylockfile. The lock is automatically dropped after the sub-shell exits.
flock can also wait until the file lock has been dropped (that's the default). In this case do not use the --nonblock option, which makes flock fail if no successful lock can be obtained.
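You can see the difference by opening two terminals (a sketch, reusing the lock file from above):
# terminal 1: hold the lock for 30 seconds
flock /var/lock/mylockfile -c 'sleep 30'
# terminal 2, while the first is still running: fail immediately instead of waiting
flock --nonblock /var/lock/mylockfile -c 'echo got the lock' || echo 'lock is busy'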
|
I have a shell script which will be executed by multiple instances, and if an instance is accessing a file and doing some operation, how can I make sure other instances are not accessing the same file and corrupting the data?
My question is not about controlling the parallel execution but dealing with file lock or flagging mechanism.
Request some suggestion to proceed.
| How to make sure only one instance accessing the file at a time in a folder? |
Posting another solution, since the file being written randomly breaks my tail idea. Thinking rsync might be promising here, since rsync can operate using a delta transfer algorithm, saving transfer time by only sending the changed parts of a file. If you run rsync on two local files, it will default to --whole-file mode, which is not what you want.
Suggesting
rsync -av --inplace --no-whole-file /your/local/file.dat /your/remote/file.dat
... or maybe (if the CIFS mount doesn't agree with delta transfer) use pure rsync:
rsync -av --inplace --no-whole-file /your/local/file.dat remoteserver:/your/directory/file.dat
So you would run this multiple times while your 200 GB file is filling up. Each time you run it, it updates the remote file incrementally. This should even work when the source file is being updated randomly. Maybe you could run this every 15 minutes. Then when your pid finishes, you would run it once more, and it would just be a quick incremental delta.
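A rough way to automate that (a sketch; the pid and paths are placeholders) is to loop while the writing process is still alive and then do one final pass:
# repeat the incremental copy every 15 minutes while pid 12345 is alive
while kill -0 12345 2>/dev/null; do
    rsync -av --inplace --no-whole-file /your/local/file.dat remoteserver:/your/directory/file.dat
    sleep 900
done
# one last pass after the writer has exited picks up the remaining delta
rsync -av --inplace --no-whole-file /your/local/file.dat remoteserver:/your/directory/file.dat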
|
I have an application running that is generating a large (~200GB) output file, and takes about 35 hours to run (currently I'm about 12 hours in). The application just opens the file once then keeps it open as it is writing until it is complete; the application also does a lot of random access writes to the file (i.e. not sequential writes).
Right now the file is being saved to my local hard drive but I just decided that when it's done, I'm going to move it to a different device instead (a network drive, NTFS mounted via SMB).
To save time instead of moving the file later, is there some way I can suspend the program and somehow move the current partially complete file to the other device, do some tricks, and resume the program so it is now using the new location?
I'm pretty much positive that the answer is no but I thought I'd ask, sometimes there are surprising tricks out there...
| Moving an open file to a different device |
Several things might be confusing here.
Filedescriptors are attached to a file (in the general sense) and are specific to a given process. Filedescriptors are themselves referred to via numeric ids by their associated process, but one file descriptor can have several ids. Example: ids 1 and 2, which are called standard output and standard error, usually refer to the same file descriptor.
The symlinks /proc/pid/fd/x only provide a hint for what the x filedescriptor of process pid is linked to. If it's a regular file, the symlink gives its path. But if the filedescriptor is e.g. an inet socket, then the symlink is just broken. In the case of a regular file (or something which has a path like a tty), it's possible to open it, but you would obtain a different filedescriptor to the same object.
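You can see this from any shell; 0, 1 and 2 are distinct ids but typically all point at the same terminal:
ls -l /proc/$$/fd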
|
Are file descriptors unique across a process, or throughout the whole system? Because every file seems to use the same descriptor for stdin and stdout. Is there something special with these? How do stdin and stdout work? I realize that /dev/fd is a link to /proc/self/fd, but how do they all have the same number?
Edit: Even after looking at other processes most of the file descriptors are about the same numbers.
| file descriptors and /dev/fd |
When you use process substitution with <(...) or >(...), bash will open a pipe to the other program on an arbitrary high file descriptor (I think it used to count up from 10, but now it counts down from 63) and pass the name as /dev/fd/N on the command line of the first program. This isn't POSIX, but other shells also support it (it's a ksh88 feature).
That's not exactly a feature of the program you're running though, it just sees /dev/fd/N and tries to open it like a regular file.The Autoconf manual mentions some historic notes:A few ancient systems reserved some file descriptors. By convention, file descriptor 3 was opened to /dev/tty when you logged into Eighth Edition (1985) through Tenth Edition Unix (1989). File descriptor 4 had a special use on the Stardent/Kubota Titan (circa 1990), though we don't now remember what it was. Both these systems are obsolete, so it's now safe to treat file descriptors 3 and 4 like any other file descriptors.Also while I did a google search for this I found a program called runit that uses file descriptors 4 and 5 for some purpose related to log rotation.And quoting from the svlogd man page:If svlogd is told to process recent log files, (...). svlogd also saves any output that the processor writes to file descriptor 5, and makes that output available on file descriptor 4 when running processor on the next log file rotation. |
Standard file descriptors <= 2 are opened by default. A program can write to or read from, a file descriptor after 2, without using the open system call to obtain such a descriptor. The program can then advertise in its manual, which file descriptors it is using and how. To make use of this, a POSIX shell can open a file and assign that file to a descriptor with the exec built-in. After that, the shell would start the program which will use that descriptor and file.
One reason for doing that would be if the program wants to have more than one output or input file, and does not want to specify them as command line arguments. If there was just one file, you could just redirect a standard file descriptor.
I have never seen a generally available program which would advertise such a thing in its manual. Does this happen in practice? Has anybody heard of such a thing?
Yes, I do want to stay within POSIX world - so no bash-only built-ins. I just want to know if there is such a program, not a shell built-in.
| Which programs use a file descriptor higher than 2? |
There is not really a more specific term. In traditional Unix, file descriptors reference entries in the file table, entries of which are referred to as files, or sometimes open files. This is in a specific context, so while obviously the term file is quite generic, in the context of the file table it specifically refers to open files.
Files on disk are often referred to as inodes, although technically the inode is the metadata portion of the file. However since the relationship between the inode and the data blocks is one-to-one, referring to an inode implicitly refers to the data to which it points. More modern filesystems may support features such as copy-on-write where data blocks may be shared, so this is not universally true, but is for traditional Unix. However, given the term filesystem, it is no small leap to consider the contents of that to be files.
inodes are also read into memory as in-core inodes when a file on disk is opened, and maintained in the inode table, but that is one level of indirection further than you are asking.
This leads to the conflict in terminology: files on disk (referenced by an inode), and open files (referenced by entries in the file table).
I would suggest that "open file" or "file table entry" are both adequate descriptions of what you are looking to describe.
One reasonably concise reference I found is at: http://www.hicom.net/~shchuang/Unix/unix4.html. References of the form (Bach nn) are references to the book The Design Of The Unix Operating System by Maurice J. Bach.
|
My understanding is that a file descriptor is an integer which is a key in the kernel's per-process mapping to objects such as open()ed files, pipes, sockets, etc.
Is there a proper, short, and specific name for “open files/sockets/pipes/...”, the referents of file descriptors?
Calling them “files” leads to confusion with unopened files stored in the file system. Simply referring to file descriptors does not adequately describe the semantics (e.g. copying the integer between processes is useless).
Consulting The Open Group Base Specifications and my own system's manpages leads me to the conclusion that the referent of a file descriptor is an object and when it is specifically an open file it is, well, an open file. Is there a more specific term than object?
| What is the referent of a file descriptor? |
It doesn't tell me everything, but I used fuser ~/.myfile.txt.swp which gave me the PID of the vim session. Running ps aux | grep <PID> I was able to find out which vim session I was using, which gave me a hint as to which window I had it open in.
Thanks to Giles's inspiration and a bit of persistence and luck, I came up with the following command:
⚘ (FNAME="/tmp/.fnord.txt.swp"; tmux switch -t $(tmux list-panes -a -F '#{session_name}:#{window_index}.#{pane_index} #{pane_tty}' | grep $(ps -o tty= -p $(lsof -t $FNAME))$ | awk '{ print $1 }'))To explain what this does:
(FNAME="/tmp/.fnord.txt.swp";
This creates a subshell and sets FNAME as an environment variable. It's not, strictly speaking, necessary - you could just replace $FNAME with the filename yourself, but it does make editing things easier. Now, working from the inside out:
lsof -t $FNAME
This produces only the PID of the process that has the file open.
ps -o tty= -p $(...)
This produces the pts of the PID that we found using lsof.
tmux list-panes -a -F '#{session_name}:#{window_index}.#{pane_index} #{pane_tty}'
This produces a pane list of entries like session:0.1 /dev/pts/1. The first part is the format that tmux likes for targets, and the second part is the pts.
| grep $(...)$
This filters our pane list - the trailing $ is so it will only match the one we care about. I discovered that quite by accident as I had pts/2 and pts/22, so there were two matches, whoops!
| awk '{ print $1 }'
This produces the session:0.1 part of the pane output, which is suitable for passing to tmux switch -t.
This should work across sessions as well as panes, bringing to focus the pane that contains your swap file.
|
I use tmux at work as my IDE. I also run vim in a variety of tmux panes and will fairly often background the process (or alternatively I just close the window - I have vim configured not to remove open buffers when the window is closed). Now I've a problem, because a file that I want to edit is open in one of my other vim sessions but I don't know which one.
Is it possible to find out which one, without manually going through all my windows and panes? In my particular case, I know that I didn't edit it with vim ~/myfile.txt because ps aux | grep myfile.txt doesn't return anything.
| Is it possible to find which vim/tmux has my file open? |
You could try rwsnoop (http://dtracebook.com/index.php/File_System:rwsnoop) to monitor i/o access using dtrace:
# rwsnoop - snoop read/write events.
# Written using DTrace (Solaris 10 3/05).
#
# This is measuring reads and writes at the application level. This matches
# the syscalls read, write, pread and pwrite.
good luck!
|
I have a Solaris 10 server with autofs-mounted home dirs. On one server they are not unmounted after the 10 min timeout period. We've got AUTOMOUNT_TIMEOUT=600 in /etc/default/autofs, I ran automount -t 600, disabled and re-enabled svc:/system/filesystem/autofs:default service and nothing seems to work.
My suspicion is that something on the system is periodically accessing all the mounted filesystems, maybe checking if they are accessible, and thus resetting the automounter timeout that in turn never expires. This is supported by a test I just did - if I set the timeout to 10 seconds the mountpoints are unmounted, looks like 10 sec is shorter than the period in which that something is doing the checks and the timer has a chance to expire.
The question is how can I find what process is doing that? The server is a heavily used production system and I can't do any dangerous experiments on it.
Note that the filesystems are not kept open and can be manually unmounted. That something is probably going mountpoint by mountpoint, cd in, cd out, move on, often enough to prevent automount from unmounting it. But it doesn't keep it open and therefore is not visible with lsof or fuser -c. I want to catch it or record it as soon as it accesses the mountpoints to know what's doing it.
FWIW it's a Solaris 10 zone on rather beefy Solaris 10 host (Sparc / M5000).
| What process is accessing a mounted filesystem sporadically? |
See this StackOverflow link for a dtrace based answer to this. I've tested it on FreeBSD and it works perfectly:
capture() {
sudo dtrace -p "$1" -qn '
syscall::write*:entry
/pid == $target && arg0 == 1/ {
printf("%s", copyinstr(arg1, arg2));
}
'
} |
Under Linux I often use /proc/<pid>/fd/[0,1,2] to access std[in,out,err] of any running process.
Is there a way to achieve the same result under FreeBSD and/or macOS ?
| Grab standard input/ouput of a running process under FreeBSD/macOS |
Add ulimit -n 262144 to the condor init script.
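That is, something along these lines near the top of the init script, before the daemon is launched (a sketch; the exact script path depends on your Condor packaging):
# in /etc/init.d/condor, before condor_master is started
ulimit -n 262144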
|
I'm trying to increase the default file descriptor limits for processes on my system. Specifically I'm trying to get the limits to apply to the Condor daemon and its sub-processes when the machine boots. But the limits are never applied on machine boot.
I have the limits set in /etc/sysctl.conf:
[root@mybox ~]# cat /etc/sysctl.conf
# TUNED PARAMETERS FOR CONDOR PERFORMANCE
# See http://www.cs.wisc.edu/condor/condorg/linux_scalability.html for more information
# Allow for more PIDs (to reduce rollover problems); may break some programs
kernel.pid_max = 4194303
# increase system file descriptor limit
fs.file-max = 262144
# increase system IP port limits
net.ipv4.ip_local_port_range = 1024 65535
And in /etc/security/limits.conf:
[root@mybox ~]# cat /etc/security/limits.conf
# TUNED PARAMETERS FOR CONDOR PERFORMANCE
# See http://www.cs.wisc.edu/condor/condorg/linux_scalability.html for more information
# Increase the limit for a user continuously by editing etc/security/limits.conf.
* soft nofile 32768
* hard nofile 262144 #65536
The trouble I run into is, on system reboot, the limits don't seem to apply to Condor and its processes. After a reboot, if I look at the file descriptor limit for a Condor process I see:
[root@mybox proc]# cat /proc/`/sbin/pidof condor_schedd`/limits | grep 'Max open files'
Max open files 1024 1024
But if I restart the condor_schedd process after a reboot the limits are increased as expected:
[root@mybox proc]# cat /proc/`/sbin/pidof condor_schedd`/limits | grep 'Max open files'
Max open files 32768 262144
The boot.log indicates these limits are being set before my Condor daemon and its processes are being started:
May 18 07:51:52 mybox sysctl: net.ipv4.ip_forward = 0
May 18 07:51:52 mybox sysctl: net.ipv4.conf.default.rp_filter = 1
May 18 07:51:52 mybox sysctl: net.ipv4.conf.default.accept_source_route = 0
May 18 07:51:52 mybox sysctl: kernel.sysrq = 0
May 18 07:51:52 mybox sysctl: kernel.core_uses_pid = 1
May 18 07:51:52 mybox sysctl: kernel.pid_max = 4194303
May 18 07:51:52 mybox sysctl: fs.file-max = 262144
May 18 07:51:52 mybox sysctl: net.ipv4.ip_local_port_range = 1024 65535
May 18 07:51:52 mybox network: Setting network parameters: succeeded
May 18 07:51:52 mybox network: Bringing up loopback interface: succeeded
May 18 07:51:57 mybox ifup: Enslaving eth0 to bond0
May 18 07:51:57 mybox ifup: Enslaving eth1 to bond0
May 18 07:51:57 mybox network: Bringing up interface bond0: succeeded
May 18 07:52:17 mybox hpsmhd: smhstart startup succeeded
May 18 07:52:17 mybox condor: Starting up Condor
May 18 07:52:17 mybox rc: Starting condor: succeeded
May 18 07:52:17 mybox crond: crond startup succeeded
Obviously I'd like to avoid having to boot a machine and then restart processes that I need these increased limits to apply to -- what have I done wrong that's preventing these limits from applying to the processes when the machine boots?
| File descriptor limits are lost after a system reboot |
The permissions of a file are checked when the file is opened. Changing the permissions doesn't affect what processes that already have the file open can do with it. This is used sometimes with processes that start with additional privileges, open a file, then drop those additional privileges: they can still access the file but may not be able to reopen it.
However editors typically do not keep a file open. When an editor opens a document, what happens under the hood is that the editor loads the file contents in memory and closes the file. When you save the document, the editor opens the file and writes the new content.
Editors can follow one of two strategies when saving a file. They can create a new file, then move it into place. Alternatively, they can open the existing file and overwrite the old contents. Overwriting has the advantage that the file's permission and ownership do not change, and that it works even in a read-only directory. The major disadvantage of overwriting is that if saving fails midway (editor crash, system crash, disk full, …), you are left with a truncated document. Different editors choose different strategies; the good ones do write-to-new-then-move if possible, and overwrite only in a read-only directory (after making a backup somewhere else).
If the editor follows the new-then-move strategy, the permissions on the file don't matter: the editor will create a new file, and it only needs write permission on the directory for that. There are two exceptions: if the directory has the sticky bit, changing the ownership of the file (but not the permission) may make it impossible for the process to move the new file into place. Another exception is on systems that support delete permission through ACLs (such as OSX): revoking the delete permission from the file may make the move impossible.
If the editor follows the overwrite strategy, revoking write permission will make saving impossible. (However, some editors that overwrite by default may fall back to new-then-move.)
In Vim, you can force the overwrite strategy by setting the backupcopy option to yes; see also why inode value changes when we edit in "vi" editor?. In Emacs, you can force the overwrite strategy by setting the backup-by-copying variable to t.
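In shell form, that amounts to appending one line to each configuration file (a sketch; it assumes the usual config locations and the documented option semantics):
# Vim: overwrite the file in place when saving (keeps the same inode)
echo 'set backupcopy=yes' >> ~/.vimrc
# Emacs: make backups by copying, so saves also overwrite in place
echo '(setq backup-by-copying t)' >> ~/.emacs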
|
Let's say you open a file on which you have write permission.
Meanwhile you change permissions and remove write permission while you still have the file open in some editor.
What will happen if you edit and save it?
| File permissions and saving |
Under the assumption that the PDFs you are viewing have the extension .pdf, the following could work to get you a list of open PDFs:
$ lsof | grep ".pdf$"
If you only ever use Evince, see Gilles' similar answer. On my machine (with a few pdfs open) it displayed output as follows
evince 6267 myuser 14u REG 252,0 363755 7077955 /tmp/SM.pdfTo just get the filenames, we can use awk:
$ lsof | grep ".pdf$" | awk '{print $9}'
or even better,
$ lsof | awk '/.pdf$/ {print $9}'
We can save these results in a file:
$ lsof | awk '/.pdf$/ {print $9}' > openpdfs
And later, to restore them:
$ xargs -a openpdfs evince
To make this happen automatically, you can use whatever mechanisms your desktop environment provides to run the "save" command on exit and the "open" command on login. A bit more robustness can be added by ensuring that the pdfs returned by lsof are being opened by your user. One advantage of this method is that it should work for any pdf viewer that takes command-line arguments. One disadvantage is that it depends on file names; however, with a bit of poking, the requirement of knowing the filename extension could probably also be removed.
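For example, restricting the listing to your own user and to one particular viewer could look like this (a sketch; evince is just an example process name, and -a makes lsof AND the two selections):
lsof -a -u "$USER" -c evince | awk '/\.pdf$/ {print $9}' > openpdfs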
|
I constantly have many PDF files open. These are usually downloaded using chrome and immediately opened using evince.
I sometimes want to persist the state of all my open PDF files, so I could re-open the same group of documents at a later time.
This mostly happens when I need to reboot and want to have the same set of documents re-opened, but sometimes I just want to keep a list of open documents for later.
Is there a way to get the names of all open pdf files, from evince or any other program?
or at least, is there a way of asking evince to re-open the same set of documents after a reboot?
| retrieving names of all open pdf files (in evince or otherwise) |
Felrood from Arch Linux forums provided a solution and I would like to share it here and close this question.
Gedit seems to display data from stdin in a new "Unsaved document". For example:
echo "foobar" | gedit
What can be done is this:
right click the Kmenu button -> edit applications -> find gedit there
(for me that is "utilities") -> put "gedit $1 < /dev/null" in gedits
command field -> save
For me that solved the problem no matter whether I use krusader, dolphin, alt+f2 or something else.
|
When I open files by double-clicking a file with the mouse I always get one additional "Unsaved Document X", which is very annoying, because I have to close them all, and click "Close without save" every time... This happens in dolphin, nautilus and krusader (those are the ones where I tried it, so I guess it's not because of a file manager).
When I try opening a file from terminal using "gedit filename" the problem is not there. It also does not happen if I open files from within gedit.
Any hints on how to fix this?
This started happening I think somewhere around the time when gnome3 came into Arch official repos.
(I use up-to-date Arch and KDE4.6)
| Gedit opening an "Unsaved document" on opening files with mouse |
You are confusing two different counters: the file system link counter and the file descriptor reference counter.
The file system link counter counts how many links to an inode are in the file system itself. The inode is the structure that contains the file metadata. In ext* file systems this counter is stored in the file system itself.
You can verify how many links an inode has using ls -l. In addition, you can use ls -i to get the inode number of a file. E.g. try to multiply the links to a file using ln and verify that all links have the same inode number.
andcoz@tseenfoo:~/refcount> ls -li
total 40
2248813 -rw-r--r-- 1 andcoz users 40960 7 feb 21.34 test
andcoz@tseenfoo:~/refcount> ln test test2
andcoz@tseenfoo:~/refcount> ln test test3
andcoz@tseenfoo:~/refcount> ls -li
total 120
2248813 -rw-r--r-- 3 andcoz users 40960 7 feb 21.34 test
2248813 -rw-r--r-- 3 andcoz users 40960 7 feb 21.34 test2
2248813 -rw-r--r-- 3 andcoz users 40960 7 feb 21.34 test3The file descriptor reference counter counts how many times a file is open by a process or, more formally, how many file descriptors reference that inode. This information is stored in kernel memory.
You can get an approximation of this value using the fuser command. This command lists all the processes that have a file open. Note that a single process could open the same file multiple times, so the fuser list size is less than or, usually, equal to the reference counter.
andcoz@tseenfoo:~/refcount> tail -f test &
[3] 4226
andcoz@tseenfoo:~/refcount> fuser test
/home/andcoz/refcount/test: 4226
andcoz@tseenfoo:~/refcount> tail -f test2 &
[4] 4354
andcoz@tseenfoo:~/refcount> fuser test
/home/andcoz/refcount/test: 4226 4354A file is removed from the file system when both the counters are zero.
|
Introduction
Until recently, I thought that on ext file system, inodes have reference counters which count the number of times the file is referenced by a directory entry or a file descriptor.
Then, I learned that the reference counter only counts the number of directory entries referencing it. To falsify this, I read the reference count of a video file using ls -l. It was 1 as I expected because I didn't create any additional hard links to it. I then opened the video file with a video player and executed the same command again. To my surprise, the reference count was still 1. Therefore, I failed at falsifying.
However, I can definitely continue watching the video after removing its only directory entry. When opening a big video file and deleting its directory entry, the amount of free storage space on the file system does not change. It only changes (by the size of the video file) when the player reached the end of the video and closes the file descriptor or the player terminates itself (depending on the video player used).
Question
What are the exact conditions for a file to be freed on an ext file system? I'm interested in how it is handled in ext2, ext3, and ext4. Are there differences depending on the kernel used or other parts of the operating system?
| When is a file freed in an ext file system? |
From man gzip:
-k, --keep Keep (don't delete) input files during compression or
decompression.
So gzip -k log.txt should do the trick.
(But generally, a real logging solution, i.e., some syslog daemon, maybe using log4j, could possibly be preferable.)
|
I'm running a java server on Debian with this command:
java -jar myapp.jar [args] >> log.txt
Once, I gzipped the log file to send it, and then I realized the original file was gone, leaving me with only the .gzip.
Although I created the file manually (and also tried to unzip the original) the app didn't log anymore to that file. So my questions are: where does that log go after that? Is there any way to re-route the output log file without restarting the app (as it is a server, I'd rather not kill the process).
| Capturing new output after deleting the output file |
This depends on how the script is writing: if directly by redirection (i.e. my_script.sh > ~/1212_000001/some_file), you can use lsof -p <script-pid> and you'll see the open file in your output directory.
Else, the output of ps axjf will show you the pid dependencies of sub-processes launched by your script, which may give you the information immediately in command arguments or allow you to play lsof -p <sub-process-pid> on sub-processes. Of course, if this occurs in a very short time, this method will not apply.
You can also make use of the strace command and look for "open" calls.
|
This question is NOT a duplicate of this question Find out current working directory of a running process?, because the writing directory can be different from the working directory.
For example, I start two processes by running my_script.sh in ~/ twice (one right after another).
In my_script.sh, I have made it to write the output to a folder whose name is a time stamp. Hence, despite the same working directory, the two processes actually write into different directories, say ~/1212_000001/ and ~/1212_000003/. (the 1st process starts at 00:00:01 on Dec. 12 and the 2nd starts 2 seconds later)
The solution in the linked question returns me the same result ~/ for the two processes and thus fails my purpose.
How to do it?
| Find out to which directory a process is writing? |
There are two settings that limit the number of open files: a per-process limit, and a system-wide limit. The system-wide limit is set by the fs.file-max sysctl, which can be configured in /etc/sysctl.conf (read at boot time) or set on the fly with the sysctl command or by writing to /proc/sys/fs/file-max. The per-process limit is set by ulimit -n.
The per-process limit is inherited by each process from its parent. A default value can be set in /etc/security/limits.conf, but this only applies to interactive sessions, not to daemons started at boot time. It will apply to a daemon only if it's started via an interactive session.
To increase (or decrease) per-process limits for a daemon, in general, edit its startup script and add a call to ulimit just before the daemon is started. The Debian redis package comes with a configuration setting in a separate file: /etc/default/redis. Uncomment the ULIMIT= line and increase the value if necessary.
|
I am running Debian wheezy. File limits are increased to 100000 for every user.
ulimit -a and ulimit -Hn / -Sn show me the right amounts of maximum open file limits even in screen.
But for some reason I am not able to have more than ~4000 connections / open files.
from sysctl.conf:
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 500 65000
net.core.somaxconn = 81920
Output of ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256639
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 999999
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 256639
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
for example redis:
client: benchmark with 100 clients
Writing to socket: Connection reset by peer
Writing to socket: Connection reset by peer
Writing to socket: Connection reset by peer
Writing to socket: Connection reset by peer
Error: Connection reset by peer
server info:
127.0.0.1:6379> info clients
-Clients
connected_clients:4005
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
Java:
Caused by: io.netty.channel.ChannelException: Failed to open a socket.
Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.SelectorProviderImpl.openSocketChannel(Unknown Source)
java.io.IOException: Too many open files
Caused by: io.netty.channel.ChannelException: Failed to open a socket.
Caused by: java.net.SocketException: Too many open files
at sun.nio.ch.SelectorProviderImpl.openSocketChannel(Unknown Source)
Caused by: io.netty.channel.ChannelException: Failed to open a socket.
Caused by: java.net.SocketException: Too many open filesls -l /proc/[id]/fd | wc -l shows ~4000 descriptors
| Daemon's open file limit is reached even though the system limits have been increased |
“Most of these seem to be various bids of Nepomuk.” But you killed all of them, not just the Nepomuk ones. So some other process must have been caught in the fray — presumably one critical to KDE, without which the window manager or the session manager crashed, possibly the window manager or the session manager itself.
If you haven't logged back in, check the file ~/.xsession-errors, it may have a relevant message near the end. Other than that, I don't expect that you'll be able to find useful traces of what happens. Next time, check what you're killing before you kill it.
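For example, a verbose listing shows which processes are holding the mount point before anything gets killed (output shape is illustrative):
fuser -vm '/media/Panp9 Backups'
The -v output includes USER, PID, ACCESS and COMMAND columns, so you can see whether only Nepomuk services, or something session-critical, is using the drive before reaching for -k.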
|
I have an external hard drive on which I use rsync to backup my home directory. Today, I tried to umount the drive. It said it was busy. So, I used fuser to figure out who was using it:
/media/Panp9 Backups: 10198rce 10283rce 10284rce 10337rce 10338rce 10339rce 10341rce
10345rce 10348rce 10353rce 10354rce 10356rce 10362rce 10367rce
10371rce 10374rce 10378rce 10384rce 10387rce 10389rce 10396rce
10433rce 10436rce 10439rce 10441rce 10443rce 10447rce 10448rce
10457rce 10460rce 10466rce 10467rce 10473rce 10478rce 10487rce
10492rce 10504rce 10522rce 10538rce 10555rce 10560rce 10561rce
10562rce 10563rce 10564rce 10565rce 10566rce 10567rce 10720rce
10721rce 10722rce 10723rce 11083rce 11088rce 11090e 11094e
Most of these seem to be various bids of Nepomuk. So, I went ahead and issued a fuser -c -k in my frustration. Apparently, this command (without sudo, I should mention) managed to kill X11. I'm at a loss and can't figure out why this is happening. Can anyone help me?
System Details:
Intel i7, 4 cores
8 GB RAM
Linux Mint Nadia 14 KDE | Bizarre Disk Use Problem |
The inotify system on Linux, or the kqueue system on BSD/OSX, gives you an event-driven ("interrupt-like") mechanism to do this.
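For example, with the inotify-tools package installed (an assumption; it is a separate utility, not part of coreutils), a shell process can block until the file actually changes instead of polling stat() in a loop:
inotifywait -m -e modify,close_write /path/to/file.txt |
while read -r watched events; do
    echo "file changed: $events"   # react here, e.g. reload the buffer
done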
|
For example, the IDE I'm using at the moment (Aptana Studio) notifies me as soon as a file's contents it has open have been changed by some external program.
I can imagine having a periodic loop run stat() on a file and check the time of last data modification. Is this how it's normally done or is there a blocking interrupt-like mechanism used instead?
| Efficient mechanism to determine if open file has been externally modified? |
File descriptors are created for pretty much everything (since everything in Linux is a file), from connecting to another computer over the internet to running most applications. The resource limit is for that particular point in time. Keep in mind that even after the resource isn't being used, it can take several cycles for the shell to clean them up. To see what your hard limit is set to try doing ulimit -H -n this will show you the hard limit, when you do ulimit -n it's effectively like running ulimit -S -n which shows the soft limit.
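A quick illustration (the numbers are whatever your system reports):
ulimit -Sn               # soft limit, the one that actually triggers "too many open files"
ulimit -Hn               # hard limit, the ceiling an unprivileged process may raise the soft limit to
ls /proc/$$/fd | wc -l   # roughly how many descriptors the current shell holds right now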
|
On my machine,
ulimit -n returns 2560
Given that -n returns
The maximum number of open file descriptors.
Does it mean that the system won't allow more than 2560 open files to be out there at any given time?
If not, how can I find out what hard limit the system imposes on open files?
| Max Open Files, clarification needed |
In this context, a “stream” is an open file in a process. (The word “stream” can have other meanings that are off-topic here.)
The three standard streams are the ones that are supposed to be already open when a program starts. File descriptor 0 is called standard input because that's where a program is supposed to read user input or its default data input. File descriptor 1 is called standard output because that's where a program is supposed to write its normal data output. File descriptor 2 is called standard error because that's where a program is supposed to write its error messages.
Other file descriptor numbers are not standard anything because they don't have such a preassigned role. They'll end up being used for whatever the program wants. So you could call any file opened by a program a "nonstandard stream", but it would be weird and confusing: "open file other than stdin, stdout or stderr" doesn't really need a name, and "nonstandard stream" sounds like it's some special type of file or a file opened by a nonstandard method, which is not the case.
The conventional role of file descriptors 0–2 is granted by the standard library and by certain programs. For example, console login programs and terminal emulators start the shell (or other program) with the terminal open on these file descriptors. The C standard library creates FILE* objects (what C calls streams) for these three standard descriptors. There's no special treatment in the kernel.
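As a small sketch of that last point, a shell script can use any spare descriptor number it likes; only 0, 1 and 2 carry a conventional meaning (the path below is just an example):
exec 3>/tmp/mylog          # open fd 3 for writing
echo "normal output"        # still goes to fd 1
echo "extra log line" >&3   # goes wherever fd 3 points
exec 3>&-                   # close fd 3 again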
|
The so-called "standard streams" in Linux are stdin, stdout, and stderr. They must be called "standard" for a reason. Are there non-standard streams? Are those non-standard streams fundamentally treated differently by the kernel?
| Are there "non-standard" streams in Linux/Unix? |
Yes, vim does not open the file until it needs to save it. Instead, vim uses a temporary hidden swap file to save changes you make incrementally. Once you save the file (:w) it will write to the original file.
You can see that for yourself by using lsof, i.e.:
$ lsof -n -p `pidof vim`
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
[...]
vim 9695 gert 4u REG 252,1 12288 410388 /tmp/.a.swp
[...]This is common behaviour for editors. less just reads the file and it's of no benefit to use tricks when just opening a file for reading.
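You can confirm this with fuser as well, assuming vim's default swap-file location next to the edited file:
fuser /tmp/A        # prints nothing: vim does not keep the file itself open
fuser /tmp/.A.swp   # lists the vim process, which holds the swap file open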
|
First use Vim to edit a file, say /tmp/A.
Assuming that vim process is the only one that accesses /tmp/A, then use "ctrl+z" to suspend the process, and execute
fuser /tmp/A
Then you see nothing in the output.
However, if you use "less" to open that file, you could see the pid of less in the fuser output.
Is there anything special about vim that causes this weird scenario?
| vim not shown in fuser |
If process 1 has already started reading the file before process 2 overwrites it, then it will have some part of the contents stored in the stdio buffer. Once it crosses the buffer-size boundary it will be forced to go to the kernel, and then it will find the new overwritten contents.
|
It is well known that UNIX systems won't actually delete a file on disk while the file is in use. So if a file is being accessed by process 1 and process 2 deletes the file using rm, process 1 continues to see the file; additionally the file descriptor link at /proc/(process 1 id)/fd reports the original contents of the deleted file.
However, if process 2 overwrites the file as opposed to deleting it (say with echo "abracadabra" > file.txt), the file descriptor link at /proc/(process 1 id)/fd reports the overwriting material("abracadabra"), while process 1 is still able to access the original contents of the file.
Why this difference?
[Edit]The snippet below is in response to Jim Paris
>uname -a
Linux ravoori-netbook 3.2.0-32-generic-pae #51-Ubuntu SMP Wed Sep 26 21:54:23 UT
C 2012 i686 i686 i386 GNU/Linux
>echo original > /tmp/foo
>tail -0f /tmp/foo &
[2] 6144
>rm /tmp/foo
>cat /proc/6144/fd/3
original
>echo abracadabra > /tmp/foo
>cat /proc/6144/fd/3
original | File delete versus overwrite and link at /proc/pid/fd |
I've never tried this before, but...
There is a link to the file in /proc/8948/fd/. You can catenate the file as root (it's only readable as root), and pipe it to a new file. Whether the file is intact, I've not verified.
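For example, with the PID and descriptor number from the lsof line above (the output file name is arbitrary):
sudo cat /proc/8948/fd/25 > /tmp/recovered-video.flv
Do this while the browser process is still running; once it exits, the descriptor and the deleted data are gone.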
|
I'm trying the preview release of Flash Player "Square" for Linux and noticed that video files are now being deleted from the /tmp/ folder.
Yet the files are still in use (I can see them with lsof):
chromium- 8948 user 25u REG 8,5 2599793 229908 /tmp/FlashXXStJt3K (deleted)
Is there a way to prevent flash from deleting them or a way to recover them?
| Prevent libflashplayer.so from deleting a file? |
Simply use cat (if you like cats ;-)) and paste:
cat file.in | paste -d, - - > file.out
Explanation: paste reads from a number of files and pastes together the corresponding lines (line 1 from first file with line 1 from second file etc):
paste file1 file2 ...
Instead of a file name, we can use - (dash). paste takes the first line from file1 (which is stdin). Then, it wants to read the first line from file2 (which is also stdin). However, since the first line of stdin was already read and processed, what now waits on the input stream is the second line of stdin, which paste happily glues to the first one. The -d option sets the delimiter to be a comma rather than a tab.
Alternatively, do
cat file.in | sed "N;s/\n/,/" > file.out
P.S. Yes, one can simplify the above to
< file.in sed "N;s/\n/,/" > file.outor
< file.in paste -d, - - > file.outwhich has the advantage of not using cat.
However, I did not use this idiom on purpose, for clarity reasons -- it is less verbose and I like cat (CATS ARE NICE). So please do not edit.
Alternatively, if you prefer paste to cats (paste is the command to concatenate files horizontally, while cat concatenates them vertically), you may use:
paste file.in | paste -d, - - |
I have more than 1000 lines in a file. The file starts as follows (line numbers added):
Station Name
Station Code
A N DEV NAGAR
ACND
ABHAIPUR
AHA
ABOHAR
ABS
ABU ROAD
ABR
I need to convert this to a file, with comma separated entries by joining every two lines. The final data should look like
Station Name,Station Code
A N DEV NAGAR,ACND
ABHAIPUR,AHA
ABOHAR,ABS
ABU ROAD,ABR
...
What I was trying was to write a shell script and then echo them with a comma in between. But I guess a simpler, effective one-liner, maybe in sed/awk, would do the job here.
Any ideas?
| Text processing - join every two lines with commas |
Assuming you don't have any tab characters in your files,
paste file1 file2 | expand -t 13
with the arg to -t suitably chosen to cover the desired max line width in file1.
OP has added a more flexible solution:
I did this so it works without the magic number 13:
paste file1 file2 | expand -t $(( $(wc -L <file1) + 2 ))
It's not easy to type but can be used in a script.
|
I have the following two files ( I padded the lines with dots so every line in a file is the same width and made file1 all caps to make it more clear).
contents of file1:
ETIAM......
SED........
MAECENAS...
DONEC......
SUSPENDISSE
contents of file2:
Lorem....
Proin....
Nunc.....
Quisque..
Aenean...
Nam......
Vivamus..
Curabitur
Nullam...
Notice that file2 is longer than file1.
When I run this command:
paste file1 file2
I get this output
ETIAM...... Lorem....
SED........ Proin....
MAECENAS... Nunc.....
DONEC...... Quisque..
SUSPENDISSE Aenean...
Nam......
Vivamus..
Curabitur
Nullam...
What can I do for the output to be as follows?
ETIAM...... Lorem....
SED........ Proin....
MAECENAS... Nunc.....
DONEC...... Quisque..
SUSPENDISSE Aenean...
Nam......
Vivamus..
Curabitur
Nullam...
I tried
paste file1 file2 | column -t
but it does this:
ETIAM...... Lorem....
SED........ Proin....
MAECENAS... Nunc.....
DONEC...... Quisque..
SUSPENDISSE Aenean...
Nam......
Vivamus..
Curabitur
Nullam...
Not as ugly as the original output, but wrong column-wise anyway.
| A better paste command |
If you have root permissions on that machine you can temporarily increase the "maximum number of open file descriptors" limit:
ulimit -Hn 10240 # The hard limit
ulimit -Sn 10240 # The soft limit
And then
paste res.* >final.res
After that you can set it back to the original values.
A second solution, if you cannot change the limit:
for f in res.*; do cat final.res | paste - $f >temp; cp temp final.res; done; rm temp
It calls paste once for each file, and at the end there is a huge file with all columns (it takes a while).
Edit: Useless use of cat... Not!
As mentioned in the comments the usage of cat here (cat final.res | paste - $f >temp) is not useless. The first time the loop runs, the file final.res doesn't already exist. paste would then fail and the file is never filled, nor created. With my solution only cat fails the first time with No such file or directory and paste reads from stdin just an empty file, but it continues. The error can be ignored.
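If you prefer not to rely on that ignorable error, a variation on the same loop creates an empty final.res up front so every iteration, including the first, behaves the same way (the pasted result is identical; only the error message disappears):
: > final.res
for f in res.*; do paste final.res "$f" > temp; mv temp final.res; done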
|
I have ±10,000 files (res.1 - res.10000) all consisting of one column, and an equal number of rows.
What I want is, in essence, simple; merge all files column-wise in a new file final.res. I have tried using:
paste res.*
However, although this seems to work for a small subset of result files, it gives the following error when performed on the whole set: Too many open files.
There must be an 'easy' way to get this done, but unfortunately I'm quite new to unix. Thanks in advance!
PS: To give you an idea of what (one of my) datafile(s) looks like:
0.5
0.5
0.03825
0.5
10211.0457
10227.8469
-5102.5228
0.0742
3.0944
... | Combining large amount of files |
paste use \0 for null delimiter as defined by POSIX:
paste -d'\0' file1 file2
Using -d"" a b is the same as -d a b: the paste program sees three arguments -d, a and b, which makes a the delimiter and b the name of the sole file to paste.
If you're on a GNU system (non-embedded Linux, Cygwin, …), you can use:
paste -d "" file1 file2The form -d "" is unspecified by POSIX and can produce errors in other platforms. At least BSD and heirloom paste will report no delimiters error.
|
How do I join two files vertically without any separator? I tried to use paste -d"" a b, but this just gives me a.
Sample file:
000 0 0 0
0001000200030004
10 20 30 40
2000 4000
.123
12.1
1234234534564567 | paste files without delimiter |
To have abc inbetween file1 and file2, you can do:
paste -d abc file1 /dev/null /dev/null file2
Or:
paste -d abc file1 - - file2 < /dev/null
If you want two tabs:
paste file1 /dev/null file2 |
In Linux, I have the following problem with paste from (GNU coreutils) 8.13:
Trying to set another delimiter than the default (TAB) results in either just printing the first character of the defined delimiter or perfectly ignoring it.
Question: How does one define (multiple) delimiters when using paste?
Simply using, e.g., abc-123 as the delimiter would be nice. With "multiple" I mean e.g. 2 TABs instead of one.
The patterns enclosing the delimiter(s) I've tried so far were:
--delimiters="\delimiter"
--delimiters='\delimiter'
--delimiters=$"\delimiter"
--delimiters=$'\delimiter'All with the same result: Only the first character is accepted or perfectly ignored. I've also tried the short version -d"\" and multiple instances &ndahs nothing.
Also:
--delimiters="\\" → Error message
What works perfectly, though not what I want:
--delimiters="\n" → newline
--delimiters="\0" → nothing inbetween
--delimiters="\t"→ TAB, the default. Great. | paste command: setting (multiple) delimiters |
If using xterm or a derivative you can setup key bindings to start and end a text selection, and save it as the X11 primary selection or a cutbuffer. See man xterm. For example, add to your ~/.Xdefaults:
XTerm*VT100.Translations: #override\n\
<Key>KP_1: select-cursor-start() \
select-cursor-end(PRIMARY, CUT_BUFFER0)\n\
<Key>KP_2: start-cursor-extend() \
select-cursor-end(PRIMARY, CUT_BUFFER0)\nYou can only have one XTerm*VT100.Translations entry. Update the X11 server with the new file contents with xrdb -merge ~/.Xdefaults. Start a new xterm.
Now when you have some input at the command prompt, typing 1 on the numeric keypad will start selecting text at the current text cursor position, much like button 1 down on the mouse does. Move the cursor with the arrow keys then hit 2 on the numeric keypad and the intervening text is highlighted and copied to the primary selection and cutbuffer0. Obviously other more suitable keys and actions can be chosen. You can similarly paste the selection with bindings like insert-selection(PRIMARY).
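If the goal from the question is to end up in the clipboard rather than the primary selection, one extra step with xclip (if installed) copies one to the other after the keyboard selection has been made:
xclip -o -selection primary | xclip -i -selection clipboard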
|
I'm trying to figure out a way to copy the current text in a command line to the clipboard WITHOUT touching the mouse. In other words, I need to select the text with the keyboard only.
I found a half-way solution that may lead to the full solution:
Ctrl+a - move to the beginning of the line.
Ctrl+k - cuts the entire line.
Ctrl+y - yanks the cut text back.
Alternatively I can also use Ctrl+u to perform the first 2 steps.
This of course works, but I'm trying to figure out where exactly is the cut text saved. Is there a way to access it without using Ctrl+y ?
I'm aware of xclip and I even use it to pipe text straight to the clipboard, so I was thinking about piping the data saved by Ctrl+k to xclip, but not sure how to do it.
The method I got so far is writing a script which uses xdotool to add echo to the beginning of the line and | zxc to the end of the line, and then hits enter (zxc being a custom alias which basically pipes to xclip). This also works, but it's not a really "clean" solution.
I'm using Cshell if that makes any difference.
EDIT: I don't want to use screen as a solution, forgot to mention that.
Thanks!
| How to copy text from command line to clipboard without using the mouse? |
Using paste:
paste -d \\n file2 file1 |
File1:
.tid.setnr := 1123
.tid.setnr := 3345
.tid.setnr := 5431
.tid.setnr := 89323
File2:
.tid.info := 12
.tid.info := 3
.tid.info := 44
.tid.info := 60
Output file:
.tid.info := 12
.tid.setnr := 1123
.tid.info := 3
.tid.setnr := 3345
.tid.info := 44
.tid.setnr := 5431
.tid.info := 60
.tid.setnr := 89323 | Merge alternate lines from two files |
You can automate this with globbing, specifically the e glob qualifier, plus eval, but it isn't pretty and the quoting is tricky:
eval paste *.csv(e\''REPLY="<(cut -d, -f1 $REPLY)"'\')
The part between \'…\' is some code to execute for every match of the glob. It is executed with the variable REPLY set to the match, and can modify it.
I put the code in single quotes so that it isn't expanded when the glob is parsed.
The code REPLY="<(cut -d, -f1 $REPLY)" generates the string <(cut -d, -f1 file1.csv) if the match is file1.csv. The double quotes are necessary so that the part after the equal sign isn't expanded when the e code is executed apart from substituting the value of REPLY.
Since each globbed file is replaced by a string containing a process substitution, the whole command has to be run through eval. It would be nicer to hide the complexity in a function. Minimally tested.
function map {
emulate -LR zsh
local cmd pre
cmd=()
while [[ $# -ne 0 && $1 != "--" ]]; do
cmd+=($1)
shift
done
if ((!$#)); then
echo >&2 "Usage: $0: COMMAND [ARGS...] -- PREPROCESSOR [ARGS...] -- FILES..."
return 125
fi
shift
while [[ $# -ne 0 && $1 != "--" ]]; do
pre+="${(q)1} "
shift
done
if ((!$#)); then
echo >&2 "Usage: $0: COMMAND [ARGS...] -- PREPROCESSOR [ARGS...] -- FILES..."
return 125
fi
shift
eval "${(@q)cmd}" "<($pre${(@q)^@})"
}Sample usage (the syntax is reminiscent of zargs):
map paste -- cut -d, -f1 -- *.csv |
I often do operations like
paste <(cut -d, -f1 file1.csv) <(cut -d, -f1 file2.csv)
which is very tedious with more than a few files.
Can I automate this process, e.g. with globbing? I can save the cut results with
typeset -A cut_results
for f in file*.csv; do
cut_results[$f]="$(cut -d, -f1 $f)"
done
but I'm not sure how to proceed from there.
| How can I apply `cut` to several files and then `paste` the results? |
You can use paste for this:
paste -d '\0' aaaa.txt bbbb.txt > cccc.txt
From your question, it appears that the first file contains ; at the end. If it didn't, you could use that as the delimiter by using -d ';' instead.
Note that contrary to what one may think, with -d '\0', it's not pasting with a NUL character as the delimiter, but with an empty delimiter. That is the standard way to specify an empty delimiter. Some paste implementations like GNU paste allow paste -d '' for that, but it's neither standard nor portable (many other implementations will report an error about the missing delimiter if you use paste -d '').
|
Now,I have two files:
aaaa.txt:
a=0;
b=1;
c=2;
bbbb.txt:
d=3
e=4
f=5
I want to merge aaaa.txt and bbbb.txt to cccc.txt.
cccc.txt as follow:
a=0;d=3
b=1;e=4
c=2;f=5
So, what can I do for this?
| How to merge two files in corresponding row? |
with paste under bash you can do:
paste <(cut -f 4 1.txt) <(cut -f 4 2.txt) .... <(cut -f 4 20.txt)
With a python script and any number of files (python scriptname.py column_nr file1 file2 ... filen):
#! /usr/bin/env python
# invoke with column nr to extract as first parameter followed by
# filenames. The files should all have the same number of rows

import sys

col = int(sys.argv[1])
res = {}

for file_name in sys.argv[2:]:
    for line_nr, line in enumerate(open(file_name)):
        res.setdefault(line_nr, []).append(line.strip().split('\t')[col-1])

for line_nr in sorted(res):
    print '\t'.join(res[line_nr])
I have 20 tab delimited files with the same number of rows. I want to select every 4th column of each file, pasted together to a new file. In the end, the new file will have 20 columns with each column come from 20 different files.
How can I do this with Unix/Linux command(s)?
Input, 20 of this same format.
I want the 4th column denoted here as A1 for file 1:
chr1 1734966 1735009 A1 0 0 0 0 0 1 0
chr1 2074087 2083457 A1 0 1 0 0 0 0 0
chr1 2788495 2788535 A1 0 0 0 0 0 0 0
chr1 2821745 2822495 A1 0 0 0 0 0 1 0
chr1 2821939 2822679 A1 1 0 0 0 0 0 0
...Output file, with 20 columns, each column coming from one of the 20 files' 4th column:
A1 A2 A3 ... A20
A1 A2 A3 ... A20
A1 A2 A3 ... A20
A1 A2 A3 ... A20
A1 A2 A3 ... A20
... | Select certain column of each file, paste to a new file |
Just cat a.txt b.txt > out.txt. If you want even spaces and no blank lines
$ awk 'NF' inputA.txt inputB.txt
a a
a a
a a
b b
b b
b b | Let's assume that I've got two text file a, b.
$cat a
a a
a a
a a
$cat b
b b
b b
b b
Then, I want to merge these two files vertically by using paste. The merged file is shown below
a a
a a
a a
b b
b b
b b
NOT
$paste a b > AB
$cat AB
a a b b
a a b b
a a b b | How to merge text file vertically? [duplicate] |
What about paste file{1,2}| column -s $'\t' -tn?
looooooooong line line hello
line world
This tells column to use Tab as the column separator; we take it from the paste command, for which Tab is the default separator if none is specified. More generally:
paste -d'X' file{1,2}| column -s $'X' -tn
where X means any single character. You need to choose one that is guaranteed not to occur in your files.
The -t option is used to determine the number of columns the input contains.
This will not add a long run of tabs between the two files, as some other answers do.
This will also work even if there are empty lines in file1, and it will not print the second file in the print area of file1; see the input/output below.
Input file1:
looooooooong line

line
Input file2:
hello
world
Output:
looooooooong line hello
world
line |
I want to output two text files in two columns — one on the left side and other one on the right.
paste doesn't solve the problem, because it only inserts a character as a delimiter, so if the first file has lines of different lengths the output will be twisted:
$ cat file1
looooooooong line
line
$ cat file2
hello
world
$ paste file1 file2
looooooooong line hello
line worldIf there was a command to add trailing spaces like fmt --add-spaces --width 50 the problem would be solved(1):
$ paste <(fmt --add-spaces --width 50 file1) file2
looooooooong line hello
line worldBut I don't know a simple way to do this.
So how to merge files horizontally and print them to standard output without twisting?
In fact I just want to read them side-by-side.
(1) UPD: a command to add trailing spaces does exist, e.g. xargs -d '\n' printf '%-50s\n'.
But running
$ paste <(add-trailing-spaces file1) file2
won't produce expected visual output when file1 has fewer lines than file2.
| Print two files in two columns side-by-side |
The trouble is each line has a different length. The easiest solution is to give a large enough width to pr:
pr -mtw 150 art_file caption_fileIf you want the caption text to get closer, I suggest
awk '
l<length && NR<=n{l=length}
NR!=FNR{
printf "%-"l"s", $0
getline line < "caption"
print line
}
' n="$(wc -l < caption)" art artn is the number of lines of the caption file.
l is the length of the longest line between the first n lines of the art file.
printf right-pads the art file with spaces so that it all its lines have l length.
getline then gets a line from the caption file and prints it next to the just printed art line.Note that you can add or subtract to the value of l in printf to ad-hoc adjust the spacing.
.::""-, .::""-.
/:: \ /:: \
|:: | _..--""""--.._ |:: | _________ .__
'\:.__ / .' '. \:.__ / / _____/____ _____ ______ | | ____
||____|.' _..---"````'---. '.||____| \_____ \\__ \ / \\____ \| | _/ __ \
||:. |_.' `'.||:. | / \/ __ \| Y Y \ |_> > |_\ ___/
||:.-'` .-----. ';:. | /_______ (____ /__|_| / __/|____/\___ >
||/ .' '. \. | \/ \/ \/|__| \/
|| / '-. '. \\ |. | ___________ __
||:. _| ' \_\_\\/( \ | \__ ___/___ ___ ____/ |_
||:.\_.-' ) || m `\.--._.-""-; | |_/ __ \\ \/ /\ __\
||:.(_ . '\ __'// m ^_/ / '. _.`. | |\ ___/ > < | |
||:. \__^/` _)```'-...' _ .-'.' '-. |____| \___ >__/\_ \ |__|
||:..-'__ .' '. . ' '. `'. \/ \/
||:(_.' .`' _. ' '-. '. . ''-._
||:. : '. .' '. . ' ' '.` '._
||:. : '. .' .::""-: .''. ' . . ' ':::''-.
||:. .' ..' . /:: \ '. . '. /:: \
||:.' .' '. |:: | _.:---""---.._' |:: |
||. : '\:.__ / .' -. .- '. \:.__ /
||: : '. . ||____|_.' .--. .--. '._||____|
||:'.___: '. .' ||:. | ( \/ ) ||:. |
||:___| \ '. : ||:. | '-. .-' ||:. |
[[____] '. '.-._||:. | __ '..' __ ||:. |
'. : ||:. | (__\ (\/) /__) ||:. |
'. : ||:. | ` \/ ` ||:. |
'-: ||:. | () ||:. |
'._||:. |________________________||:. |
||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___|
[[____] [[____] |
art_file (cat -A output):
.::""-, .::""-.$
/:: \ /:: \$
|:: | _..--""""--.._ |:: |$
'\:.__ / .' '. \:.__ /$
||____|.' _..---"````'---. '.||____|$
||:. |_.' `'.||:. |$
||:.-'` .-----. ';:. |$
||/ .' '. \. |$
|| / '-. '. \\ |. |$
||:. _| ' \_\_\\/( \ |$
||:.\_.-' ) || m `\.--._.-""-;$
||:.(_ . '\ __'// m ^_/ / '. _.`.$
||:. \__^/` _)```'-...' _ .-'.' '-.$
||:..-'__ .' '. . ' '. `'.$
||:(_.' .`' _. ' '-. '. . ''-._$
||:. : '. .' '. . ' ' '.` '._$
||:. : '. .' .::""-: .''. ' . . ' ':::''-.$
||:. .' ..' . /:: \ '. . '. /:: \$
||:.' .' '. |:: | _.:---""---.._' |:: |$
||. : '\:.__ / .' -. .- '. \:.__ /$
||: : '. . ||____|_.' .--. .--. '._||____|$
||:'.___: '. .' ||:. | ( \/ ) ||:. |$
||:___| \ '. : ||:. | '-. .-' ||:. |$
[[____] '. '.-._||:. | __ '..' __ ||:. |$
'. : ||:. | (__\ (\/) /__) ||:. |$
'. : ||:. | ` \/ ` ||:. |$
'-: ||:. | () ||:. |$
'._||:. |________________________||:. |$
||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___|$
[[____] [[____]$caption_file (cat -A output):
$
$
_________ .__ $
/ _____/____ _____ ______ | | ____ $
\_____ \\__ \ / \\____ \| | _/ __ \ $
/ \/ __ \| Y Y \ |_> > |_\ ___/ $
/_______ (____ /__|_| / __/|____/\___ >$
\/ \/ \/|__| \/ $
___________ __ $
\__ ___/___ ___ ____/ |_ $
| |_/ __ \\ \/ /\ __\ $
| |\ ___/ > < | | $
|____| \___ >__/\_ \ |__| $
\/ \/ $
$
$I am trying to merge art_file with caption_file side by side. So far I have tried two methods:using pr -Jmt art_file caption_file .::""-, .::""-.
/:: \ /:: \
|:: | _..--""""--.._ |:: | _________ .__
'\:.__ / .' '. \:.__ / / _____/____ _____ ______ | | ____
||____|.' _..---"````'---. '.||____| \_____ \\__ \ / \\____ \| | _/ __ \
||:. |_.' `'.||:. | / \/ __ \| Y Y \ |_> > |_\ ___/
||:.-'` .-----. ';:. | /_______ (____ /__|_| / __/|____/\___ >
||/ .' '. \. | \/ \/ \/|__| \/
|| / '-. '. \\ |. | ___________ __
||:. _| ' \_\_\\/( \ | \__ ___/___ ___ ____/ |_
||:.\_.-' ) || m `\.--._.-""-; | |_/ __ \\ \/ /\ __\
||:.(_ . '\ __'// m ^_/ / '. _.`. | |\ ___/ > < | |
||:. \__^/` _)```'-...' _ .-'.' '-. |____| \___ >__/\_ \ |__|
||:..-'__ .' '. . ' '. `'. \/ \/
||:(_.' .`' _. ' '-. '. . ''-._
||:. : '. .' '. . ' ' '.` '._
||:. : '. .' .::""-: .''. ' . . ' ':::''-.
||:. .' ..' . /:: \ '. . '. /:: \
||:.' .' '. |:: | _.:---""---.._' |:: |
||. : '\:.__ / .' -. .- '. \:.__ /
||: : '. . ||____|_.' .--. .--. '._||____|
||:'.___: '. .' ||:. | ( \/ ) ||:. |
||:___| \ '. : ||:. | '-. .-' ||:. |
[[____] '. '.-._||:. | __ '..' __ ||:. |
'. : ||:. | (__\ (\/) /__) ||:. |
'. : ||:. | ` \/ ` ||:. |
'-: ||:. | () ||:. |
'._||:. |________________________||:. |
||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___|
[[____] [[____]paste art_file caption_file .::""-, .::""-.
/:: \ /:: \
|:: | _..--""""--.._ |:: | _________ .__
'\:.__ / .' '. \:.__ / / _____/____ _____ ______ | | ____
||____|.' _..---"````'---. '.||____| \_____ \\__ \ / \\____ \| | _/ __ \
||:. |_.' `'.||:. | / \/ __ \| Y Y \ |_> > |_\ ___/
||:.-'` .-----. ';:. | /_______ (____ /__|_| / __/|____/\___ >
||/ .' '. \. | \/ \/ \/|__| \/
|| / '-. '. \\ |. | ___________ __
||:. _| ' \_\_\\/( \ | \__ ___/___ ___ ____/ |_
||:.\_.-' ) || m `\.--._.-""-; | |_/ __ \\ \/ /\ __\
||:.(_ . '\ __'// m ^_/ / '. _.`. | |\ ___/ > < | |
||:. \__^/` _)```'-...' _ .-'.' '-. |____| \___ >__/\_ \ |__|
||:..-'__ .' '. . ' '. `'. \/ \/
||:(_.' .`' _. ' '-. '. . ''-._
||:. : '. .' '. . ' ' '.` '._
||:. : '. .' .::""-: .''. ' . . ' ':::''-.
||:. .' ..' . /:: \ '. . '. /:: \
||:.' .' '. |:: | _.:---""---.._' |:: |
||. : '\:.__ / .' -. .- '. \:.__ /
||: : '. . ||____|_.' .--. .--. '._||____|
||:'.___: '. .' ||:. | ( \/ ) ||:. |
||:___| \ '. : ||:. | '-. .-' ||:. |
[[____] '. '.-._||:. | __ '..' __ ||:. |
'. : ||:. | (__\ (\/) /__) ||:. |
'. : ||:. | ` \/ ` ||:. |
'-: ||:. | () ||:. |
'._||:. |________________________||:. |
||:___|'-.-'-.-'-.-'-.-'-.-'-.-||:___|
[[____] [[____]Both of them mess up the alignment of the second file, with paste generating a somewhat better output. So my questions are:Using either paste or pr can I generate desired output? Some option(s) I am overlooking, perhaps?
If neither of them are the correct tools for the job, other than writing a new program, what pre-existing solution can I use? | What is the correct way to merge two ASCII art files side by side while preserving alignment? |
Use awk seen if you don't want to sort the file:
$ awk '!seen[$0]++' a.txt b.txt
foo
bar
foobar
line
by |
What is the fastest command line way to merge the different lines of files? For example, I have two files:
a.txt:
foo
bar
foobar
b.txt
foo
foobar
line
by
bar
And I would like to get the following output:
foo
bar
foobar
line
by
Is there any fast way to merge files like the example above? (The order of the lines isn't important)
| How to merge the different lines of files? |
You're close; you just need to tell printf to zero-pad to the right of the decimal point:
$ cat 736678.txt
0.2
0.3339
0.111111
$ for value in $( cat 736678.txt ); do printf "%.3f\n" "$value"; done
0.200
0.334
0.111The format string %.3f means "a floating-point number with precisely three decimal places to the right of the point".
|
I have a paste command like this
paste -d , file1.csv file2.csv file3.csv
And file2.csv contains numbers like this
0.2
0.3339
0.111111I want the values in file2.csv having 3 decimals like this:
0.200
0.334
0.111For one value this is working:
printf "%.3f" "0.3339" -> 0.334
But for multiple values in file2.csv this is not working:
paste -d , file1.csv <(printf %s "%.3f" "$(< file2.csv)") file3.csv
Maybe there is a good solution?
| Rounding many values in a csv to 3 decimals (printf ?) |
With GNU awk for arrays of arrays:
$ awk '{a[$1][(NR>FNR)]=$2} END{for (i in a) print i, a[i][0], a[i][1]}' file{1,2}
1 dog woof
2 cat meow
3 fox
4 cow moohor with any awk:
$ awk '{keys[$1]; a[$1,(NR>FNR)]=$2} END{for (i in keys) print i, a[i,0], a[i,1]}' file{1,2}
1 dog woof
2 cat meow
3 fox
4 cow moohAlthough the above produces the output in numerically ascending order of the first field that's just luck/coincidence - the order of the output lines is actually "random" (typically hash order) courtesy of the "in" operator. Pipe the output to sort -k1,1n (or set PROCINFO["sorted_in"]="@ind_num_asc" at the start of the END section in GNU awk) if you care about that.
The significant differences between this and a join solution are that:This will work even if the input is not sorted while join requires input sorted on the key field(s) and,
If there's a line in file2 with a key not present in file1 (or vice-versa) this will display it spaced in a way that you can tell which file that unique line came from (unlike adding -a2 to a join command).Here's some more comprehensive sample input/output to test with:
$ head file{1,2}
==> file1 <==
1 dog
2 cat
4 cow
5 bear==> file2 <==
1 woof
2 meow
3 growl
4 moohwhich we can then run either of the above awk scripts on to get the same output:
$ awk '{a[$1][(NR>FNR)]=$2} END{for (i in a) print i, a[i][0], a[i][1]}' file{1,2}
1 dog woof
2 cat meow
3 growl
4 cow mooh
5 bearand note that 3 growl has an extra blank before growl so you know that was a unique line from file2 as opposed to using join:
$ join -a1 -a2 file1 file2
1 dog woof
2 cat meow
3 growl
4 cow mooh
5 bearwhere you can't tell a unique line from file1 (e.g. 5 bear) from a unique line from file2 (e.g. 3 growl).
|
I want to merge two files like How to merge two files based on the matching of two columns? but one file may not have all results. So for example
file1
1 dog
2 cat
3 fox
4 cowfile2
1 woof
2 meow
4 moohwanted output
1 dog woof
2 cat meow
3 fox
4 cow mooh | Merging files based on potentially incomplete keys |
Try this solution with two extra temporary files:
paste tmp1 tmp2 > tmp12
paste tmp4 tmp5 tmp6 > tmp456
paste -d "\n" tmp12 tmp3 tmp456 > tmp7This solution was based on the assumption that the -d option selects the delimiter globally for all input files so it either be a blank or a newline. In a way this is true since later occurences of -d overwrite previous ones. However, as @DigitalTrauma pointed out we can supply more than one delimiter which will be used sequentially. So @DigitalTrauma's solution is more elegant than mine since it completely avoids additional temporary files.
One niche application for my solution would be the case in which one or delimiters with more than one character each have to be used. This should not be possible with just using the -d option.
|
Here is the weak attempt at a paste command trying to include a newline:
paste -d -s tmp1 tmp2 \n tmp3 \n tmp4 tmp5 tmp6 > tmp7
Basically I have several lines in each tmp and I want the output to read
First(tmp1) Last(tmp2)
Address(tmp3)
City(tmp4) State(tmp5) Zip(tmp6)
Am I way off base with using a newline in the paste command?
Here is my finished product: THANK YOU FOR THE HELP!
cp phbook phbookh2p5
sed 's/\t/,/g' phbookh2p5 > tmp
sort -k2 -t ',' -d tmp > tmp0
cut -d',' -f1,2 tmp0 > tmp1
cut -d',' -f3 tmp0 > tmp2
cut -d',' -f4,5,6 tmp0 > tmp3
echo "" > tmp4 paste -d '\n' tmp1 tmp2 tmp3 tmp4 > tmp7 sed 's/\t/ /g' tmp7 > phbookh2p5 cat phbookh2p5 rm tmp*; rm phbookh2p5 | Trying to add a newline to the paste command |
The most likely answer is that your data file columns are not separated
by tabs, but by space, for example. You can verify this by running one of
them through cat -vet which shows real tabs as ^I.
To change your cut command to use space as a delimiter you need to
add the arg -d' ', but since you are already inside single quotes and an awk script
you need to change your sprintf(...) to
sprintf("<(cut -d\" \" -f4 %s)",$0) |
I have a huge amount of files having the following naming style:
WBM_MIROC_rcp8p5_mississippi.txt
WBM_GFDL_rcp8p5_nosoc_mississippi.txt
DBH_HADGEM_rcp4p5_co2_mississippi.txt
HMH_IPSL_rcp4p5_mississippi.txtThose files represent tables with (some of them have a tab delimiter and other one space delimiter) as follow:
YEAR MONTH DAY RES
1971 1 1 1988
1971 1 2 3829
...
I would like to group all the files having rcp8p5 in their name into one big table, and do the same for the files having rcp4p5 in their name. But I just want to paste the 4th column of each file in order to avoid the redundancy of the first three columns, which are always the same. I am currently using the following script:
ls |
awk -F_ '{ i=$1; m=$2; s=$3; u=$4;
if(f[s]=="")add = $0;
else add = sprintf("<(cut -f4 %s)",$0);
f[s] = f[s] " " add }
END{ for(insc in f)
printf "paste%s > out_%s.txt\n",f[insc],insc
}' |bashIt´s unclear why but the output is not as expected. I have the following output:
YEAR MONTH DAY RES YEAR MONTH DAY RES YEAR MONTH DAY RES
1971 1 1 187 1971 1 1 143 1971 1 1 234
1971 1 2 321 1971 1 2 398 1971 1 1 754
...Instead I would like to have the following output:
YEAR MONTH DAY RES RES RES
1971 1 1 187 143 234
1971 1 2 321 398 754It could be great if anyone is able to give me an hint!
| Build table - Add column depending on filenames |
If column-order is important, i.e. numbers from the same file should be kept in the same column, you need to add padding while reading the different files. Here is one way that works with GNU awk:
merge.awk
# Set k to be a shorthand for the key
{ k = $1 SUBSEP $2 }# First element with this key, add zeros to align it with other rows
!(k in h) {
for(i=1; i<=ARGIND-1; i++)
h[k] = h[k] OFS 0
}# Remember the data element
{ h[k] = h[k] OFS $3 }# Before moving to the next file, ensure that all rows are aligned
ENDFILE {
for(k in h) {
if(split(h[k], a) < ARGIND)
h[k] = h[k] OFS 0
}
}# Print out the collected data
END {
for(k in h) {
split(k, a, SUBSEP)
print a[1], a[2], h[k]
}
}Here are some test files: f1, f2, f3 and f4:
$ tail -n+1 f[1-4]
==> f1 <==
xyz desc1 21
uvw desc2 22
pqr desc3 23==> f2 <==
xyz desc1 56
uvw desc2 57==> f3 <==
xyz desc1 87
uvw desc2 88==> f4 <==
xyz desc1 11
uvw desc2 12
pqr desc3 13
stw desc1 14
arg desc2 15Test 1
awk -f merge.awk f[1-4] | column -tOutput:
pqr desc3 23 0 0 13
uvw desc2 22 57 88 12
stw desc1 0 0 0 14
arg desc2 0 0 0 15
xyz desc1 21 56 87 11Test 2
awk -f merge.awk f2 f3 f4 f1 | column -tOutput:
pqr desc3 0 0 13 23
uvw desc2 57 88 12 22
stw desc1 0 0 14 0
arg desc2 0 0 15 0
xyz desc1 56 87 11 21Edit:
If the output should be tab-separated, set the output field separator accordingly:
awk -f merge.awk OFS='\t' f[1-4] |
I would like to merge specific columns from two txt files containing varying number of rows, but same number of columns (as shown below):
file1:
xyz desc1 12
uvw desc2 55
pqr desc3 12 file2:
xyz desc1 56
uvw desc2 88 Preferred output:
xyz desc1 12 56
uvw desc2 55 88
pqr desc3 12 0Currently I use the paste command using awk as:
paste <(awk '{print $1}' file1) <(awk '{print $2}' file1) <(awk '{print $3}' file1) <(awk '{print $3}' file2) But this seems to merge only columns that overlap. Is there a way in awk to insert zeros instead of omitting the row itself?
I need to combine 100 files together such that my output file will contain 102 columns.
| UNIX paste columns and insert zeros for all missing values |
Since you haven’t asked for a 100% awk solution,
I’ll offer a hybrid that (a) may, arguably, be easier to understand,
and (b) doesn’t stress awk’s memory limits:
awk '
$1 == 2 { secondpart = 1 }
{ if (!secondpart) {
print > "top"
} else {
print $1, $2 > "left"
print $5, $6, $7, $8, $9 > "right"
}
}' a
(cat top; paste -d" " left b c right) > new_a
rm top left rightOr we can eliminate one of the temporary files and shorten the script by one command:
(awk '
$1 == 2 { secondpart = 1 }
{ if (!secondpart) {
print
} else {
print $1, $2 > "left"
print $5, $6, $7, $8, $9 > "right"
}
}' a; paste -d" " left b c right) > new_a
rm left rightThis will put some extra spaces at the ends of the lines of the output,
and it will lose data from file a if any line has more than nine fields (columns).
If those are issues, they can be fixed fairly easily.
|
I have a text file in the below format:
$data This is the experimental data
good data
This is good file
datafile
1 4324 3673 6.2e+11 7687 67576
2 3565 8768 8760 5780 8778 "This is line '2'"
3 7656 8793 -3e+11 7099 79909
4 8768 8965 8769 9879 0970
5 5878 9879 7.970e-1 9070 0709799
.
.
.
100000 3655 6868 97879 96879 69899
$.endfileI want to replace the data of the 3rd and 4th column from row '2' to '100000' with the data from two other text files which have one column of 99999 rows each.
How can I do this using awk, sed or any other unix command?
Note that the column delimiter is space.
The other two text files have 99999 lines each, and they are both in the following format:
12414
12421
36347
3.4e+3
-3.5e22
987983
.
.
.
87698 | Replace data at specific positions in txt file using data from another file |
Using awk:
awk '{printf ("'\'%s\'', ", $0)}' infile > new_file
'act', 'bat', 'cat', 'dog', 'eel',Or to avoid adding an extra comma at the end, use below instead.
awk '{printf S"'\''"$0"'\''";S=", "}'
'act', 'bat', 'cat', 'dog', 'eel'Or using paste without quoting.
paste -d, -s infile
act,bat,cat,dog,eelThen quote it with helping sed:
sed -r "s/(\w+)/'\1'/g" <(paste -d, -s infile)
'act','bat','cat','dog','eel' |
I want to take a file that has a list of words on each line of its own eg.
act
bat
cat
dog
eeland put them into one line with comma separated and quoting them. So that it turns out like:
'act', 'bat', 'cat', 'dog', 'eel',so, a single quote, then the word, then another single quote, then a comma, then a space. Then output to a new file with a new name.
| Combine list of words into one line, then add characters |
You wrote in your last block:
linux$ paste temp2 temp > temp2
You cannot do this. (Well you can, but it won't work.) What happens here is that the shell truncates temp2 ready to send output from the paste command. The paste temp2 temp command then runs - but by this stage temp2 is already zero length.
What you can do instead is this, which uses a third file to collect the output and then replaces your temp2 with its content. The && ensures that the content is only replaced if the paste "succeeded", and the rm -f removes the transient temp3 file if the mv wasn't triggered, or failed in some unexpected way.
paste temp2 temp > temp3 && mv -f temp3 temp2
rm -f temp3 |
I have two files that I am trying to merge, one file is:
linux$ cat temp2
linear_0_A_B linear_0_B_A
103027.244444 102714.177778
103464.311111 102876.266667
103687.422222 103072.711111
103533.244444 102967.733333
103545.066667 102916.933333
103621.555556 103027.511111
104255.776536 103006.256983
103712.178771 102877.139665
103817.555556 103198.488889
103701.422222 103133.200000And the other file is:
linux$ cat temp
linear_1_A_B linear_1_B_A
118620.444444 109212.355556
108408.488889 105744.444444
108136.311111 105174.933333
108627.688889 105390.044444
108356.577778 105412.888889
108559.204420 105667.933702
108469.392265 105547.314917
109032.044693 105497.698324
108925.866667 105986.222222
107975.733333 105070.000000I want to paste the columns in temp into temp2, and retain the temp2 file like this:
linux$ paste temp2 temp
linear_0_A_B linear_0_B_A linear_1_A_B linear_1_B_A
103027.244444 102714.177778 118620.444444 109212.355556
103464.311111 102876.266667 108408.488889 105744.444444
103687.422222 103072.711111 108136.311111 105174.933333
103533.244444 102967.733333 108627.688889 105390.044444
103545.066667 102916.933333 108356.577778 105412.888889
103621.555556 103027.511111 108559.204420 105667.933702
104255.776536 103006.256983 108469.392265 105547.314917
103712.178771 102877.139665 109032.044693 105497.698324
103817.555556 103198.488889 108925.866667 105986.222222
103701.422222 103133.200000 107975.733333 105070.000000But when I do standard output, and display temp2, the result is not the same.
linux$ paste temp2 temp > temp2
linux$ cat temp2
linear_1_A_B linear_1_B_A
118620.444444 109212.355556
108408.488889 105744.444444
108136.311111 105174.933333
108627.688889 105390.044444
108356.577778 105412.888889
108559.204420 105667.933702
108469.392265 105547.314917
109032.044693 105497.698324
108925.866667 105986.222222
107975.733333 105070.000000How to resolve??
| Problem with paste and standard output in linux |
$ paste prices fruits | sort -k2 | cut -f1
1.01
2.18
4.11
4.52
1.73
1.69
1.09paste combines the two files, line by line. sort -k2 sorts them on the second column (the fruit name). cut -f1 returns just the first column (the prices).
For the above, I assumed that the line numbers shown in the display of the fruits and prices files were an artifact of the display software and not part of the actual files.
|
So I have:
$ cat fruits
2 bananas
3 cherries
4 figs
5 dates
6 elderberries
7 apples
8 grapesand
1 $ cat prices
2 2.18
3 4.11
4 1.69
5 4.52
6 1.73
7 1.01
8 1.09
Every line from 'fruits' corresponds with the same line from 'prices'. How can I sort the fruits in alphabetical order using cut 'n' paste, so that 'prices' looks like, or just prints out, the following:
1 1.01
2 2.18
3 4.11
4 4.52
5 1.73
6 1.69
7 1.09 | Cut and Paste Commands |
If I correctly understand the question, you could try with pr:
cut -f 5 "${files[@]}" | pr -5 -s' ' -t -l 40where -5 is the number of columns, -s' ' is the separator (space) and -l 40 is the page length (40 lines).Without coreutils, one could use split to create pieces of N lines:
split -lN infile
or
some_command | split -lN
and then paste them together:
paste x* > outfile
rm x* |
I can create a file with multiple columns from a single-column input via paste like this:
some_command | paste - -This works when some_command produces the data in column-major format. In other words, the input
1
2
3
4results in the output
1 2
3 4However, I want the opposite, i.e. I want
1 3
2 4Background: I want to collect all N’th columns from M files into one aggregate file. I tried doing this via:
cut -f 5 "${files[@]}" | paste - - - - - …(with M -s). But as mentioned before, this doesn’t work, as paste expects column-major input. I can’t help but think that there should be a coreutils (or pure Bash) solution for this.
| Use paste with row-major input |
Did some further tests with different alternatives and scenarios
(Edit: cutoff values for compiled version supplemented)
tl;dr:
yes, coreutils paste is far slower than cat
there seems to be no easily available alternative that is uniformly faster than coreutils paste, in particular not for lots of short lines
paste is amazingly stable in throughput across different combinations of line length, number of lines and number of files
for longer lines, faster alternatives are provided below

In Detail:
I tested quite a number of scenarios. Throughput measurement was done using pv as in the original post.
Compared Programs:
1. cat (from GNU coreutils 8.25, used as baseline)
2. paste (also from GNU coreutils 8.25)
3. python script from answer above
4. alternative python script (replacing list comprehension for collecting line fragments by regular loop)
5. nim program (akin to 4. but compiled executable)

File / Line number combinations:

#   columns        lines
1   200,000        1,000
2    20,000       10,000
3     2,000      100,000
4       200    1,000,000
5        20   10,000,000
6         2  100,000,000
Above combinations were distributed across 1, 10, 100, 1'000, and 10'000 equally sized files as far as possible.
Files were generated like: yes {000001..200000} | head -1000 > 1
Pasting was done like: for i in cat paste ./paste ./paste2 ./paste3; do $i {00001..1000} | pv > /dev/null; done
However, the files pasted were actually all links to the same original file, so all data should be in cache anyway (created directly before pasting and read with cat first; system memory is 128GB, cache size 34GB)
An additional set was run, where data were created on the fly instead of reading from pre-created files and piped into paste (denoted below with number of files=0).
For the last set the command was like for i in cat paste ./paste ./paste2 ./paste3; do $i <(yes {000001..200000} | head -1000) | pv > /dev/null; done
Findings:
paste is an order of magnitude slower than cat
paste's throughput is extremely consistent (~300MB/s) across a wide range of line widths, and number of files involved.
Home grown python alternatives can show some advantage as soon as average input file line length is above a certain limit (~1400 characters on my test machine).
Compiled nim version has about double throughput compared to python scripts. Point of break even in comparison with paste is ~500 characters for one input file. This decreases with growing number of input files, down to ~150 characters per input file line as soon as at least 10 input files are involved.
Both the python and nim versions suffer from processing overhead for many short lines (suspected reason: in both, the stdlib functions used try to detect line endings and convert them to platform-specific endings). coreutils paste, however, is not affected.
Seemingly the simultaneous on-the-fly data generation process was the limiting factor for cat, as well as for the nim version with longer lines, and also affected processing speed to some extent.
At some point the multitude of open file handles seems to have a detrimental impact even on coreutils paste. (Just speculating: Maybe this could even affect the parallel version?)Conclusion (at least for test machine)If input files are narrow use coreutils paste, in particular when files are very long.
If input files are rather wide, prefer alternative (Input file line length > 1400 characters for python versions, 150-500 characters for nim version depending on number of input files).
Generally prefer compiled nim version over python scripts.
Beware of too many small fragments. The default soft limit of 1024 open files for a process seems quite reasonable in this context.Suggestion for OP's situation (parallel processing)
If input files are narrow, use coreutils paste in the inner jobs and compiled alternative for the outermost process. If all files have long lines use nim version generally.
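A minimal sketch of that layering with plain shell process substitution (group names are illustrative; the outer paste is the piece you would swap for the compiled alternative when lines are long):
paste -d, <(paste -d, group1_*) \
          <(paste -d, group2_*) \
          <(paste -d, group3_*) > combined.out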
Caveat: the linked programs are ad-hoc versions, provided as they are, without any guarantees and without explicit error handling. Also, the separator is hard-coded in all three implementations.
|
paste is a brilliant tool, but it is dead slow: I get around 50 MB/s on my server when running:
paste -d, file1 file2 ... file10000 | pv >/dev/nullpaste is using 100% CPU according to top, so it is not limited by, say, a slow disk.
Looking at the source code it is probably because it uses getc:
while (chr != EOF)
{
sometodo = true;
if (chr == line_delim)
break;
xputchar (chr);
chr = getc (fileptr[i]);
err = errno;
}Is there another tool that does the same, but which is faster? Maybe by reading 4k-64k blocks at a time? Maybe by using vector instructions for finding the newline in parallel instead of looking at a single byte at a time? Maybe using awk or similar?
The input files are UTF8 and so big they do not fit in RAM, so reading everything into memory is not an option.
Edit:
thanasisp suggests running jobs in parallel. That improves throughput slightly, but it is still a magnitude slower than pure pv:
# Baseline
$ pv file* | head -c 10G >/dev/null
10.0GiB 0:00:11 [ 897MiB/s] [> ] 3% # Paste all files at once
$ paste -d, file* | pv | head -c 1G >/dev/null
1.00GiB 0:00:21 [48.5MiB/s] [ <=> ]# Paste 11% at a time in parallel, and finally paste these
$ paste -d, <(paste -d, file1*) <(paste -d, file2*) <(paste -d, file3*) \
<(paste -d, file4*) <(paste -d, file5*) <(paste -d, file6*) \
<(paste -d, file7*) <(paste -d, file8*) <(paste -d, file9*) |
pv | head -c 1G > /dev/null
1.00GiB 0:00:14 [69.2MiB/s] [ <=> ]top still shows that it is the outer paste that is the bottleneck.
I tested if increasing the buffer made a difference:
$ stdbuf -i8191 -o8191 paste -d, <(paste -d, file1?) <(paste -d, file2?) <(paste -d, file3?) <(paste -d, file4?) <(paste -d, file5?) <(paste -d, file6?) <(paste -d, file7?) <(paste -d, file8?) <(paste -d, file9?) | pv | head -c 1G > /dev/null
1.00GiB 0:00:12 [80.8MiB/s] [ <=> ]This increased throughput 10%. Increasing the buffer further gave no improvement. This is likely hardware dependent (i.e. it may be due to the size of level 1 CPU cache).
Tests are run in a RAM disk to avoid limitations related to the disk subsystem.
| Fast version of paste |
Managed to solve my own problem by simply assigning the specific line and column as a variable, and concatenating them using echo, simple when you know the answer!
#!/bin/bash
cd FREQ/HF
rm Hessian.log
for i in *.out
do
grep -H -A16 "Force Constants (Second Derivatives of the Energy)" $i | tail -n +1 >> Hessian.tmpx=`awk ' NR == 2 {printf " "" %10s %10s %10s %10s %10s \n", $2,$3,$4,$5,$6}' Hessian.tmp`
y=`awk ' NR == 12 {printf "%10s %10s %10s %10s \n", $2,$3,$4,$5}' Hessian.tmp`
a=`awk ' NR == 8 { printf "%5s %10s %10s %10s %10s %10s\n", $2,$3,$4,$5,$6,$7} ' Hessian.tmp`
b=`awk ' NR == 9 { printf "%5s %10s %10s %10s %10s %10s\n", $2,$3,$4,$5,$6,$7} ' Hessian.tmp`
c=`awk ' NR == 10 { printf "%5s %10s %10s %10s %10s %10s\n", $2,$3,$4,$5,$6,$7} ' Hessian.tmp`
d=`awk ' NR == 11 { printf "%5s %10s %10s %10s %10s %10s\n", $2, $3,$4,$5,$6,$7} ' Hessian.tmp`
e=`awk ' NR == 13 { printf "%10s", $3} ' Hessian.tmp`
f=`awk ' NR == 14 { printf "%10s %10s", $3, $4} ' Hessian.tmp`
g=`awk ' NR == 15 { printf "%10s %10s %10s", $3, $4,$5} ' Hessian.tmp`
h=`awk ' NR == 16 { printf "%10s %10s %10s %10s", $3, $4, $5,$6} ' Hessian.tmp`echo "$x $y" >> Hessian.log
awk '
NR == 3, NR == 7 {printf "%5s %10s %10s %10s %10s %10s\n", $2,$3,$4,$5,$6,$7} ' Hessian.tmp >> Hessian.log
echo "$a $e" >> Hessian.log
echo "$b $f" >> Hessian.log
echo "$c $g" >> Hessian.log
echo "$d $h" >> Hessian.log
rm Hessian.tmp
echo "" >> Hessian.log
done |
The original output file contained this block of text among much more information:
Projecting out rotations and translations Force Constants (Second Derivatives of the Energy) in [a.u.]
GX1 GY1 GZ1 GX2 GY2
GX1 0.6941232
GY1 0.0187624 0.0156533
GZ1 -0.1175495 -0.0980708 0.6144300
GX2 -0.6074291 -0.0036667 0.0229726 0.6228918
GY2 0.0069881 -0.0013581 0.0085087 0.0023190 0.0014047
GZ2 -0.0437815 0.0085087 -0.0533084 -0.0145287 -0.0088007
GX3 -0.0866941 -0.0150957 0.0945769 -0.0154627 -0.0093070
GY3 -0.0257505 -0.0142952 0.0895621 0.0013477 -0.0000466
GZ3 0.1613309 0.0895621 -0.5611216 -0.0084438 0.0002920
GZ2 GX3 GY3 GZ3
GZ2 0.0551377
GX3 0.0583102 0.1021568
GY3 0.0002920 0.0244027 0.0143418
GZ3 -0.0018293 -0.1528871 -0.0898540 0.5629509So far I have managed to isolate the data I need along with the relevant headings, and print this to a log file using [grep] and [awk] (below):
#!/bin/bashrm Hessian.logfor i in *.out
do
grep -H -A16 "Force Constants (Second Derivatives of the Energy)" $i | tail -n +1 | awk ' NR == 2 {printf " "" %10s %10s %10s %10s %10s \n", $2,$3,$4,$5,$6} NR == 3, NR == 11 {printf "%5s %10s %10s %10s %10s %10s\n", $2,$3,$4,$5,$6,$7} ' >> Hessian.log
echo "" >> Hessian.log
doneWhich produces:
GX1 GY1 GZ1 GX2 GY2
GX1 0.6941232
GY1 0.0187624 0.0156533
GZ1 -0.1175495 -0.0980708 0.6144300
GX2 -0.6074291 -0.0036667 0.0229726 0.6228918
GY2 0.0069881 -0.0013581 0.0085087 0.0023190 0.0014047
GZ2 -0.0437815 0.0085087 -0.0533084 -0.0145287 -0.0088007
GX3 -0.0866941 -0.0150957 0.0945769 -0.0154627 -0.0093070
GY3 -0.0257505 -0.0142952 0.0895621 0.0013477 -0.0000466
GZ3 0.1613309 0.0895621 -0.5611216 -0.0084438 0.0002920
GZ2 GX3 GY3 GZ3
GZ2 0.0551377
GX3 0.0583102 0.1021568
GY3 0.0002920 0.0244027 0.0143418
GZ3 -0.0018293 -0.1528871 -0.0898540 0.5629509However, I am trying to move the last four lines so that they sit in columns next to the data above, with their respective headings (GZ2, GX3, GY3, GZ3) in the same row as the other headings. To put it simply, the resulting output should be a 9*9 matrix of data with labels for each column and row (as shown below).
GX1 GY1 GZ1 GX2 GY2 GZ2 GX3 GY3 GZ3
GX1 0.6941232
GY1 0.0187624 0.0156533
GZ1 -0.1175495 -0.0980708 0.6144300
GX2 -0.6074291 -0.0036667 0.0229726 0.6228918
GY2 0.0069881 -0.0013581 0.0085087 0.0023190 0.001404
GZ2 -0.0437815 0.0085087 -0.0533084 -0.0145287 -0.0088007 0.0551377
GX3 -0.0866941 -0.0150957 0.0945769 -0.0154627 -0.0093070 0.0583102 0.1021568
GY3 -0.0257505 -0.0142952 0.0895621 0.0013477 -0.0000466 0.0002920 0.0244027 0.0143418
GZ3 0.1613309 0.0895621 -0.5611216 -0.0084438 0.0002920 -0.0018293 -0.1528871 -0.0898540 0.5629509 | How do I append text from one line, to the end of another? |