Short answer: Use discard mount options when mounting file systems or turning on swap created on the Zram devices.
Extended: When mounting a file system, use discard as a mount option. You can set mount options with -o, separating multiple options with a comma (no spaces). It should be supported on most Linux file systems; I use it on Btrfs. For swap, pass -d to swapon. In addition to this, you could periodically run fstrim on the directory where the file system is mounted, but from what I've seen in the output of zramctl this isn't necessary and the discard mount option is good enough.
Edit: Actually, after some further testing I think it's a good idea to periodically run fstrim on the Zram mount. After compiling Firefox with its build directory in Zram, there was about 1.1GB of RAM usage. Not nearly as bad as without the discard mount option, but there is room for improvement. Running fstrim on the Zram mount (which only took a couple of seconds) brought the RAM usage down to 400MB, which is normal. I'd probably put it in a cron job or run it after a Portage compile.
Explanation: When files are removed, Zram doesn't free the corresponding compressed pages in memory because it isn't notified that the space is no longer used for data. The discard mount option issues a discard when a file is removed, so Zram is notified about the unused pages and shrinks accordingly.
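For reference, a minimal sketch of the commands described above; the device names and mount point are placeholders, not values from the original setup:
# mount a file system that lives on a zram device, with discard enabled
mount -o discard /dev/zram0 /mnt/zram
# enable swap on a zram device with discard requests turned on
swapon -d /dev/zram1
# occasionally trim the mounted file system to return freed pages to zram
fstrim /mnt/zram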
|
I do not think that this is Linux's disk cache. In htop, the memory bar is green (not orange for cache) and I removed the files stored in zram. No processes seem to be using a lot of memory.
The load was compiling software with its build files stored in zram (PORTAGE_TMPDIR which is /var/tmp/portage in Gentoo), with swapfile on zram too. It had zram writeback configured so that it would write to disk if there is not much RAM left.
I compiled two packages. After the first one, about half the memory still appeared to be used, even though zramctl said the total data stored was near 0G, no process was using much memory, and the Linux disk cache wasn't the issue.
With kswapd continuously at 100% CPU utilization, the kernel OOM-killed the process that was consuming too much RAM. After this there was still RAM in use, but nothing I could find was using it. If it were disk cache, the kernel would have handed the space over to the memory-hungry process. But it didn't, so this is most likely NOT a disk cache issue. I rebooted the computer and the second package then compiled quickly without an issue!
Does anyone know what could be the case, is there any way I could further identify what is using the memory?
| After a heavy I/O load, and storing many things in Zram, used space is close to total in `free` |
I had miscalculated total_vm; the OOM report is correct. app has allocated 59739 pages, which is about 233MB, so this is the real cause of the OOM.
|
I have a small Linux system with 256MB of RAM, and I'm a bit confused about where the RAM may be going. It is running an old Linux kernel, 2.6.38, and I'm not able to upgrade it (it's a specific ARM board).
SHM and all tmpfs-mounted filesystems are almost empty (shmem: 448kB).
Everything is consumed by active_anon pages, but the running processes don't correspond with this: the sum of total_vm is just 90MB, and that even includes duplicates, shared memory, unallocated memory...
But active_anon is reported as 235MB. Why? How can I resolve this problem? Is there some memory leak in the kernel?
Here is the relevant dmesg output:
Mem-info:
Normal per-cpu:
CPU 0: hi: 90, btch: 15 usd: 14
active_anon:60256 inactive_anon:67 isolated_anon:0
active_file:0 inactive_file:185 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
free:507 slab_reclaimable:120 slab_unreclaimable:463
mapped:108 shmem:112 pagetables:217 bounce:0
Normal free:2028kB min:2036kB low:2544kB high:3052kB active_anon:241024kB inactive_anon:268kB active_file:0kB inactive_file:740kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:260096kB mlocked:0k
lowmem_reserve[]: 0 0
Normal: 37*4kB 139*8kB 42*16kB 1*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2028kB
305 total pagecache pages
65536 pages of RAM
622 free pages
1976 reserved pages
404 slab pages
393 pages shared
0 pages swap cached
[ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
[ 713] 0 713 666 40 0 0 0 busybox
[ 719] 0 719 634 18 0 0 0 busybox
[ 725] 0 725 634 15 0 0 0 busybox
[ 740] 0 740 654 19 0 0 0 inetd
[ 752] 0 752 634 17 0 0 0 ifplugd
[ 761] 0 761 634 21 0 0 0 busybox
[ 790] 0 790 4297 110 0 0 0 app
[ 792] 0 792 635 15 0 0 0 getty
[ 812] 0 812 634 16 0 0 0 exe
[ 849] 101 849 630 57 0 0 0 lighttpd
[ 850] 101 850 3005 218 0 0 0 php-cgi
[ 851] 101 851 3005 218 0 0 0 php-cgi
[ 3172] 0 3172 72156 59739 0 0 0 app
[ 3193] 0 3193 675 23 0 0 0 ntpd
[ 4003] 0 4003 634 15 0 0 0 ntpd_prog
[ 4004] 0 4004 634 15 0 0 0 hwclock
[ 4005] 0 4005 634 20 0 0 0 hwclock
Out of memory: Kill process 3172 (app) score 912 or sacrifice child
Killed process 3172 (app) total-vm:288624kB, anon-rss:238684kB, file-rss:272kB
Here is a list of mounted filesystems. The root filesystem is read/write YAFFS2 on MTD flash.
rootfs on / type rootfs (rw)
/dev/root on / type yaffs2 (rw,relatime)
none on /proc type proc (rw,relatime)
none on /sys type sysfs (rw,relatime)
mdev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
none on /proc/bus/usb type usbfs (rw,relatime)
none on /dev/pts type devpts (rw,relatime,mode=622)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
none on /tmp type tmpfs (rw,relatime,size=102400k,mode=777)
none on /run type tmpfs (rw,relatime,size=10240k,mode=755)
| Embedded Linux OOM - help with lost RAM |
You could perhaps use madvise(2)’s MADV_FREE for this — it marks pages as available for reclaim, but doesn’t necessarily drop them immediately, and the data can be read back. You’ll know the pages are gone if you get all zeroes back (per page).
|
I'm asking specifically about Linux, but an answer that applies to Unix in general (i.e. POSIX or similar) would be even better, obviously.
Linux uses free memory (i.e. that memory which is not yet allocated to processes) for caching filesystem metadata (and maybe other things). When processes request additional memory, these caches are shrunk to make room.
My question: Is there a method by which an application can allocate memory that serves as a cache only? That is, the allocation is made knowing that the kernel is allowed to seize control of this memory area in some way when available memory runs low and other processes' memory allocations could otherwise not be served.
| Can a process allocate cache memory such that the kernel can seize it when necessary? |
"It's getting unaffordable to develop Linux": I am afraid it has always been.
32GB RAM is common on kernel devs desktops.
And yet some of them started encountering ooms when building their allyesconfig-ed kernel.
Lucky you… who are apparently not allyesconfig-ing… you should not need more than 32G… ;-)
On a side note, since CONFIG_HAVE_OBJTOOL=y appears in your .config file, you might get some benefit from the patches submitted as part of the discussion linked above.
"Does anyone have a freaking clue on why the compilation eats up so much RAM?" You are probably the only one who could tell precisely, after considering the size of the miscellaneous *.o files you should be able to find in each top-level directory of the kernel source tree (since the compilation itself completed successfully).
From the information you provide (the kernel .config file) I can only venture a priori:
A/ every component of your kernel will be statically linked:
(since I notice that all your selected OPTION_* entries are marked "=y")
There is nothing wrong with this per se, since there can be many good reasons for building everything in-kernel, but it will definitely and significantly increase the RAM needed when linking all of this together.
=> You probably should consider building kernel parts as modules wherever possible.
B/ a good amount of CONFIG_DEBUG appear set.
Once again there is nothing wrong with that per se, but it is likely to significantly increase the RAM needed to link the different parts, all the more so since it implies CONFIG_KALLSYMS_*=y.
On a side note, considering the debugging features selected, together with CONFIG_HZ_100=y, I assume that you are not chasing the best possible latencies/performance.
=> I would then consider preferring CONFIG_CC_OPTIMIZE_FOR_SIZE
|
I am trying to compile the mainline Linux kernel with a custom config. This one!
Running on a 64 bit system.
At the last step, when linking the Kernel, it fails because it goes OOM (error 137).
[...]
DESCEND objtool
INSTALL libsubcmd_headers
CALL scripts/checksyscalls.sh
LD vmlinux.o
Killed
make[2]: *** [scripts/Makefile.vmlinux_o:61: vmlinux.o] Error 137
make[2]: *** Deleting file 'vmlinux.o'
[...]
ulimit -a says that per-process memory is unlimited.
I have tried make, make -j1, and make -j4; no difference whatsoever.
Same results with gcc as compiler instead of clang.
Does anyone have a freaking clue on why the compilation eats up so much RAM? It's getting unaffordable to develop Linux :\
| Linux build with custom config using all RAM (8GB)? |
The solution that "fra-san" gave in the comments here fit perfectly. Using the "cgroup-tools" package I was able to limit Chrome's memory usage successfully. I tested it by opening dozens of tabs at the same time and I could see the memory limit in action. However, I had to leave my script running, since, even though I mostly use Chrome, the system cache consumes a lot of RAM too.
My steps:
1- Used this script from: Limit memory usage for a single Linux process
#!/bin/sh

# This script uses commands from the cgroup-tools package. The cgroup-tools commands access the cgroup filesystem directly which is against the (new-ish) kernel's requirement that cgroups are managed by a single entity (which usually will be systemd). Additionally there is a v2 cgroup api in development which will probably replace the existing api at some point. So expect this script to break in the future. The correct way forward would be to use systemd's apis to create the cgroups, but afaik systemd currently (feb 2018) only exposes dbus apis for which there are no command line tools yet, and I didn't feel like writing those.

# strict mode: error if commands fail or if unset variables are used
set -eu

if [ "$#" -lt 2 ]
then
    echo Usage: `basename $0` "<limit> <command>..."
    echo or: `basename $0` "<memlimit> -s <swaplimit> <command>..."
    exit 1
fi

cgname="limitmem_$$"

# parse command line args and find limits
limit="$1"
swaplimit="$limit"
shift

if [ "$1" = "-s" ]
then
    shift
    swaplimit="$1"
    shift
fi

if [ "$1" = -- ]
then
    shift
fi

if [ "$limit" = "$swaplimit" ]
then
    memsw=0
    echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
else
    memsw=1
    echo "limiting memory to $limit and total virtual memory to $swaplimit (cgroup $cgname) for command $@" >&2
fi

# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d\  -f2`

# try also limiting swap usage, but this fails if the system has no swap
if sudo cgset -r memory.memsw.limit_in_bytes="$swaplimit" "$cgname"
then
    bytes_swap_limit=`cgget -g "memory:$cgname" | grep memory.memsw.limit_in_bytes | cut -d\  -f2`
else
    echo "failed to limit swap"
    memsw=0
fi

2- Named it as "limitmem" and copied it to /usr/bin/ so I could call it from the terminal just with limitmem. Now I can open a process limiting the memory usage to, for example, 800MB using this syntax:
limitmem 800M command
In my case: limitmem 1000M google-chrome --password-store=basic --aggressive-cache-discard --aggressive-tab-discard
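On systemd-based systems, a similar effect can probably be achieved without cgroup-tools by running the command in a transient scope; this is only a sketch (MemoryMax= needs the cgroup memory controller available to systemd, and older setups may need MemoryLimit= instead):
# run Chrome in a transient user scope capped at 1000M of memory
systemd-run --user --scope -p MemoryMax=1000M \
    google-chrome --password-store=basic --aggressive-cache-discard --aggressive-tab-discard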
|
Edit 1:
The freezing just happened and I was able recover from it. Log(syslog) from the freezing until 'now': https://ufile.io/ivred
Edit 2: It seems a bug/problem with GDM3. I'll try Xubuntu.
Edit 3: Now I'm using Xubuntu. The problem still happens, but a lot less often. So.. it is indeed a memory issue.
I'm currently using Ubuntu 18.10 Live CD since my HD died. I did some customizations to my LiveCD mainly towards memory consumption, because I have only 4GB of RAM.
When my free memory goes below 100MB, my pendrive LED starts to blink like crazy and the system freezes, leaving me just enough time to get out of the GUI (Ctrl+Alt+F1...F12) and reboot (Ctrl+Alt+Del) or, sometimes, to close Google Chrome with sudo killall chrome.
So I created a very simple script to clean the system cache and close Google Chrome. Closing Chrome out of the blue like that is fine, since it asks you to recover the tabs when it wasn't closed properly.
The question: It works like a charm 95% of the time. I don't know whether my script is too simple or there is another reason for this intermittent freezing, since I can't check the log because of the need to reboot. Is there a more efficient way to do this? Am I doing it wrong?
Obs.: I have another script that cleans the cache every 15 minutes. Since I created those scripts I've been able to use my LiveCD every day with almost no freezing, maybe one per day. Before that I had to reboot every 30-40 minutes, because I use Chrome with several tabs.
My script:
#!/bin/bash

while true ; do
    free=`free -m | grep Mem | awk '{print $4}'`
    if [ "$free" -gt 0 ]
    then
        if [ $free -le 120 ]; # When my free memory goes below 120MB, run the commands below.
        then
            if pgrep -x "chrome" > /dev/null
            then
                sudo killall -9 chrome
                sudo su xubuntu
                /usr/bin/google-chrome-stable --password-store=basic --aggressive-cache-discard --aggressive-tab-discard
            else
                echo "Stopped"
            fi
            sudo sysctl -w vm.drop_caches=3
            sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
        fi
    fi &
    sleep 1
done
| Shell-script to periodically free some memory up with Ubuntu 18.10 LiveCD with only 4GB of RAM. Need some improvement |
You've asked two questions.
1) If the OOM Killer runs + you have no swapping, this likely relates to your vm.swappiness setting. Try setting this to 1. On your antiquated + highly hackable kernel (shudder), setting it to 0 (as I recall) disables swapping completely, which likely isn't what you're after.
2) Determining your leaking program might be as easy as running ps auxww repeatedly and looking for constantly increasing RSS values or some other metric.
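As a rough sketch of both suggestions; the interval, output file and the 20-process cut-off are arbitrary choices, not values from the original system:
# apply the suggested vm.swappiness value (persist it in /etc/sysctl.conf if it helps)
sysctl -w vm.swappiness=1
# snapshot the biggest RSS consumers once a minute; diff the snapshots later to spot steady growth
while true; do
    { date; ps auxww --sort=-rss | head -n 20; } >> /var/tmp/rss.log
    sleep 60
done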
All this said...
Your Kernel is very old. PHP is capped at 5.3 (highly hackable). OpenSSL is buggy. Many related libraries are old + may be the source of memory leaks.
It's likely best to upgrade to a recent distro. A simple upgrade may install more recent code which addresses your memory leakage.
|
We build a system that's intended to be on all the time - it collects and displays graphs of data. If we leave it without changing anything for long enough, we end up with an oom-killer event. That kills our main process (it's got the high oom-score) and our software gets restarted.
Basics: The system is CentOS 6, kernel is 2.6.32.26. The system has 2G of ram and 4G of swap. The application is written in C++ w/Qt 3.
I've set a cron job to grab the contents of /proc/meminfo and /proc/slabinfo every minute. Here are the traces I find most interesting from the meminfo data (the most recent oom-killer is on the right side of the graph):
Note that SUnreclaim grows until the oom-killer hits. The change in slope of SUnreclaim is where I switched displays.
Here are some interesting traces from the slabinfo data:
What this looks like to me is that something is leaking or fragmenting. Whatever it is does seem to get cleaned up when my processes die, but I honestly have no idea what's going on here.
How do I figure out what's leaking?
Updated:
Early on in this process, I started with ps output (not shown here). All of our processes' RSS values ramp up quickly to their 'normal' level and then stay put. If this were a process running away with normal memory, I wouldn't need assistance. Instead, something we're doing is causing unswappable memory to be allocated.
As to the upgrade suggestion: The codebase has a lot of dependencies on old libraries, and I can't make a transition to even a 3 series kernel right now.
| Debugging Linux oom-killer - little to no swap use |
Ansible should definitely not be using that much memory. Could you elaborate on the jobs you are running? (How many, what they are doing, the modules used, examples, etc.) I see Firefox getting killed in there; are you doing a lot with Firefox too?
|
Running ArchLinux
uname -a:
Linux localhost 4.7.2-1-ARCH #1 SMP PREEMPT Sat Aug 20 23:02:56 CEST 2016 x86_64 GNU/Linux
16GB of RAM
14GB of swap
When I run big ansible jobs, it triggers my oom-killer. I would think 16gb is enough to run such jobs but I'm no oom log expert (or linux memory expert for that matter), here are the logs:
Feb 14 11:35:36 localhost kernel: Out of memory: Kill process 22698 (systemd-coredum) score 503 or sacrifice child
Feb 14 11:35:36 localhost kernel: Killed process 22698 (systemd-coredum) total-vm:880316kB, anon-rss:37604kB, file-rss:67380kB, shmem-rss:0kB
Feb 14 11:42:52 localhost kernel: ansible invoked oom-killer: gfp_mask=0x27000c0(GFP_KERNEL_ACCOUNT|__GFP_NOTRACK), order=2, oom_score_adj=0
Feb 14 11:42:52 localhost kernel: ansible cpuset=/ mems_allowed=0
Feb 14 11:42:52 localhost kernel: CPU: 0 PID: 27123 Comm: ansible Not tainted 4.7.2-1-ARCH #1
Feb 14 11:42:52 localhost kernel: Hardware name: Dell Inc. OptiPlex 7020/08WKV3, BIOS A02 11/20/2014
Feb 14 11:42:52 localhost kernel: 0000000000000286 00000000a544d0e1 ffff8803b3147b48 ffffffff812eb132
Feb 14 11:42:52 localhost kernel: ffff8803b3147d28 ffff88024193f000 ffff8803b3147bb8 ffffffff811f6e5c
Feb 14 11:42:52 localhost kernel: ffff8803b3148000 0000000000000000 ffffffff81b28920 ffffffff811789c0
Feb 14 11:42:52 localhost kernel: Call Trace:
Feb 14 11:42:52 localhost kernel: [<ffffffff812eb132>] dump_stack+0x63/0x81
Feb 14 11:42:52 localhost kernel: [<ffffffff811f6e5c>] dump_header+0x60/0x1e8
Feb 14 11:42:52 localhost kernel: [<ffffffff811789c0>] ? page_alloc_cpu_notify+0x50/0x50
Feb 14 11:42:52 localhost kernel: [<ffffffff811762fa>] oom_kill_process+0x22a/0x440
Feb 14 11:42:52 localhost kernel: [<ffffffff8117696a>] out_of_memory+0x40a/0x4b0
Feb 14 11:42:52 localhost kernel: [<ffffffff812ffe08>] ? find_next_bit+0x18/0x20
Feb 14 11:42:52 localhost kernel: [<ffffffff8117c05b>] __alloc_pages_nodemask+0xf0b/0xf30
Feb 14 11:42:52 localhost kernel: [<ffffffff8117c3d4>] alloc_kmem_pages_node+0x54/0xd0
Feb 14 11:42:52 localhost kernel: [<ffffffff81077c06>] copy_process.part.8+0x136/0x19a0
Feb 14 11:42:52 localhost kernel: [<ffffffff811a974a>] ? handle_mm_fault+0xa7a/0x1f60
Feb 14 11:42:52 localhost kernel: [<ffffffff81079647>] _do_fork+0xd7/0x3d0
Feb 14 11:42:52 localhost kernel: [<ffffffff810655f5>] ? __do_page_fault+0x1f5/0x510
Feb 14 11:42:52 localhost kernel: [<ffffffff810799e9>] SyS_clone+0x19/0x20
Feb 14 11:42:52 localhost kernel: [<ffffffff81003c07>] do_syscall_64+0x57/0xb0
Feb 14 11:42:52 localhost kernel: [<ffffffff815de861>] entry_SYSCALL64_slow_path+0x25/0x25
Feb 14 11:42:52 localhost kernel: Mem-Info:
Feb 14 11:42:52 localhost kernel: active_anon:548787 inactive_anon:232682 isolated_anon:0
active_file:28394 inactive_file:24931 isolated_file:8
unevictable:0 dirty:1 writeback:0 unstable:0
slab_reclaimable:1897009 slab_unreclaimable:19547
mapped:51240 shmem:28342 pagetables:20339 bounce:0
free:1284106 free_pcp:446 free_cma:0
Feb 14 11:42:52 localhost kernel: Node 0 DMA free:15628kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15984kB managed:15900k
Feb 14 11:42:52 localhost kernel: lowmem_reserve[]: 0 3468 15978 15978
Feb 14 11:42:52 localhost kernel: Node 0 DMA32 free:1221320kB min:14632kB low:18288kB high:21944kB active_anon:274224kB inactive_anon:273556kB active_file:40556kB inactive_file:36556kB unevictable:0kB isolated(anon):0kB isolated(file):32k
Feb 14 11:42:52 localhost kernel: lowmem_reserve[]: 0 0 12510 12510
Feb 14 11:42:52 localhost kernel: Node 0 Normal free:3899476kB min:52884kB low:66104kB high:79324kB active_anon:1920924kB inactive_anon:657172kB active_file:73020kB inactive_file:63168kB unevictable:0kB isolated(anon):0kB isolated(file):0
Feb 14 11:42:52 localhost kernel: lowmem_reserve[]: 0 0 0 0
Feb 14 11:42:52 localhost kernel: Node 0 DMA: 1*4kB (U) 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (ME) = 15628kB
Feb 14 11:42:52 localhost kernel: Node 0 DMA32: 166992*4kB (UME) 68889*8kB (UE) 7*16kB (H) 11*32kB (H) 11*64kB (H) 2*128kB (H) 1*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1220760kB
Feb 14 11:42:52 localhost kernel: Node 0 Normal: 721354*4kB (UME) 126667*8kB (UEH) 16*16kB (H) 2*32kB (H) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3899072kB
Feb 14 11:42:52 localhost kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Feb 14 11:42:52 localhost kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Feb 14 11:42:52 localhost kernel: 125644 total pagecache pages
Feb 14 11:42:52 localhost kernel: 43931 pages in swap cache
Feb 14 11:42:52 localhost kernel: Swap cache stats: add 2753281, delete 2709350, find 730647/1154037
Feb 14 11:42:52 localhost kernel: Free swap = 12677364kB
Feb 14 11:42:52 localhost kernel: Total swap = 14124028kB
Feb 14 11:42:52 localhost kernel: 4179504 pages RAM
Feb 14 11:42:52 localhost kernel: 0 pages HighMem/MovableOnly
Feb 14 11:42:52 localhost kernel: 84923 pages reserved
Feb 14 11:42:52 localhost kernel: 0 pages hwpoisoned
(...)
Feb 14 11:42:52 localhost kernel: Out of memory: Kill process 27876 (firefox) score 41 or sacrifice child
Feb 14 11:42:52 localhost kernel: Killed process 27876 (firefox) total-vm:4003016kB, anon-rss:1091960kB, file-rss:41516kB, shmem-rss:80216kB
Here are some sysctl values I played around with which helped a little bit, but on bigger jobs it is still happening:
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
Are some of my ansible jobs really eating up all my system's memory + swap?
| Ansible triggering oom-killer |
The kernel killed:
Killed process 24355 (crawler) total-vm:9099416kB, anon-rss:7805456kB, file-rss:0kB
The process tried to allocate close to 9GB of RAM, which is more than your system can handle.
It looks like you have about 8GB of RAM (2097053 pages) and swap disabled. I'd say in advance that having swap in a situation like this is unlikely to help at all. If you have tasks which eat that much RAM you need at the very least 12GB of RAM.
|
The system has killed a process due to "Out of memory" but I cannot understand these messages.
I am not able to find the memory issue.
[Mon Jul 20 21:20:39 2020] crawler invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
[Mon Jul 20 21:20:39 2020] crawler cpuset=/ mems_allowed=0
[Mon Jul 20 21:20:39 2020] CPU: 0 PID: 24357 Comm: crawler Not tainted 4.4.0-1098-aws #109-Ubuntu
[Mon Jul 20 21:20:39 2020] Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
[Mon Jul 20 21:20:39 2020] 0000000000000286 1f6ee74fe469971b ffff8800978779e0 ffffffff81408801
[Mon Jul 20 21:20:39 2020] ffff880097877b98 ffff880204a5ee00 ffff880097877a50 ffffffff81216217
[Mon Jul 20 21:20:39 2020] 0000000000000000 ffff8800e2e2e9c0 ffff8801148cc4c0 ffff880097877a38
[Mon Jul 20 21:20:39 2020] Call Trace:
[Mon Jul 20 21:20:39 2020] [<ffffffff81408801>] dump_stack+0x63/0x82
[Mon Jul 20 21:20:39 2020] [<ffffffff81216217>] dump_header+0x5a/0x1c3
[Mon Jul 20 21:20:39 2020] [<ffffffff813a04e1>] ? apparmor_capable+0x131/0x1b0
[Mon Jul 20 21:20:39 2020] [<ffffffff8119a94b>] oom_kill_process+0x20b/0x3d0
[Mon Jul 20 21:20:39 2020] [<ffffffff8119ad58>] out_of_memory+0x1f8/0x460
[Mon Jul 20 21:20:39 2020] [<ffffffff811a0db3>] __alloc_pages_slowpath.constprop.89+0x943/0xaf0
[Mon Jul 20 21:20:39 2020] [<ffffffff811a11ff>] __alloc_pages_nodemask+0x29f/0x2b0
[Mon Jul 20 21:20:39 2020] [<ffffffff811ec7cc>] alloc_pages_current+0x8c/0x110
[Mon Jul 20 21:20:39 2020] [<ffffffff81196a8b>] __page_cache_alloc+0xab/0xc0
[Mon Jul 20 21:20:39 2020] [<ffffffff811993b0>] filemap_fault+0x160/0x440
[Mon Jul 20 21:20:39 2020] [<ffffffff812af5c6>] ext4_filemap_fault+0x36/0x50
[Mon Jul 20 21:20:39 2020] [<ffffffff811c6b87>] __do_fault+0x77/0x110
[Mon Jul 20 21:20:39 2020] [<ffffffff811cab29>] handle_mm_fault+0x1259/0x1b80
[Mon Jul 20 21:20:39 2020] [<ffffffff8183ee61>] ? __schedule+0x301/0x810
[Mon Jul 20 21:20:39 2020] [<ffffffff8183ee6d>] ? __schedule+0x30d/0x810
[Mon Jul 20 21:20:39 2020] [<ffffffff8183ee61>] ? __schedule+0x301/0x810
[Mon Jul 20 21:20:39 2020] [<ffffffff8183ee6d>] ? __schedule+0x30d/0x810
[Mon Jul 20 21:20:39 2020] [<ffffffff8183ee61>] ? __schedule+0x301/0x810
[Mon Jul 20 21:20:39 2020] [<ffffffff8106de74>] __do_page_fault+0x1a4/0x410
[Mon Jul 20 21:20:39 2020] [<ffffffff810fa96c>] ? ktime_get_ts64+0x4c/0x100
[Mon Jul 20 21:20:39 2020] [<ffffffff8106e102>] do_page_fault+0x22/0x30
[Mon Jul 20 21:20:39 2020] [<ffffffff81846938>] page_fault+0x28/0x30
[Mon Jul 20 21:20:39 2020] Mem-Info:
[Mon Jul 20 21:20:39 2020] active_anon:1971173 inactive_anon:13648 isolated_anon:0
active_file:0 inactive_file:21 isolated_file:0
unevictable:913 dirty:0 writeback:0 unstable:0
slab_reclaimable:7341 slab_unreclaimable:9130
mapped:1013 shmem:20671 pagetables:5035 bounce:0
free:25558 free_pcp:31 free_cma:0
[Mon Jul 20 21:20:39 2020] Node 0 DMA free:15900kB min:128kB low:160kB high:192kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15988kB managed:15900kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[Mon Jul 20 21:20:39 2020] lowmem_reserve[]: 0 3729 7951 7951 7951
[Mon Jul 20 21:20:39 2020] Node 0 DMA32 free:48524kB min:31640kB low:39548kB high:47460kB active_anon:3694868kB inactive_anon:31728kB active_file:28kB inactive_file:56kB unevictable:1984kB isolated(anon):0kB isolated(file):0kB present:3915776kB managed:3835212kB mlocked:1984kB dirty:0kB writeback:0kB mapped:2068kB shmem:40836kB slab_reclaimable:11392kB slab_unreclaimable:17676kB kernel_stack:1488kB pagetables:9948kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:552 all_unreclaimable? yes
[Mon Jul 20 21:20:39 2020] lowmem_reserve[]: 0 0 4221 4221 4221
[Mon Jul 20 21:20:39 2020] Node 0 Normal free:37808kB min:35808kB low:44760kB high:53712kB active_anon:4189824kB inactive_anon:22864kB active_file:0kB inactive_file:28kB unevictable:1668kB isolated(anon):0kB isolated(file):0kB present:4456448kB managed:4322624kB mlocked:1668kB dirty:0kB writeback:0kB mapped:1984kB shmem:41848kB slab_reclaimable:17972kB slab_unreclaimable:18844kB kernel_stack:1440kB pagetables:10192kB unstable:0kB bounce:0kB free_pcp:124kB local_pcp:4kB free_cma:0kB writeback_tmp:0kB pages_scanned:212 all_unreclaimable? yes
[Mon Jul 20 21:20:39 2020] lowmem_reserve[]: 0 0 0 0 0
[Mon Jul 20 21:20:39 2020] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15900kB
[Mon Jul 20 21:20:39 2020] Node 0 DMA32: 358*4kB (UME) 132*8kB (UME) 281*16kB (UME) 128*32kB (UME) 98*64kB (UME) 30*128kB (UME) 23*256kB (E) 14*512kB (ME) 8*1024kB (E) 3*2048kB (E) 0*4096kB = 48584kB
[Mon Jul 20 21:20:39 2020] Node 0 Normal: 423*4kB (UME) 127*8kB (UE) 153*16kB (UME) 172*32kB (UE) 103*64kB (UE) 39*128kB (UE) 21*256kB (UE) 10*512kB (ME) 1*1024kB (E) 0*2048kB 1*4096kB (H) = 37860kB
[Mon Jul 20 21:20:39 2020] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[Mon Jul 20 21:20:39 2020] 21375 total pagecache pages
[Mon Jul 20 21:20:39 2020] 0 pages in swap cache
[Mon Jul 20 21:20:39 2020] Swap cache stats: add 0, delete 0, find 0/0
[Mon Jul 20 21:20:39 2020] Free swap = 0kB
[Mon Jul 20 21:20:39 2020] Total swap = 0kB
[Mon Jul 20 21:20:39 2020] 2097053 pages RAM
[Mon Jul 20 21:20:39 2020] 0 pages HighMem/MovableOnly
[Mon Jul 20 21:20:39 2020] 53619 pages reserved
[Mon Jul 20 21:20:39 2020] 0 pages cma reserved
[Mon Jul 20 21:20:39 2020] 0 pages hwpoisoned
[Mon Jul 20 21:20:39 2020] Out of memory: Kill process 24355 (crawler) score 957 or sacrifice child
[Mon Jul 20 21:20:39 2020] Killed process 24355 (crawler) total-vm:9099416kB, anon-rss:7805456kB, file-rss:0kB
| Explanation for "Killed Process" |
The Linux kernel has a component called the OOM killer (out of memory). As Patrick pointed out in the comments, the OOM killer can be disabled, but the default setting is to allow overcommit (and thus enable the OOM killer).
Applications ask the kernel for more memory, and the kernel can refuse to give it to them (because there is not enough memory, or because ulimit has been used to deny more memory to the process). With overcommit enabled, an application may have asked for some memory and been granted it; but when the application later writes to a new memory page for the first time, the kernel actually has to allocate memory for it, and if it cannot do that, it has to decide which process to kill in order to free memory.
The kernel would rather kill new processes than old ones, especially those that (together with their children) consume a lot of memory. So in your case the new process might start, but it would probably be the one that gets killed.
You can use the files
/proc/self/oom_adj
/proc/self/oom_score
/proc/self/oom_score_adj
to check the current settings and to tell the kernel in which order it shall kill processes if necessary.
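As a small illustration (writing these files needs appropriate privileges, and lowering a score below 0 generally requires root; oom_adj is the older interface, oom_score_adj the newer one):
# see how likely the current shell is to be picked by the OOM killer
cat /proc/self/oom_score
cat /proc/self/oom_score_adj
# make the current shell a preferred victim (range -1000 to 1000; -1000 exempts it entirely)
echo 1000 > /proc/$$/oom_score_adj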
What happens if a Linux system, let's say Arch Linux or Debian, is installed with no swap partition or swap file? Then, when running the OS while almost out of RAM, the user opens a new application. Considering that this new application needs more RAM than is available, what will happen?
What part of the operating system handles RAM management operations, and can I configure it to behave differently?
| What happens if a Linux distro is installed with no swap and when it’s almost out of RAM executes a new application? [duplicate] |
You're trying to use 512GB of RAM. Either you optimize your program or you rent a server.
I don't think swap could help you.
|
I use Linux Mint 21.2 and my machine is Intel Core i-7 6700 3.4GHZ.
I wrote a prime test in Python, and I checked it with large numbers.
It is, like many tests, a variation of Fermat's little theorem, so I do some modular exponentiation like powmod(3, n-1, n).
I could verify that n = k * 2^k + 1 is prime for k = 6679881.
This is no surprise, since it has already been proven prime.
My test needed 124 hours for this number with 2,010,852 digits.
Now I wanted to check the Fermat number F_33, which is n = 2^(2^k) + 1 with k = 33.
This number has ~2,500,000 digits.
If I run it, after some minutes, I get the following error:
GNU MP: Cannot allocate memory (size=549755818000)
bash: line 1: 19354 Aborted (core dumped)
Do I have a chance to run this test?
| What does Memory allocation mean? |
I think HP doesn't play well with Linux:
https://www.quora.com/Which-laptop-is-a-better-option-for-running-Linux-Dell-HP-or-Lenovo
https://h30434.www3.hp.com/t5/Notebook-Software-and-How-To-Questions/Can-t-install-ubuntu-on-HP-probook-440-G4/td-p/6933204
I synced and ejected the media and verified my downloads; even so, any distro I try crashes (Debian, Fedora, Gentoo), but they all work on my Lenovo PC (also x86).
|
Hardware specs :
Product Name HP ProBook 430 G4
Processor 1 Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz (x86)
Memory Size 4096 MB
System BIOS P85 Ver. 01.03 12/05/2016
Serial Number 5CD7097FPZ
I wrote the MX image to removable media, i.e.
dd if=mxlinux.iso of=/dev/sda status=progress && sudo eject /dev/sda
BIOS finds the media's MBR, which starts GRUB, which boots MX Linux, and without user input consumes my memory in about 2 minutes, then crashes.
Debian suffers from the same issue; at least it doesn't crash until the partitioning step. Linux Mint and Ubuntu don't work due to impossible-to-troubleshoot random bugs.
I don't want to use Windows; as a last resort, a UN*X-like OS would be preferable.
What should I do? What OSes should I try?
| System ran out of memory during MX Linux installation |
Both options are used, depending on the circumstances.
When the kernel needs to allocate pages, and there are none available (or the watermarks have been reached), it will try to reclaim pages from the inactive lists (look for "Inactive" in /proc/meminfo). Reclaiming a page there doesn't necessarily involve swap:
non-dirty, file-backed pages will be discarded (they can be restored from their backing store);
dirty, file-backed pages will be written to their backing store and discarded;
only evictable pages with no backing store of their own will involve swap.
The OOM killer only steps in when the above isn't sufficient; it chooses the "worst" process (based on a number of criteria) and kills it.
|
The question is simple but I haven't found information (to be more precise, I have found information about both options (options below) but without saying which one is used in each situation).
Option 1: The kernel decides which is the best page that can evict from memory and swap to disk and eviction occurs so that the new page can arrive.
Option 2: The kernel kills one (or several) processes to free up considerable memory space at once.
The 2nd option seems better for performance (instead of going one page at a time, you free many memory pages at once), but it has the problem that it kills processes. So, which of the two options do modern Linux distributions implement? Does it depend on the exact situation?
If it depends on the exact linux distribution please answer it in a general way.
| What is done when memory gets filled: One page eviction or an entire process is killed? |
Oh, this topic has been discussed ad nauseam already.
See:
The Linux kernel's inability to gracefully handle low memory pressure
Let's talk about the elephant in the room - the Linux kernel's inability to gracefully handle low memory pressure
The solution? Various daemons like earlyoom (installed and enabled by default in Fedora 32), which is my favourite. There are others:
nohang - also read the Solution section
oomd
low-memory-monitor
psi-monitor |
I am using a CentOS 7.5 instance in AWS which has 16 CPUs and 32GB memory. I found when I run the following command, the whole system will be unresponsive, I cannot run any commands on it anymore, cannot even establish a new SSH session (but can still ping it). And I do not see OOM killer triggered at all, it seems the whole system just hang forever.
stress --vm 1 --vm-bytes 29800M --vm-hang 0
However, if I run stress --vm 1 --vm-bytes 29850M --vm-hang 0 to consume a bit more memory (50MB), the OOM killer is successfully triggered (I can see it in dmesg). And if I run the stress command to consume less memory than 29800MB (e.g. stress --vm 1 --vm-bytes 29700M --vm-hang 0), the system stays responsive (and there is no OOM kill) and I can run commands as usual.
So it seems 29800MB is a "magic number" for this instance: if I run stress to use more memory than that, the command gets OOM-killed; if I run stress to use less memory, everything is OK; and if I run stress to use exactly 29800MB, the whole system becomes unresponsive. I have also observed the same behavior on a Linux host with a different spec, e.g. a CentOS 7.5 instance with 72 CPUs and 144GB of memory, where the "magic number" is 137600MB.
My question is, why won't OOM kill be triggered when the "magic number" of memory is used?
| Why does Linux become unresponsive when a large amount of memory is used (OOM cannot be triggered)? |
Yes, turning that option on was enough to enable the MemoryMax to work as expected.
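For anyone else hitting this, a sketch of how one might verify the two pieces involved; the unit name is taken from the question, and reading /proc/config.gz assumes the kernel was built with CONFIG_IKCONFIG_PROC:
# check that the running kernel has the memory controller (cgroup memory) built in
zgrep 'CONFIG_MEMCG=' /proc/config.gz
# confirm systemd has registered the limit on the unit
systemctl show -p MemoryMax mqttLoop.service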
|
I am trying to configure my .service file to limit how much memory a given service can use up before being terminated, by percentage of system memory (10% as an upper limit in this case):
[Unit]
Description=MQTT Loop
After=radioLoop.service
[Service]
Type=simple
Environment=PYTHONIOENCODING=UTF-8
ExecStart=/usr/bin/python3 -u /opt/pilot/mqttLoop.py
WorkingDirectory=/opt/pilot
StandardOutput=journal
Restart=on-failure
User=pilot
MemoryMax=10%
[Install]
WantedBy=multi-user.target
The line of interest is the MemoryMax line, which I've tried to configure based on my understanding of the systemd docs.
My version of systemd is:
systemd 241 (241)
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid
But it does not work.
# ps -m -o lwp,rss,pmem,pcpu,unit -u pilot
LWP RSS %MEM %CPU UNIT
- 76244 30.3 8.5 mqttLoop.service
1232 - - 7.0 mqttLoop.service
1249 - - 1.7 mqttLoop.service
1254 - - 0.2 mqttLoop.service
I'm getting well above 10% (30% there), and then it does not restart the process. I've tried exchanging MemoryMax for MemoryLimit (the older variant of the same setting), but it has no effect. What am I missing?
UPDATE
I have determined that the systemd settings for memory accounting are correctly turned on.
# grep -i "memory" system.conf
#DefaultMemoryAccounting=yes
But I note the following in my kernel configuration (a screenshot showing the Memory Controller option not selected). Will it be enough if I rebuild my kernel with the Memory Controller option selected?
| system MemoryMax by percentage not working? |
For your use case, try the mlockall system call to force a specific process to never be swapped, thus avoiding the swap-thrashing slowdown.
I would recommend earlyoom with custom rules over this hack.
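For illustration, "earlyoom with custom rules" could look roughly like this; the thresholds and process-name patterns are made up, and the exact flags may differ between earlyoom versions:
# act when less than 10% RAM and 50% swap remain; prefer killing browsers,
# avoid killing the display server and sshd
earlyoom -m 10 -s 50 --prefer '^(firefox|chromium)$' --avoid '^(Xorg|sshd)$'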
|
I strongly despise any kinds of automatic OOM killers, and would like to resolve such situations manually. So for a long time I have
vm.overcommit_memory=1
vm.overcommit_ratio=200
But this way, when memory is exhausted, the system becomes unresponsive. On my old laptop with an HDD and 6 GB of RAM, I sometimes had to wait many minutes to switch to a text VT, issue some commands, and wait for them to be executed. That's why I have numerous performance indicators to notice such situations beforehand, and I often get asked why I would need them at all. And they don't always help either, because if a memory overflow happens when I'm not at the laptop, it's too late already.
I suspected the situation would be better on a newer laptop with an SSD and 12 GB of RAM, but in fact it's even worse. I have zRam with vm.swappiness=200, which allows up to 16.4 GB of compressed swap, and when it's nearly exhausted, the system becomes even more unresponsive than the old laptop, to the point that even VT switching barely works and I cannot SSH into the system from the local network, so my only resort is blindly invoking the kernel's manual OOM killer with Alt+SysRq+RF, which sometimes chooses to kill an important process like dbus-daemon. I might write a daemon that plays a sound alert when the swap is almost full, but that's a partial stopgap again, as I may not get there in time anyway.
In the past, I tried to mitigate such situations with thrash-protect. It sends SIGSTOP to greedy processes and then automatically SIGCONT-s them, which helped a lot to postpone the total lockup and resolve the situation manually, but in strong overload conditions, it starts freezing virtually everything (which can be explicitly allowlisted though). And it has a lot of irritating side effects. For example, if a shell is frozen, its child processes may remain frozen after thawing the shell. If two processes share a message bus and one of them is frozen, the messages are rapidly accumulated in the bus, which leads to rapidly growing RAM usage again, or lockups (graphical servers and multi-process browsers are especially prone to this).
I tried to run sshd with a -20 priority, like suggested in the similar question, but that doesn't really help: it's as unresponsive as with the default priority.
I would like to have some emergency console which is always locked in RAM and is usable regardless of how overloaded the rest of the system is. Something akin to Ctrl+Alt+Del screen in Windows NT≥6, or even better. Given that it's possible to reserve some RAM with the crashkernel parameter, which I use for kdump, I suspect it's possible to exploit this or some other kernel mechanism for the task too?
| Is it possible to reserve resources for an always-up emergency console? |
I manually downloaded a new installation since the Add/Remove software option wasn't letting me go further. It wasn't perfect, but it was better than a dead-end. In my search, I stumbled across this:
JetBrains ends 32-bit OS support
JetBrains only supported 32bit up to 2021.1.x, and the current version is 2021.3.x. I downloaded a down-level version here:
Install other/mostly older versions of Idea
Otherwise I need to wait until Raspbian officially supports 64-bit:
Raspberry Pi 64 bit status
|
First and foremost, I'm trying to purge an improperly uninstalled version of Intellij-Idea on Raspberry Pi and reinstall.
I installed IntelliJ-Idea on the Raspberry Pi using the below page as a guide:
Install Intellij-Idea on Raspberry Pi
After progressive success decreasing CPU/memory usage, and countless lock-up-and-kill-the-JVM loops, I ran into a wall with continuing "OutOfMemory" exceptions after the application launched fine and ran for a few minutes. (It indexes nearly the entire JVM, Maven, my home folder, etc.)
Yes, I increased JVM memory allocation (-Xms/-Xmx), increased swap space, etc. Thinking the root of my problem might be a caching issue, I tried to remove a few folders I thought were cache directories, but actually wiped out the main install image and all subdirectories instead.
So now I'm trying to force uninstall/delete so I can re-install.
I've tried:
Uninstalling from Add/Remove Software (only supporting packages are listed as "installed"; the main install doesn't appear at all)
sudo apt-get purge/remove intelliJ* (see output below)
sudo apt purge/remove intelliJ* (ditto)
Getting frustrated and manually deleting all the other "IdeaIC2021.2" and related folders (after backing up key config files). The start menu link won't work now, of course, but I can't get the Pi to truly purge the details and let me reinstall.
Bizarre half-installed state messages:
pi@raspberrypi:~ $ sudo apt purge intelliJ*
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'libjdom2-intellij-java-doc' for regex 'intelliJ*'
Note, selecting 'libintellij-annotations-java-doc' for regex 'intelliJ*'
Note, selecting 'libjdom2-intellij-java' for regex 'intelliJ*'
Note, selecting 'libintellij-annotations-java' for regex 'intelliJ*'
Package 'libintellij-annotations-java' is not installed, so not removed
Package 'libintellij-annotations-java-doc' is not installed, so not removed
Package 'libjdom2-intellij-java' is not installed, so not removed
Package 'libjdom2-intellij-java-doc' is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.Yes, I'm an idiot for deleting the wrong folder and then digging the hole deeper, but at this point I just want to overlay with a new install anyway. What's the correct way forward? Manual download and install because pi's registration details are horked? A set of sudo apt-get install commands?
| How to fix improperly uninstalled software on Raspberry Pi (Buster) |
nproc gives the number of CPU cores/threads available, e.g. 8 on a quad-core CPU supporting two-way SMT.
The number of jobs you can run in parallel with make using the -j option depends on a number of factors:
the amount of available memory
the amount of memory used by each make job
the extent to which make jobs are I/O- or CPU-bound
make -j$(nproc) is a decent place to start, but you can usually use higher values, as long as you don't exhaust your available memory and start thrashing.
For really fast builds, if you have enough memory, I recommend using a tmpfs, that way most jobs will be CPU-bound and make -j$(nproc) will work as fast as possible.
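As a hedged example of combining those factors, capping the job count by both CPU threads and available memory (the 2 GB-per-job figure is only a guess and depends heavily on the project):
# jobs limited by free memory, assuming roughly 2 GB per compile job
mem_jobs=$(( $(awk '/MemAvailable/ {print $2}' /proc/meminfo) / (2 * 1024 * 1024) ))
cpu_jobs=$(nproc)
jobs=$(( mem_jobs < cpu_jobs ? mem_jobs : cpu_jobs ))
make -j"$(( jobs > 0 ? jobs : 1 ))"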
|
I want to compile as fast as possible. Go figure. And would like to automate the choice of the number following the -j option. How can I programmatically choose that value, e.g. in a shell script?
Is the output of nproc equivalent to the number of threads I have available to compile with?
make -j1
make -j16
| How to determine the maximum number to pass to make -j option? |
Assuming that you are not able to get pssh or others installed, you could do something similar to:
tmpdir=${TMPDIR:-/tmp}/pssh.$$
mkdir -p "$tmpdir"
count=0
pids=""
while IFS= read -r userhost; do
    # -n detaches ssh from stdin; BatchMode=yes fails instead of prompting for passwords or host keys
    ssh -n -o BatchMode=yes "$userhost" 'uname -a' > "$tmpdir/$userhost" 2>&1 &
    pids="$pids $!"              # remember the PID of each background ssh
    count=`expr $count + 1`
done < userhost.lst
# wait for every background ssh to finish before reporting
for pid in $pids; do
    wait "$pid"
done
echo "Output for $count hosts is in $tmpdir"
There is a list of IP addresses in a .txt file, ex.:
1.1.1.1
2.2.2.2
3.3.3.3Behind every IP address there is a server, and on every server there is an sshd running on port 22. Not every server is in the known_hosts list (on my PC, Ubuntu 10.04 LTS/bash).
How can I run commands on these servers, and collect the output?
Ideally, I'd like to run the commands in parallel on all the servers.
I'll be using public key authentication on all the servers.
Here are some potential pitfalls:
The ssh prompts me to add the given server's ssh key to my known_hosts file.
The given commands might return a nonzero exit code, indicating that the output is potentially invalid. I need to recognize that.
A connection might fail to be established to a given server, for example because of a network error.
There should be a timeout, in case the command runs for longer than expected or the server goes down while running the command.
The servers are AIX/ksh (but I think that doesn't really matter).
| Automatically run commands over SSH on many servers |
As mavillan already suggested, just use terminator. It allows you to display many terminals in a tiled way. By enabling the broadcasting feature, clicking on the grid icon (top-left) and choosing "Broadcast All", you can enter the very same command simultaneously in each terminal.
Here is an example with the date command broadcast to a grid of 32 terminals. |
Is there any tool/command in Linux that I can use to run a command in more than one tab simultaneously? I want to run the same command: ./myprog argument1 argument2 simultaneously in more than one shell to check if the mutexes are working fine in a threaded program. I want to be able to increase the number of instances of this program so as to put my code under stress later on.
I am kind of looking for something like what wall does. I can think of using tty's, but that just seems like a lot of pain if I have to scale this to many more shells.
| How do I run the same linux command in more than one tab/shell simultaneously? |
The common protocols HTTP, FTP and SFTP support range requests, so you can
request part of a file. Note that this also requires server support, so it
might or might not work in practice.
You can use curl and the -r or --range option to specify the range and
eventually just catting the files together. Example:
curl -r 0-104857600 -o distro1.iso 'http://files.cdn/distro.iso'
curl -r 104857601-209715200 -o distro2.iso 'http://files.cdn/distro.iso'
And eventually, when you have gathered the individual parts, you concatenate them:
cat distro* > distro.iso
You can get further information about the file, including its size, with the --head option:
curl --head 'http://files.cdn/distro.iso'
You can retrieve the last chunk with an open range:
curl -r 604887601- -o distro9.iso 'http://files.cdn/distro.iso'
Read the curl man page for more options and explanations.
You can further leverage ssh and tmux to ease running and keeping
track of the downloads on multiple servers.
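A small sketch that automates the chunking for a single machine; the URL and the number of parts are placeholders, and it assumes the server reports Content-Length and honours range requests:
url='http://files.cdn/distro.iso'
parts=4
# total size in bytes, taken from the response headers
size=$(curl -sI "$url" | tr -d '\r' | awk 'tolower($1) == "content-length:" {print $2}')
chunk=$(( (size + parts - 1) / parts ))
for i in $(seq 0 $(( parts - 1 ))); do
    start=$(( i * chunk ))
    end=$(( start + chunk - 1 ))
    curl -r "$start-$end" -o "part$i" "$url" &   # download each range in the background
done
wait
cat $(seq -f "part%g" 0 $(( parts - 1 ))) > distro.iso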
|
I need to download a large file (1GB). I also have access to multiple computers running Linux, but each is limited to a 50kB/s download speed by an admin policy.
How do I distribute downloading this file on several computers and merge them after all segments have been downloaded, so that I can receive it faster?
| How do I distribute a large download over multiple computers? |
Check out pee ("tee standard input to pipes") from moreutils. This is basically equivalent to Marco's tee command, but a little simpler to type.
$ echo foo | pee md5sum sha256sum
d3b07384d113edec49eaa6238ad5ff00 -
b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c -
$ pee md5sum sha256sum <foo.iso
f109ffd6612e36e0fc1597eda65e9cf0 -
469a38cb785f8d47a0f85f968feff0be1d6f9398e353496ff7aa9055725bc63e -
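If moreutils isn't available, roughly the same effect can be had in bash with tee and process substitution (a sketch; the substituted processes run asynchronously, so the checksum files may appear an instant after tee returns):
# read foo.iso once, hash it with two programs at the same time
tee >(md5sum > foo.iso.md5) >(sha256sum > foo.iso.sha256) < foo.iso > /dev/null
|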
Under the assumption that disk I/O and free RAM is a bottleneck (while CPU time is not the limitation), does a tool exist that can calculate multiple message digests at once?
I am particularly interested in calculating the MD-5 and SHA-256 digests of large files (size in gigabytes), preferably in parallel. I have tried openssl dgst -sha256 -md5, but it only calculates the hash using one algorithm.
Pseudo-code for the expected behavior:
for each block:
for each algorithm:
hash_state[algorithm].update(block)
for each algorithm:
print algorithm, hash_state[algorithm].final_hash()
| Simultaneously calculate multiple digests (md5, sha256)? |
If you have a copy of xargs that supports parallel execution with -P, you can simply do
printf '%s\0' *.png | xargs -0 -I {} -P 4 ./pngout -s0 {} R{}
For other ideas, the Wooledge Bash wiki has a section in the Process Management article describing exactly what you want.
|
I have a bunch of PNG images on a directory. I have an application called pngout that I run to compress these images. This application is called by a script I did. The problem is that this script does one at a time, something like this:
FILES=(./*.png)
for f in "${FILES[@]}"
do
echo "Processing $f file..."
# take action on each file. $f store current file name
./pngout -s0 $f R${f/\.\//}
done
Processing just one file at a time takes a lot of time. After running this app, I see that the CPU is at just 10%. So I discovered that I can divide these files into 4 batches, put each batch in a directory, and fire off four processes from four terminal windows, so I have four instances of my script processing those images at the same time, and the job takes 1/4 of the time.
The second problem is that I lost time dividing the images into batches, copying the script to four directories, opening 4 terminal windows, and so on.
How do I do that with one script, without having to divide anything?
I mean two things: first, how do I fire a process into the background from a bash script (just add & to the end?), and second, how do I stop sending tasks to the background after sending the fourth task and make the script wait until the tasks end? I mean, sending a new task to the background as each task ends, always keeping 4 tasks running in parallel. If I don't do that, the loop will fire zillions of tasks into the background and the CPU will clog.
| Four tasks in parallel... how do I do that? |
for((i=1;i<100;i++)); do nohup bash script${i}.sh & done
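If you also want the launching shell to block until everything has finished, a variant along these lines (the range 1..3 matches the question; adjust as needed):
for i in 1 2 3; do
    nohup bash "script${i}.sh" > "script${i}.log" 2>&1 &   # start each script in the background
done
wait   # return only once all background scripts have exited
|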
Suppose that I have three (or more) bash scripts: script1.sh, script2.sh, and script3.sh. I would like to call all three of these scripts and run them in parallel. One way to do this is to just execute the following commands:
nohup bash script1.sh &
nohup bash script2.sh &
nohup bash script3.sh &(In general, the scripts may take several hours or days to finish, so I would like to use nohup so that they continue running even if my console closes.)
But, is there any way to execute those three commands in parallel with a single call?
I was thinking something like
nohup bash script{1..3}.sh &
but this appears to execute script1.sh, script2.sh, and script3.sh in sequence, not in parallel.
| Calling multiple bash scripts and running them in parallel, not in sequence |
A problem with split --filter is that the output can be mixed up, so you get half a line from process 1 followed by half a line from process 2.
GNU Parallel guarantees there will be no mixup.
So assume you want to do:
A | B | CBut that B is terribly slow, and thus you want to parallelize that. Then you can do:
A | parallel --pipe B | CGNU Parallel by default splits on \n and a block size of 1 MB. This can be adjusted with --recend and --block.
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.shWatch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
|
Consider the following scenario. I have two programs A and B. Program A outputs lines of strings to stdout, while program B processes lines from stdin. The way to use these two programs is of course:
foo@bar:~$ A | B
Now I've noticed that this eats up only one core; hence I am wondering:
Are programs A and B sharing the same computational resources? If so, is there a way to run A and B concurrently?
Another thing that I've noticed is that A runs much, much faster than B, hence I am wondering if I could somehow run more B programs and let them process the lines that A outputs in parallel.
That is, A would output its lines, and there would be N instances of program B that would read these lines (whoever reads them first), process them and output them on stdout.
So my final question is:
Is there a way to pipe the output of A among several B processes without having to take care of race conditions and other inconsistencies that could potentially arise?
This should do the trick:
echo -n $IPs | xargs --max-args=1 -I {} --delimiter ' ' --max-procs=0 \
sh -c "wget -q -O- 'http://{}/somepage.html' | egrep --count '^string'" | \
{ NUM=0; while read i; do NUM=$(($NUM + $i)); done; echo $NUM; }The idea here is to make separate counts and sum these at the end.
This might fail if the separate counts are long enough for their outputs to get interleaved, but that should not be the case here.
|
I'm using xargs with the option --max-procs=0 (alternatively -P 0).
However, the output of the processes is merged into the stdout stream without regard for proper line separation. So I'll often end up with lines such as:
<start-of-line-1><line-2><end-of-line-1>As I'm using egrep with ^ in my pattern on the whole xargs output this is messing up my result.
Is there some way to force xargs to write the process outputs in order (any order, as long as the output of one process is contiguous)?
Or some other solution?
Edit: more details about the use case:
I want to download and parse web pages from different hosts. As every page takes about a second to load and there are a few dozen pages I want to parallelize the requests.
My command has the following form:
echo -n $IPs | xargs --max-args=1 -I {} --delimiter ' ' --max-procs=0 \
wget -q -O- http://{}/somepage.html | egrep --count '^string'
I use bash and not something like Perl because the host IPs (the $IPs variable) and some other data come from an included bash file.
| How to stop xargs from badly merging output from multiple processes? |
You can get a first impression by checking whether the utility is linked with the pthread library. Any dynamically linked program that uses OS threads should use the pthread library.
ldd /bin/grep | grep -F libpthread.soSo for example on Ubuntu:
for x in $(dpkg -L coreutils grep findutils util-linux | grep /bin/); do if ldd $x | grep -q -F libpthread.so; then echo $x; fi; doneHowever, this produces a lot of false positives due to programs that are linked with a library that itself is linked with pthread. For example, /bin/mkdir on my system is linked with PCRE (I don't know why…) which itself is linked with pthread. But mkdir is not parallelized in any way.
In practice, checking whether the executable contains libpthread gives more reliable results. It could miss executables whose parallel behavior is entirely contained in a library, but basic utility typically aren't designed that way.
dpkg -L coreutils grep findutils util-linux | grep /bin/ | xargs grep pthread
Binary file /usr/bin/timeout matches
Binary file /usr/bin/sort matches
So the only tool that actually has a chance of being parallelized is sort. (timeout only links to libpthread because it links to librt.) GNU sort does work in parallel: the number of threads can be configured with the --parallel option, and by default it uses one thread per processor up to 8. (Using more processors gives less and less benefit as the number of processors increases, tapering off at a rate that depends on how parallelizable the task is.)
grep isn't parallelized at all. The PCRE library actually links to the pthread library only because it provides thread-safe functions that use locks and the lock manipulation functions are in the pthread library.
The typical simple approach to benefit from parallelization when processing a large amount of data is to split this data into pieces, and process the pieces in parallel. In the case of grep, keep file sizes manageable (for example, if they're log files, rotate them often enough) and call separate instances of grep on each file (for example with GNU Parallel). Note that grepping is usually IO-bound (it's only CPU-bound if you have a very complicated regex, or if you hit some Unicode corner cases of GNU grep where it has bad performance), so you're unlikely to get much benefit from having many threads.
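For example, a minimal sketch of that approach (the log path and job count are made up, not from the original answer): run one grep per file with GNU Parallel and then sum the per-file counts:
parallel -j4 grep -c '^string' {} ::: /var/log/myapp/*.log | awk '{s+=$1} END {print s}'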
|
In a common Linux distribution, do utilities like rm, mv, ls, grep, wc, etc. run in parallel on their arguments?
In other words, if I grep a huge file on a 32-threaded CPU, will it go faster than on dual-core CPU?
| Are basic POSIX utilities parallelized? |
GNU Parallel is designed for this kind of task:
parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output ::: *.input
or:
ls | parallel customScript -c 33 -I -file {} -a -v 55 '>' {.}.output
It will run one job per CPU core.
You can install GNU Parallel simply by:
wget https://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
|
I have a shell scripting problem where I'm given a directory full of input files (each file containing many input lines), and I need to process them individually, redirecting each of their outputs to a unique file (aka, file_1.input needs to be captured in file_1.output, and so on).
Pre-parallel, I would just iterate over each file in the directory and perform my command, while doing some sort of timer/counting technique to not overwhelm the processors (assuming that each process had a constant runtime). However, I know that won't always be the case, so using a "parallel" like solution seems the best way to get shell script multi-threading without writing custom code.
While I have thought of some ways to whip up parallel to process each of these files (and allowing me to manage my cores efficiently), they all seem hacky. I have what I think is a pretty easy use case, so I would prefer to keep it as clean as possible (and nothing in the parallel examples seems to jump out as addressing my problem).
Any help would be appreciated!
input directory example:
> ls -l input_files/
total 13355
location1.txt
location2.txt
location3.txt
location4.txt
location5.txt
Script:
> cat proces_script.sh
#!/bin/sh

customScript -c 33 -I -file [inputFile] -a -v 55 > [outputFile]
Update:
After reading Ole's answer below, I was able to put together the missing pieces for my own parallel implementation. While his answer is great, here is my addition research and notes I took:
Instead of running my full process, I figured to start with a proof of concept command to prove out his solution in my environment. See my two different implementations (and notes):
find /home/me/input_files -type f -name *.txt | parallel cat /home/me/input_files/{} '>' /home/me/output_files/{.}.out
Uses find (not ls, that can cause issues) to find all applicable files within my input files directory, and then redirects their contents to a separate directory and file. My issue from above was reading and redirecting (the actual script was simple), so replacing the script with cat was a fine proof of concept.
parallel cat '>' /home/me/output_files/{.}.out ::: /home/me/input_files/*
This second solution uses parallel's input variable paradigm to read the files in; however, for a novice, this was much more confusing. For me, using find and a pipe met my needs just fine.
| using parallel to process unique input files to unique output files |
Use wait. For example:
Data1 ... > Data1Res.csv &
Data2 ... > Data2Res.csv &
wait
AnalysisProg
will:
run the Data1 and Data2 pipes as background jobs
wait for them both to finish
run AnalysisProg.
See, e.g., this question.
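If you also want AnalysisProg to run only when both pipelines succeed, a sketch (my addition, not from the original answer; note that for a pipeline, $! and the status reported by wait refer to its last command) is:
Data1 ... > Data1Res.csv & pid1=$!
Data2 ... > Data2Res.csv & pid2=$!
wait "$pid1" && wait "$pid2" && AnalysisProg -i Data1Res.csv Data2Res.csv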
|
I have a bash shell script in which I pipe some data through about 5 or 6 different programs then the final results into a tab delimited file.
I then do the same again for a separate similar dataset and output to a second file.
Then both files are input into another program for comparative analysis.
e.g. to simplify
Data1 | this | that |theother | grep |sed | awk |whatever > Data1Res.csv
Data2 | this | that |theother | grep |sed | awk |whatever > Data2Res.csv
AnalysisProg -i Data1res.csv Data2res.csv
My question is: how can I make step1 and step2 run at the same time (e.g. using &) but only launch step3 (AnalysisProg) when both are complete?
thx
ps AnalysisProg will not work on a stream or fifo.
| How to run parallel processes and combine outputs when both finished |
Using GNU Parallel:
$ parallel -j ${jobs} wget < urls.txt
or xargs from GNU Findutils:
$ xargs -n 1 -P ${jobs} wget < urls.txt
where ${jobs} is the maximum number of wget processes you want to allow to run concurrently (setting -n to 1 to get one wget invocation per line in urls.txt). Without -j/-P, parallel will run as many jobs at a time as CPU cores (which doesn't necessarily make sense for wget bound by network IO), and xargs will run one at a time.
One nice feature that parallel has over xargs is keeping the output of the concurrently-running jobs separated, but if you don't care about that, xargs is more likely to be pre-installed.
|
I've found only puf (Parallel URL fetcher) but I couldn't get it to read urls from a file; something like
puf < urls.txt
does not work either.
The operating system installed on the server is Ubuntu.
| Is there parallel wget? Something like fping but only for downloading? |
If you're using GNU xargs, there's --process-slot-var:--process-slot-var=environment-variable-name
Set the environment variable environment-variable-name to a unique
value in each running child process. Each value is a decimal integer.
Values are reused once child processes exit. This can be used in a
rudimentary load distribution scheme, for example.So, for example:
~ echo {1..9} | xargs -n2 -P2 --process-slot-var=index sh -c 'echo "$index" "$@" "$$"' _
0 1 2 10475
1 3 4 10476
1 5 6 10477
0 7 8 10478
1 9 10479 |
Suppose I have two resources, named 0 and 1, that can only be accessed exclusively.
Is there any way to recover the "index" of the "parallel processor" that xargs launches in order to use it as a free mutual exclusion service? E.g., consider the following parallelized computation:
$ echo {1..8} | xargs -d " " -P 2 -I {} echo "consuming task {}"
consuming task 1
consuming task 2
consuming task 3
consuming task 4
consuming task 5
consuming task 6
consuming task 7
consuming task 8
My question is whether there exists a magic word, say index, where the output would look like
$ echo {1..8} | xargs -d " " -P 2 -I {} echo "consuming task {} with resource index"
consuming task 1 with resource 0
consuming task 2 with resource 1
consuming task 3 with resource 1
consuming task 4 with resource 1
consuming task 5 with resource 0
consuming task 6 with resource 1
consuming task 7 with resource 0
consuming task 8 with resource 0
where the only guarantee is that there is only ever at most one process using resource 0 and same for 1. Basically, I'd like to communicate this index down to the child process that would respect the rule to only use the resource it was told to.
Of course, it'd be preferable to extend this to more than two resources. Inspecting the docs, xargs probably can't do this. Is there a minimal equivalent solution? Using/cleaning files as fake locks is not preferable.
| How can I get the index of the xargs "parallel processor"? |
This looks like a job for gnu parallel:
parallel bash -c ::: script_*
The advantage is that you don't have to group your scripts by cores; parallel will do that for you.
Of course, if you don't want to babysit the SSH session while the scripts are running, you should use nohup or screen
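For example, a sketch that caps the run at 64 concurrent scripts and survives the SSH session ending (the log file name is made up):
nohup parallel -j 64 bash -c ::: script_* > parallel.log 2>&1 &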
|
I can ssh into a remote machine that has 64 cores. Lets say I need to run 640 shell scripts in parallel on this machine. How do I do this?
I can see splitting the 640 scripts into 64 groups each of 10 scripts. How would I then run each of these groups in parallel, i.e. one group on each of one of the available cores.
Would a script of the form
./script_A &
./script_B &
./script_C &
...
where script_A corresponds to the first group, script_B to the second group, etc., suffice?
The scripts within one group that run on one core are ok to run sequentially, but I want the groups to run in parallel across all cores.
| How to run scripts in parallel on a remote machine? |
lftp would do this with the command mirror -R -P 20 localpath: mirror syncs between locations, -R reverses it so that the remote server is the destination, and -P 20 transfers 20 files in parallel at once.
As explained in man lftp:
mirror [OPTS] [source [target]]
    Mirror specified source directory to local target directory. If target directory ends with a slash, the source base name is appended to target directory name. Source and/or target can be URLs pointing to directories.
-R, --reverse
    reverse mirror (put files)
-P, --parallel[=N]
    download N files in parallel
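A complete invocation might look something like this (host, credentials and paths are placeholders, not from the original answer):
lftp -u user,password -e 'mirror -R -P 20 /local/dir /remote/dir; bye' ftp.example.com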
I need to upload a directory with a rather complicated tree (lots of subdirectories, etc.) by FTP. I am unable to compress this directory, since I do not have any access to the destination apart from FTP - e.g. no tar. Since this is over a very long distance (USA => Australia), latency is quite high.
Following the advice in How to FTP multiple folders to another server using mput in Unix?, I am currently using ncftp to perform the transfer with mput -r. Unfortunately, this seems to transfer a single file at a time, wasting a lot of the available bandwidth on communication overhead.
Is there any way I can parallelise this process, i.e. upload multiple files from this directory at the same time? Of course, I could manually split it and execute mput -r on each chunk, but that's a tedious process.
A CLI method is heavily preferred, as the client machine is actually a headless server accessed via SSH.
| How can I parallelise the upload of a directory by FTP? |
Yes, it is. If you want to do two things concurrently, and wait for them both to complete, you can do something like:
sh ./stay/get_it_ios.sh & PIDIOS=$!
sh ./stay/get_it_mix.sh & PIDMIX=$!
wait $PIDIOS
wait $PIDMIXYour script will then run both scripts in parallel, and wait for both scripts to complete before continuing.
|
I know that on the command line I can use & to run a command in the background. But I'm wondering if I can do it in a script.
I have a script like this:
date_stamp=$(date +"%Y-%m-%d" --date='yesterday')
shopt -s extglob

cd /my/working/directory/

sh ./stay/get_it_ios.sh
sh ./stay/get_it_mix.sh

cd stay
zip ../stay_$date_stamp.zip ./*201*

rm ./stay/!(*py|*sh)
And I want to run sh ./stay/get_it_ios.sh and sh ./stay/get_it_mix.sh together to get more accurate data. Is it possible to do this in the scope of a shell script?
| Is it possible to run two commands at the same time in a shell script? |
Red represents the time spent in the kernel, typically processing system calls on behalf of processes. This includes time spent on I/O. There's no point in trying to reduce it just for the sake of reducing it, because it's not time that's wasted: it's time that's spent by the kernel doing useful stuff (as long as you're not thrashing, so look at the number of context switches etc.).
The observation that
I've experimented with using fewer cores for the downloading process. When I do so, it can't keep up with the processing script (I'm DLing from S3).
suggests that your current setup is evenly balanced between the I/O needed to feed the processing, and the processing itself, which is a rather nice result. If you suspect that you've got too many processes running, and that that's causing waste (by thrashing), then you could try reducing the number of geoprocessing jobs, to see if your overall throughput increases. The usual benchmarking tips apply: identify what you're going to tweak, determine what resulting variations could occur and what they mean, only tweak one thing at a time, and measure everything.
| I've read that the color red indicates "kernel processes." Does that mean little daemons that are regulating which task gets to use the CPU? And by extension, transaction costs in an oversubscribed system?
I'm running some large-scale geoprocessing jobs, and I've got two scripts running in parallel at the same time.
The first script does the actual processing, on all 96 cores. It is responsible for almost all of the memory use.
The second script uses curl to download the data to feed the first process, and it does so in parallel. I wrote it to download only until there are n_cores * 3 files downloaded. If that constraint isn't met, it waits for a minute or so and then check again. So, most of the time it isn't running -- or rather it is executing the Sys.sleep() command in R.
I've experimented with using fewer cores for the downloading process. When I do so, it can't keep up with the processing script (I'm DLing from S3).
TL;DR: Would my processes run faster if I could make htop less red? And are they red because there are more processes than cores?
| Lots of red in htop -- does that mean my tasks are tripping over each other? |
GNU parallel is made for just this sort of thing. You can run your script many times at once, with different data from your input piped in for each one:
cat input.txt | parallel --pipe your-script.sh
By default it will spawn processes according to the number of processors on your system, but you can customise that with -j N.
A particularly neat trick is the shebang-wrapping feature. If you change the first line of your Bash script to:
#!/usr/bin/parallel --shebang-wrap --pipe /bin/bash
and feed it data on standard input, then it will all happen automatically. This is less useful when you have cleanup code that has to run at the end, which you may do.
There are a couple of things to note. One is that it will chop up your input into sequential chunks and use those one at a time - it doesn't interleave lines. The other is that those chunks are split by size, without regard for how many records there are. You can use --block N to set a different block size in bytes. In your case, no more than an eighth of the file size should be about right. Your file sounds like it might be small enough to end up all in a single block otherwise, which would defeat the purpose.
There are a lot of options for particular different use cases, but the tutorial covers things pretty well. Options you might also be interested in include --round-robin and --group.
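For instance, a sketch that forces smaller chunks so all eight jobs actually receive data (the 64k figure is a made-up illustration, not a recommendation from the answer):
cat input.txt | parallel --pipe --block 64k -j 8 ./your-script.sh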
|
I have written a bash script which is in following format:
#!/bin/bash
start=$(date +%s)
inFile="input.txt"
outFile="output.csv"rm -f $inFile $outFilewhile read line
do -- Block of Commandsdone < "$inFile"end=$(date +%s)runtime=$((end-start))echo "Program has finished execution in $runtime seconds."The while loop will read from $inFile, perform some activity on the line and dump the result in $outFile.
As the $inFile is 3500+ lines long, the script would take 6-7 hours for executing completely. In order to minimize this time, I am planning to use multi-threading or forking in this script. If I create 8 child processes, 8 lines from the $inFile would be processed simultaneously.
How can this be done?
| Multi-Threading/Forking in a bash script |
On linux, the system call to set the CPU affinity for a process is sched_setaffinity. Then there's the taskset tool to do it on the command line.
To have that single program run on only one CPU, I think you'd want something like
taskset -c 1 ./myprogram
(Set any CPU number as an argument to the -c switch.)
That should be close enough to a single-processor system, as long as your other processes don't run too much compared to the one you want to measure, or they get scheduled to other CPU's. If you want to dedicate one CPU to that single process only, and prevent other processes from running on that CPU, you'd need to set their affinity too.
That, I don't know how to do properly. You'd need to set the processor affinity of init very early in the boot process to make sure it gets inherited to all processes on the system. As a workaround, you could use taskset -c -p 0 $PID for all other processes to force them to run on CPU #0 only.
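A rough sketch of that workaround (my own, untested; pinning fails for some kernel threads, hence the error suppression):
for pid in $(ps -eo pid=); do
    taskset -c -p 0 "$pid" 2>/dev/null   # pin every existing task to CPU 0
done
taskset -c 1 ./myprogram                 # keep CPU 1 for the benchmark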
systemd also has CPUAffinity= to control the affinity in unit files and there are a couple of questions on setting the default affinity here on unix.SE, but I didn't find any with a good solution.
Though as @Kamil Maciorowski commented and answered to another question on superuser.com, setting isolcpus=1
on the kernel command line should "isolate that CPU from the general scheduling algorithms", which is something you may want.
|
I need to run performance tests for my concurrent program, and my requirement is that it should run on only one CPU core. (I don't want cooperative threads - I always want context switching.)
So I have two questions:
The best solution - how to assign and reserve one CPU core only for my program (to force the OS not to use this CPU core). I guess it is not possible, but maybe I'm wrong...
How to set Linux (Fedora 24) to use only one CPU core? | Using only one cpu core
Confiq's answer is a good one for small i and j. However, given the size of i and j in your question, you will likely want to limit the overall number of processes spawned. You can do this with the parallel command or some versions of xargs. For example, using an xargs that supports the -P flag you could parallelize your inner loop as follows:
for i in {0800..9999}; do
echo {001..032} | xargs -n 1 -P 8 -I{} wget http://example.com/"$i-{}".jpg
done
GNU parallel has a large number of features for when you need more sophisticated behavior and makes it easy to parallelize over both parameters:
parallel -a <(seq 0800 9999) -a <(seq 001 032) -P 8 wget http://example.com/{1}-{2}.jpg |
I have the following bash script:
for i in {0800..9999}; do
for j in {001..032}; do
wget http://example.com/"$i-$j".jpg
done
done
All the photos exist and, in fact, each iteration does not depend on another. How can I parallelize this with the possibility of controlling the number of threads?
| Download several files with wget in parallel |
$! is guaranteed to give you the pid of the process in which the shell executed that tail command. Shells are single threaded, each shell lives in its own process with its own set of variables. There's no way the $! of one shell is going to leak into another shell, just like assigning a shell variable in one shell is not going to affect the variable of the same name in another shell (if we set aside the universal variables of the fish shell).
Now, tail -f /dev/null is a command that runs indefinitely, but for short-lived commands, note that since there's a limited number of possible process ids, process ids inevitably end up being reused.
In:
true &
pid=$!
That $pid will contain the id of the process where the shell ran true, but by the time you use that $pid, that pid may well be dead and could be referring to a different process.
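A sketch of how this plays out in practice (my own illustration, not from the original answer): as long as the launching shell has not yet reaped the child with wait, the kernel keeps the (possibly zombie) process entry around, so $pid still refers to that child:
true & pid=$!
# ... other work in the same shell ...
wait "$pid"    # reap the child and collect its exit status
echo "background job $pid exited with status $?"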
|
Say I have multiple bash scripts that run in parallel, with code like the following:
#!/bin/bashtail -f /dev/null &
echo "pid is "$!Is $! guaranteed to give me the PID of the most recent background task in that script, or is it the most recent background task globally? I'm just curious if relying on this feature can cause race conditions when the PID it returns is from a process started in another script.
| Can $! cause race conditions when used in scripts running in parallel? |
The outer loop that you have is basically
for i in {1..10}; do
some_compound_command &
done
This would start ten concurrent instances of some_compound_command in the background. They will be started as fast as possible, but not quite "all at the same time" (i.e. if some_compound_command takes very little time, then the first may well finish before the last one starts).
The fact that some_compound_command happens to be a loop is not important. This means that the code that you show is correct in that iterations of the inner j-loop will be running sequentially, but all instances of the inner loop (one per iteration of the outer i-loop) would be started concurrently.
The only thing to keep in mind is that each background job will be running in a subshell. This means that changes made to the environment (e.g. modifications to values of shell variables, changes of current working directory with cd, etc.) in one instance of the inner loop will not be visible outside of that particular background job.
What you may want to add is a wait statement after your loop, just to wait for all background jobs to actually finish, at least before the script terminates:
for i in {1..10}; do
for j in {1..10}; do
run_command "$i" "$j"
done &
done
wait
Is this the correct way to start multiple sequential processings in the background?
for i in {1..10}; do
for j in {1..10}; do
run_command $i $j;
done &
done;All j should be processed after each other for a given i, but all i should be processed simultaneously.
| Bash: Multiple for loops in Background |
It seems tar wants to know all the file names upfront. So it is less on-the-fly and more after-the-fly. cpio does not seem to have that problem:
| cpio -vo 2>&1 > >(gzip > /tmp/arc.cpio.gz) | parallel rm |
I have an embarrassingly parallel process that creates a huge amount of nearly (but not completely) identical files. Is there a way to archive the files "on the fly", so that the data does not consume more space than necessary?
The process itself accepts command-line parameters and prints the name of each file created to stdout. I'm invoking it with parallel --gnu which takes care of distributing input (which comes from another process) and collecting output:
arg_generating_process | parallel --gnu my_process | magic_otf_compressor
SIMPLE EXAMPLE for the first part of the pipe in bash:
for ((f = 0; $f < 100000; f++)); do touch $f; echo $f; done
What could magic_otf_compressor look like? It's supposed to treat each input line as a file name, copy each file to a compressed .tar archive (the same archive for all files processed!) and then delete it. (Actually, it should be enough to print the name of each processed file; another | parallel --gnu rm could take care of deleting the files.)
Is there any such tool? I'm not considering compressing each file individually, this would waste far too much space. I have looked into archivemount (will keep file system in memory -> impossible, my files are too large and too many) and avfs (couldn't get it to work together with FUSE). What have I missed?
I'm just one step away from hacking such a tool myself, but somebody must have done it before...
EDIT: Essentially I think I'm looking for a stdin front-end for libtar (as opposed to the command-line front-end tar that reads arguments from, well, the command line).
| Virtual write-only file system for storing files in archive |
This is possible and does occur in reality. Use a lock file to avoid this situation. An example, from said page:
if mkdir /var/lock/mylock; then
echo "Locking succeeded" >&2
else
echo "Lock failed - exit" >&2
exit 1
fi

# ... program code ...

rmdir /var/lock/mylock
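If the goal is to let two script instances share the work safely, the same trick can be applied per file; a sketch along the lines of the pseudo-code in the question (directory names are made up, and this is untested):
mkdir -p ./locks ./done
for file in ./folder/*; do
    name=$(basename "$file")
    # mkdir is atomic: only one instance wins the lock for this file
    if mkdir "./locks/$name.lock" 2>/dev/null; then
        [ -f "./done/$name" ] || ./bin/myProgram "$file" > "./done/$name"
        rmdir "./locks/$name.lock"
    fi
done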
I have a small script that loops through all files of a folder and executes a (usually long lasting) command. Basically it's
for file in ./folder/*;
do
./bin/myProgram $file > ./done/$file
done
(Please ignore syntax errors, it's just pseudo code.)
I now wanted to run this script twice at the same time. Obviously, the execution is unnecessary if ./done/$file exists. So I changed the script to
for file in ./folder/*;
do
[ -f ./done/$file ] || ./bin/myProgram $file >./done/$file
done
So basically the question is:
Is it possible that both scripts (or in general more than one script) actually are at the same point and check for the existance of the done file which fails and the command runs twice?
it would be just perfect, but I highly doubt it. This would be too easy :D
If it can happen that they process the same file, is it possible to somehow "synchronize" the scripts?
| Parallel execution of a program on multiple files |
#!/bin/bash

jobs_to_run_num=10
simult_jobs_num=3
have_runned_jobs_cntr=0
check_interval=0.1

while ((have_runned_jobs_cntr < jobs_to_run_num)); do
cur_jobs_num=$(wc -l < <(jobs -r))
if ((cur_jobs_num < simult_jobs_num)); then
./random_time_script.sh &
echo -e "cur_jobs_num\t$((cur_jobs_num + 1))"
((have_runned_jobs_cntr++))
# sleep is needed to reduce the frequency of while loop
# otherwise it itself will eat a lot of processor time
# by restlessly checking
else
sleep "$check_interval"
fi
done
The better way - by using wait -n. There is no need to check the job count every iteration or to use the sleep command.
jobs_to_run_num=10
simult_jobs_num=3

while ((have_runned_jobs_cntr < jobs_to_run_num)); do
if (( i++ >= simult_jobs_num )); then
wait -n # wait for any job to complete. New in 4.3
fi
./random_time_script.sh &
((have_runned_jobs_cntr++)) # For demonstration
cur_jobs_num=$(wc -l < <(jobs -r))
echo -e "cur_jobs_num\t${cur_jobs_num}"
done
Idea from here - I want to process a bunch of files in parallel, and when one finishes, I want to start the next. And I want to make sure there are exactly 5 jobs running at a time.
Testing
$ ./test_simult_jobs.sh
cur_jobs_num 1
cur_jobs_num 2
cur_jobs_num 3
cur_jobs_num 3
cur_jobs_num 3
cur_jobs_num 3
cur_jobs_num 3
cur_jobs_num 3
cur_jobs_num 3
cur_jobs_num 3 |
In bash script, I have a program like this
for i in {1..1000}
do
foo i
done
where I call the function foo 1000 times with parameter i.
If I want to make it run in multi-process, but not all at once, what should I do?
So if I have
for i in {1..1000}
do
foo i &
done
It would start all 1000 processes at once, which is not what I want.
Is there a way to make sure that there is always 100 process running? If some processes are finished, start some new ones, until all 1000 iterations are done. Alternatively, I could wait till all 100 are finished and run another 100.
| Start 100 process at a time in bash script |
To list all files that start with a number in a directory:
find . -maxdepth 1 -regextype "posix-egrep" -regex '.*/[0-9]+.*\.mp3' -type f
The problem with your approach is that find returns the relative path of each file, while your pattern only expects the filename itself.
I've a problem modifying the files' names in my Music/ directory.
I have a list of names like these:
$ ls
01 American Idiot.mp3
01 Articolo 31 - Domani Smetto.mp3
01 Bohemian rapsody.mp3
01 Eye of the Tiger.mp3
04 Halo.mp3
04 Indietro.mp3
04 You Can't Hurry Love.mp3
05 Beautiful girls.mp3
16 Apologize.mp3
16 Christmas Is All Around.mp3
Adam's song.mp3
A far l'amore comincia tu.mp3
All By My Self.MP3
Always.mp3
Angel.mp3
And similar, and I would like to cut all the numbers in front of the filenames (not the 3 in the extension).
I first tried to grep only the files with a leading number, using find -exec or xargs, but even at this first step I had no success. After getting the grep to work, I'd like to do the actual name change.
This is what I tried by now:
ls > try-expression
grep -E '^[0-9]+' try-expression
and with the above I got the right result. Then I tried the next step:
ls | xargs -0 grep -E '^[0-9]+'
ls | xargs -d '\n' grep -E '^[0-9]+'
find . -name '[0-9]+' -exec grep -E '^[0-9]+' {} \;
ls | parallel bash -c "grep -E '^[0-9]+'" - {}
and similar, but I got errors like 'File name too long' or no output at all. I guess the problem is the way I'm using xargs or find, as the expressions work well in separate commands.
Thank you for your help
| Remove numbers from filenames |
Pipe sends the output of one command to the next. You are looking for the & (ampersand). This forks processes and runs them in the background. So if you ran:
WatchDog & TempControl & GPUcontrol
It should run all three simultaneously.
Also when you run sudo bash /etc/rc.local I believe that is running them in series not in parallel (it waits for each command to finish before starting the next). That would be sort of like this:
WatchDog ; TempControl ; GPUcontrol
Command Separators
; semi-colon - command1 ; command2
This will execute command2 after command1 is finished, regardless of whether or not it was successful.
& ampersand - command1 & command2
This will execute command1 in a subshell and execute command2 at the same time.
|| OR logical operator - command1 || command2
This will execute command1 and then execute command2 ONLY if command1 failed.
&& AND logical operator - command1 && command2
This will execute command1 and then execute command2 ONLY if command1 succeeded.
|
I have 3 functions, like
function WatchDog {
sleep 1 # something
}
function TempControl {
sleep 480 # something
}
function GPUcontrol {
sleep 480 # something
}And i am runing it like
WatchDog | TempControl | GPUcontrol
This script is in the rc.local file, so logically it should run automatically.
The thing is that the first function runs fine, but the second and third never start.
However, if I start it like
sudo bash /etc/rc.local
it works fine.
What is the problem?
The same thing happens if I add it to the init.d directory.
| Parallel running of functions |
Using GNU Parallel it looks like this:
parallel script1.sh {}';' script2.sh {} ::: a b c ::: d e f
It will spawn one job per CPU core.
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process whenever one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
|
I have script I'd always like to run 'x' instances in parallel.
The code looks a like that:
for A in
do
for B in
do
(script1.sh $A $B;script2.sh $A $B) &
done #B
done #A
The scripts themselves run DB queries, so they would benefit from running in parallel. The problem is:
1) 'wait' doesn't work well (it waits for all background jobs to finish before starting new ones, even if I include a thread counter), which wastes lots of time.
2) I couldn't figure out how to get parallel to do that. I only found examples where the same script gets run multiple times, but not with different parameters.
3) the alternative solution would be:
for A in
do
for B in
do
while threadcount>X
do
sleep 60
done
(script1.sh $A $B;script2.sh $A $B) &
done #B
done #A
But I didn't really figure out how to get the thread count reliably.
Some hints in the right direction would be very much welcome. I'd love to use parallel, but the thing just doesn't work as the documentation tells me.
I do
parallel echo ::: A B C ::: D E F
(from the doc) and it tells me
parallel: Input is read from the terminal. Only experts do this on purpose. Press CTRL-D to exit.
and that is just the simplest example of the man pages.
| How to run x instances of a script parallel? |
The problem is that you seem to have a disk quota set up and your user doesn't have the right to take up so much space in /some_dir. And no, the --parallel option shouldn't affect this.
As a workaround, you can split the file into smaller files, sort each of those separately and then merge them back into a single file again:
## split the file into 100M pieces named fileChunkNNNN
split -b100M file fileChunk
## Sort each of the pieces and delete the unsorted one
for f in fileChunk*; do sort "$f" > "$f".sorted && rm "$f"; done
## merge the sorted files
sort -T /some_dir/ --parallel=4 -muo file_sort.csv -k 1,3 fileChunk*.sorted
The magic is GNU sort's -m option (from info sort):
‘-m’
‘--merge’
Merge the given files by sorting them as a group. Each input file
must always be individually sorted. It always works to sort
instead of merge; merging is provided because it is faster, in the
case where it works.
That will require you to have ~180G free for a 90G file in order to store all the pieces. However, the actual sorting won't take as much space since you're only going to be sorting in 100M chunks.
|
Here is what I do right now,
sort -T /some_dir/ --parallel=4 -uo file_sort.csv -k 1,3 file_unsort.csv
The file is 90 GB, and I got this error message:
sort: close failed: /some_dir/sortmdWWn4: Disk quota exceeded
Previously, I didn't use the -T option and apparently the tmp dir is not large enough to handle this. My current dir has free space of roughly 200GB. Is it still not enough for the sorting temp file?
I don't know if the parallel option affect things or not.
| Sort large CSV files (90GB), Disk quota exceeded |
Actually, you don't have a problem with make, but with your command:
tex dummy.tex &> /dev/null;
Runs 'tex' in the background. You don't need to remove '>/dev/null', but '&' is sending 'tex' to the background.
Try this, it must be fine for you:
tex dummy.tex > /dev/null;
or run everything in the same subshell, like this:
(tex dummy.tex > /dev/null; rm *.log)
or, less sane, this:
if test 1 = 1; then tex dummy.tex > /dev/null; rm *.log; fi
PD: &> is an extension provided by some shells (including bash) to redirect both stdout and stderr to the same destination, but it's not portable; you should use '>/dev/null 2>&1' instead. (Thanks @Gilles)
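Putting that together, a portable version of the rule might look like this (my untested sketch; recipe lines must start with a tab):
dummy.pdf: dummy.tex
	tex dummy.tex > /dev/null 2>&1
	rm -f *.log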
Cheers
|
With the following Makefile, GNU make runs the two commands in parallel. Since the first one takes time to finish, rm *.log is run before the log file is created, and fails.
dummy.pdf: dummy.tex
tex dummy.tex &> /dev/null;
rm *.log
The file dummy.tex contains one line: \bye (a short, essentially empty file for TeX). Replacing tex dummy.tex by any other command shows the same behaviour. Removing &> /dev/null would of course solve the problem, but it is not a very good option in my case, since the Makefile is provided by a third party.
Is it possible to prevent GNU make from doing anything in parallel? (the flag -j 1 does not help).
EDIT: output to the terminal:
bruno@bruno-laptop:~/LaTeX/make-experiment$ make
tex dummy.tex &> /dev/null;
rm *.log
rm: cannot remove `*.log': No such file or directory
make: *** [dummy.pdf] Error 1
bruno@bruno-laptop:~/LaTeX/make-experiment$ This is TeX, Version 3.1415926 (TeX Live 2009/Debian)
(./dummy.tex )
No pages of output.
Transcript written on dummy.log. | Forcing GNU make to run commands in order |
GNU sort has a --parallel flag:
sort --parallel=8 data.tsv | uniq -c | sort --parallel=8 -n
This would use eight concurrent processes/threads to do each of the two sorting steps. The uniq -c part will still be using a single process.
As Stéphane Chazelas points out in comments, the GNU implementation of sort is already parallelised (it's using POSIX threads), so modifying the number of concurrent threads is only needed if you want it to use more or fewer threads than what you have cores.
Note that the second sort will likely get much less data than the first, due to the uniq step, so it will be much quicker.
You may also (possibly) improve sorting speed by playing around with --buffer-size=SIZE and --batch-size=NMERGE. See the sort manual.
To further speed the sorting up, make sure that sort writes its temporary files to a fast filesystem (if you have several types of storage attached). You may do this by setting the TMPDIR environment variable to the path of writable directory on such a mountpoint (or use sort -T directory).
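Putting those pieces together, an invocation might look like this (the temporary directory and buffer size are made-up examples, not recommendations from the answer):
TMPDIR=/mnt/fast-ssd/tmp sort --parallel=8 --buffer-size=2G data.tsv | uniq -c | sort --parallel=8 -n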
|
I would like to ask if there is an out-of-the-box multicore equivalent for a '| sort | uniq -c | sort -n' command?
I know that I can use the procedure below:
split -l5000000 data.tsv '_tmp';
ls -1 _tmp* | while read FILE; do sort $FILE -o $FILE & done;
sort -m _tmp* -o data.tsv.sorted
But it feels a bit overwhelming.
| multicore equivalent for '| sort | uniq -c | sort -n' command |
You could probably make it a little bit faster by running multiple find calls in parallel. For example, first get all toplevel directories and run N find calls, one for each dir. If you run the in a subshell, you can collect the output and pass it to vim or anything else:
shopt -s dotglob ## So the glob also finds hidden dirs
( for dir in $HOME/*/; do
find -L "$dir" -xtype f -name "*.tex" -exec grep -Fli and {} + &
done
) | vim -R -
Or, to be sure you only start getting output once all the finds have finished:
( for dir in $HOME/*/; do
find -L "$dir" -xtype f -name "*.tex" -exec grep -Fli and {} + &
done; wait
) | vim -R -
I ran a few tests and the speed for the above was indeed slightly faster than the single find. On average, over 10 runs, the single find call took 0.898 seconds and the subshell above running one find per dir took 0.628 seconds.
I assume the details will always depend on how many directories you have in $HOME, how many of them could contain .tex files and how many might match, so your mileage may vary.
|
I am thinking about methods to make the search faster and/or better; it principally uses fgrep or ag.
Code which searches for the word "and" case-insensitively at $HOME, and redirects a list of matches to vim:
find -L $HOME -xtype f -name "*.tex" \
-exec fgrep -l -i "and" {} + 2>/dev/null | vim -R -It is faster with ag because of parallelism and ack
find -L $HOME -xtype f -name "*.tex" \
-exec ag -l -i "and" {} + 2>/dev/null | vim -R -Statistics
Small group average statistics with fgrep and ag by time
fgrep ag terdon1 terdon2 terdon3 muru
user 0.41s 0.32s 0.14s 0.22s 0.18s 0.12s
sys 0.46s 0.44s 0.26s 0.28s 0.30s 0.32sCases terdon1 and terdon3 can be equal fast.
I get great fluctuations with those two.
Some ranking by sys time (not the best criterion!)
terdon1
terdon2
terdon3
muru
ag
fgrep
Abbreviations
terdon1 = terdon-many-find-grep
terdon2 = terdon-many-find-fgrep
terdon3 = terdon-many-find-ag (without F because not exists in ag)Other codes
muru's proposal in comments
grep -RFli "and" "$HOME" --include="*.tex" | vim -R -OS: Debian 8.5
Hardware: Asus Zenbook UX303UA
| How to make this search faster in fgrep/Ag? |
After some ruminations, I came up with an ugly workaround:
#!/bin/bash
proc1=$(mktemp)
proc2=$(mktemp)
proc3=$(mktemp)

/path/to/longprocess1 > "$proc1" &
pid1=$!
/path/to/longprocess2 > "$proc2" &
pid2=$!
/path/to/longprocess3 > "$proc3" &
pid3=$!wait "$pid1" "$pid2" "$pid3"
export var1="<("$proc1")"
export var2="<("$proc2")"
export var3="<("$proc3")"
rm -f "$proc1" "$proc2" "$proc3"As requested in a comment, here is how to make this more extensible for an arbitrarily large list:
#!/bin/bash
declare -a pids
declare -a data
declare -a output
declare -a processes

# Generate the list of processes for demonstrative purposes
processes+=("/path/to/longprocess1")
processes+=("/path/to/longprocess2")
processes+=("/path/to/longprocess3")

index=0
for process in "${processes[@]}"; do
output+=("$(mktemp)")
$process > ${output[$index]} &
pids+=("$!")
index=$((index+1))
done
wait ${pids[@]}
index=0
for process in "${processes[@]}"; do
data+="$(<"${output[index]}")"
rm -f "${output[index]}"
index=$((index+1))
done
export dataThe resultant output will be in the data array.
|
I'm trying to set several environment variables with the results from command substitution. I want to run the commands in parallel with & and wait. What I've got currently looks something like
export foo=`somecommand bar` &
export fizz=`somecommand baz` &
export rick=`somecommand morty` &
waitBut apparently when using & variable assignments don't stick. So after the wait, all those variables are unassigned.
How can I assign these variables in parallel?
UPDATE: Here's what I ended up using based off the accepted answer
declare -a data
declare -a output
declare -a processes

var_names=(
foo
fizz
rick
)

for name in "${var_names[@]}"
do
processes+=("./get_me_a_value_for $name")
done

index=0
for process in "${processes[@]}"; do
output+=("$(mktemp)")
${process} > ${output[$index]} &
index=$((index+1))
done
wait

index=0
for out in "${output[@]}"; do
val="$(<"${out}")"
rm -f "${out}"

export ${var_names[index]}="$val"

index=$((index+1))
done

unset data
unset output
unset processes | How to assign environment variables in parallel in bash |
You can use the split tool:
split -l 1000 words.txt words-
will split your words.txt file into files with no more than 1000 lines each, named
words-aa
words-ab
words-ac
...
words-ba
words-bb
...
If you omit the prefix (words- in the above example), split uses x as the default prefix.
For using the generated files with parallel you can make use of a glob:
split -l 1000 words.txt words-
parallel ./script.sh ::: words-[a-z][a-z] |
I have a words.txt with 10000 words (one to a line). I have 5,000 documents. I want to see which documents contain which of those words (with a regex pattern around the word). I have a script.sh that greps the documents and outputs hits. I want to (1) split my input file into smaller files (2) feed each of the files to script.sh as a parameter and (3) run all of this in parallel.
My attempt based on the tutorial is hitting errors
$parallel ./script.sh ::: split words.txt # ./script.sh: line 22: split: No such file or directory
My script.sh looks like this:
#!/usr/bin/env bash
line 1 while read line
line 2 do
some stuff
line 22 done < $1
I guess I could output split to a directory, loop through the files in that directory launching grep commands -- but how can I do this elegantly and concisely (using parallel)?
| split a file, pass each piece as a param to a script, run each script in parallel |
Let's say your find locates the following files:
./foo/bar
./foo/baz
./foo/quux
If you use -execdir [...]+, the effective resultant command will be:
( cd ./foo; command bar baz quux )
As opposed to (effectively) this, if you use -execdir [...] \;:
( cd ./foo; command bar )
( cd ./foo; command baz )
( cd ./foo; command quux )
The same is true for -exec rather than -execdir, but it will specify the path rather than changing the working directory. If you use -exec [...]+, the effective resultant command will be:
command ./foo/bar ./foo/baz ./foo/quux
As opposed to (effectively) this, if you use -exec [...] \;:
command ./foo/bar
command ./foo/baz
command ./foo/quux
Let's see how this behaves with files found in two directories:
$ tree
.
├── bar
│ ├── bletch
│ └── freeble
└── foo
├── bar
├── baz
└── quux
$ find . -type f -exec echo {} \;
./foo/baz
./foo/quux
./foo/bar
./bar/bletch
./bar/freeble
$ find . -type f -execdir echo {} \;
./baz
./quux
./bar
./bletch
./freeble
$ find . -type f -exec echo {} +
./foo/baz ./foo/quux ./foo/bar ./bar/bletch ./bar/freeble
$ find . -type f -execdir echo {} +
./baz ./quux ./bar
./bletch ./freeble |
I understand that find -tests -execdir <command> '{}' ';' runs command for every matching file against the test(s) specified. The command, when using -execdir, is executed in the same parent directory as the matching file (for every matching file), as {} stands for the basename of the matching file.
Now the question is: how is this done when working with multiple files all at once using + instead of ';'? If I use find -tests -execdir <command> '{}' +, all of the files are supplied as arguments to the command specified (in a manner that doesn't exceed max args). How does find carry out <command> on all of them at once?
| How does find -execdir <command> + work? |
Simply remove the ; character, so you end up with:
for i in *; do something.py $i & done
And for running N instances of your script at the same time, see man 1 parallel.
See http://www.gnu.org/software/parallel/
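For example, a sketch that keeps 8 copies running at a time (the job count is my own illustration):
parallel -j 8 something.py ::: *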
| Possible Duplicate:
Parallelizing a for loop The original code might look like this:
for i in *; do something.py $i; done
I was wondering whether I can run these jobs in parallel in the background, such as:
for i in *; do something.py $i &; done
I tried it and found that the & here won't work.
Moreover, a better way might be to have bash allow 8 jobs (or any number) to run together in a queue in the background, but I don't know how to do that...
Does anyone have ideas about this? Thanks!
| Is there a way to run process parallelly in the loop of a bash script [duplicate] |
Most software build processes use make. Make sure you invoke make with the -j argument, usually with a number about twice the number of CPUs you have, so make -j 8 would be appropriate for your case.
|
I am doing a build on a Linux machine with Ubuntu 10.04 on it. How can I really speed up my build? I have 4 CPUs and lots of RAM. I already reniced the process group to -20. Is there something else I can do?
| How to speed up my build |
This is not safe.
You have not specified what the problem is that you are trying to solve. If your problem is that you want your directory to always be there but be cleaned up from time to time, I would suggest explicitly removing files older than a check file (the sleep 1 is me being paranoid):
touch regression.delete \
&& find regression \! -newer regression.delete -delete & \
&& sleep 1 \
&& run_regressionThat will have problems if you have subdirectories, you could instead write
touch regression.delete \
&& find regression -mindepth 1 -maxdepth 1 \! -newer regression.delete -exec rm -rf '{}' \; & \
&& sleep 1 \
&& run_regressionIf your problem is that you want to start your program as fast as possible, if the momentary absence of the directory is possible and it is not a mountpoint, I usually run something like
mkdir regression.new \
&& chmod --reference regression regression.new \
&& mv regression regression.delete \
&& mv regression.new regression \
&& rm -rf regression.delete & \
run_regression
That should allow you to start run_regression almost instantly.
Replying to your edit (and editing myself following research in another answer), wildcards must be expanded before the rm command is launched, but the crux of your problem is to know whether the expansion is done after the shell forks. POSIX spec of asynchronous execution does not explicitly specify one way or another as far as I can see, and section 2.1 certainly implies that expansion is a distinct operation and prior to actual fork/exec of the command, but testing (by @adonis, replicated by me using bash 4.3.42(1)) shows that bash takes the most efficient way: if the wildcard expansion takes time then modifications executed by the following command can well influence that expansion. Your original idea therefore risks deleting files you don't want to delete.
I looked at bash source, and execute_cmd.c explicitly states that the fork is done before word expansion:
3922 | /* If we're in a pipeline or run in the background, set DOFORK so we
3923 | make the child early, before word expansion. This keeps assignment
3924 | statements from affecting the parent shell's environment when they
3925 | should not. */ |
Sometimes I need to remove all the contents of a directory and create new files there. Can I do something like this and expect all new files to remain intact:
% rm -rf regression/* & ( sleep 10 ; run_regression )
where run_regression timestamps its output files so that they would have unique names and places them in regression?
My thinking is that the shell would resolve regression/* into an explicit list of pre-existing filenames and then rm would be removing the files on that explicit list, but not the new files that run_regression would be creating contemporaneously with rm. Since run_regression timestamps its files there should be no name clashes.
However, I'm not quite sure how to tell when the shell is done listing the files and rm starts to work. Is the above 10 sec adequate? Can I do something like this in bash:
% rm -rf regression/* & ( wait_unil_names_are_resolved ; run_regression )
Per a comment, clarifying that I am indeed asking whether the shell guarantees that wildcards would be expanded into filenames before invoking the tool, even if it's a tool intimately known to the shell. I can imagine that the developer of both the shell and the tool may be tempted to pipeline wildcard expansion with the tool; I hope though that there are standards preventing that.
| How safe it is to output to <dir> simultaneously with rm <dir>/* |
Found that I can do this with xargs and the -P option:
josh@subdivisions:/# seq 1 10 | xargs -P 4 -I {} bash -c "dd if=/dev/zero bs=1024 count=10000000 | pv -c -N {} | dd of=/dev/null"
3: 7.35GiB 0:00:29 [ 280MiB/s] [ <=> ]
1: 7.88GiB 0:00:29 [ 312MiB/s] [ <=> ]
4: 7.83GiB 0:00:29 [ 258MiB/s] [ <=> ]
2: 6.55GiB 0:00:29 [ 238MiB/s] [ <=> ]Send output of the array to iterate over into stdin of xargs; To run all commands simultaneously, use -P 0
|
I want to run a sequence of command pipelines with pv on each one. Here's an example:
for p in 1 2 3
do
cat /dev/zero | pv -N $p | dd of=/dev/null &
done
The actual commands in the pipe don't matter (cat/dd are just an example)...
The goal being 4 concurrently running pipelines, each with their own pv output. However when I try to background the commands like this, pv stops and all I get are 4 stopped jobs. I've tried with {...|pv|...}&, bash -c "...|pv|..." & all with the same result.
How can I run multiple pv command pipelines concurrently?
| How can I run multiple pv commands in parallel? |
I want 4 processes at a time, each process should process 1 record
parallel -j4 -k --no-notice 'echo "{}"' ::: {1..10}
-k - keep sequence of output same as the order of input. Normally the output of a job will be printed as soon as the job completes
::: - argumentsThe output:
1
2
3
4
5
6
7
8
9
10 |
What I'm really trying to do is run X number of jobs, with X amount in parallel for testing an API race condition.
I've come up with this
echo {1..10} | xargs -n1 | parallel -m 'echo "{}"';which prints
7 8 9
10
4 5 6
1 2 3
but what I really want to see is (note order doesn't actually matter):
1
2
3
4
5
6
7
8
9
10and those would be processed in parallel 4 at a time (or whatever number of cpus/cores, I have, e.g. --jobs 4). For a total of 10 separate executions.
I tried this
echo {1..10} | xargs -n1 | parallel --semaphore --block 3 -m 'echo -n "{} ";but it only ever seems to print once. bonus points if your solution doesn't need xargs which seems like a hack around the idea that the default record separator is a newline, but I haven't been able to get a space to work like I want either.
10 is a reasonably small number, but lets say it's much larger, 1000
echo {1..1000} | xargs -n1 | parallel -j1000
prints
parallel: Warning: Only enough file handles to run 60 jobs in parallel.
parallel: Warning: Running 'parallel -j0 -N 60 --pipe parallel -j0' or
parallel: Warning: raising 'ulimit -n' or 'nofile' in /etc/security/limits.conf
parallel: Warning: or /proc/sys/fs/file-max may help.
I don't actually want 1000 processes, I want 4 processes at a time, each process should process 1 record, thus by the time I'm done it will have executed 1000 times.
| How can I run GNU parallel in record per job, with 1 process per core |
Make the lines as such:
(command1 file1_input; command2 file1_output) &
(command1 file2_input; command2 file2_output) &
...
And each line will execute its two commands in sequence, but each line will be forked off as a parallel background job.
If you want the second command to execute only if the first command completed successfully, then change the semicolon to &&:
(command1 file1_input && command2 file1_output) &
(command1 file2_input && command2 file2_output) &
... |
I have text file contain the following commands
command1 file1_input; command2 file1_output
command1 file2_input; command2 file2_output
command1 file3_input; command2 file3_output
command1 file4_input; command2 file4_output
command1 file5_input; command2 file5_output
command1 file6_input; command2 file6_output
command1 file7_input; command2 file7_output
................
................
................
................
................
I named this file "commands", then I gave it permission using "chmod a+x".
I want command 1 to be run, then command 2. Also I want this to be applied on all the files (file1, file2, .... etc) at once. How can I modify the content of this file to do that?
I tried the following but it didn't work:
(
command1 file1_input; command2 file1_output
command1 file2_input; command2 file2_output
command1 file3_input; command2 file3_output
command1 file4_input; command2 file4_output
command1 file5_input; command2 file5_output
command1 file6_input; command2 file6_output
command1 file7_input; command2 file7_output
................
................
................
................
................
)& | Running commands at once |
If the csv files are generated by the java command, this will fail because the mv will run before any files have been generated. Since all java processes are sent to the background, the loop will finish almost immediately, so the script continues to the mv which finds no files to move and so does nothing.
A simple solution is to use wait. From help wait (in a bash shell):
$ help wait
wait: wait [-fn] [-p var] [id ...]
Wait for job completion and return exit status.
Waits for each process identified by an ID, which may be a process ID or a
job specification, and reports its termination status. If ID is not
given, waits for all currently active child processes, and the return
status is zero. If ID is a job specification, waits for all processes
in that job's pipeline.
The relevant bit here is "If ID is not given, waits for all currently active child processes". This means that you can just add wait after your loop and that will make the script wait until all child processes are finished before continuing:
#!/bin/bash

for f in ./lqns/*.lqn
do
java -jar DiffLQN.jar "$f" &
done

wait

mv ./lqns/*.csv csvs
Alternatively, you can combine the java and mv commands:
#!/bin/bash

for f in ./lqns/*.lqn
do
java -jar DiffLQN.jar "$f" && mv ./lqns/*.csv csvs &
done
Another, possibly better, option is to use GNU parallel (which should be in the repositories of whatever operating system you are running), a tool designed for precisely this sort of thing. With it, you could do:
parallel java -jar DiffLQN.jar ::: ./lqns/*.lqn
mv ./lqns/*.csv csvs |
I am trying to write a script whose purpose is to parallelize an execution (a program that creates some files) by running the processes in the background and, when all the commands in the for loop are done, to perform an extra command (namely, move all the produced files to another folder). This is what I came up with for the moment:
#!/bin/bash

for f in ./lqns/*.lqn
do
java -jar DiffLQN.jar $f &
done
mv ./lqns/*.csv csvsThe parallelism works, but they never reach the mv line and the terminal waits and doesn't return. Why is it not returning? How do I fix this?
Maybe the problem is the & of the final for instance? Because it waits for another command but there's nothing more? Even if adding the mv line I thought would have solved it...
| Bash background execution not returning |
A quick trip to Google reveals this interesting approach: http://pebblesinthesand.wordpress.com/2008/05/22/a-srcipt-for-running-processes-in-parallel-in-bash/
for ARG in $*; do
command $ARG &
NPROC=$(($NPROC+1))
if [ "$NPROC" -ge 4 ]; then
wait
NPROC=0
fi
done |
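A more modern sketch of the same bounded queue, applied to the gzipped-file sorting task, uses xargs -P (my own untested example; four jobs at a time):
find . -name '*.gz' -print0 | xargs -0 -P 4 -I{} sh -c 'zcat "{}" | sort > "{}.txt"'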
I have 1000 gzipped files which I want to sort.
Doing this sequentially, the procedure looks pretty straightforward:
find . -name *.gz -exec zcat {} | sort > {}.txt \;Not sure that the code above works (please correct me if I did a mistake somewhere), but I hope you understand the idea.
Anyway, I'd like to parallelize ungzip/sort jobs in order to make the whole thing faster. Also, I don't want to see all 1000 processes running simultaneously. It would be great to have some bounded job queue (like BlockingQueue in Java or BlockingCollection in .NET) with configurable capacity. In this case, only, say, 10 processes will run in parallel.
Is it possible to do this in shell?
| How to create a bounded queue for shell tasks? |
The command ulimit -u shows the maximum number of processes that you can start. However, do not actually start that many processes in the background: your machine would spend time switching between processes and wouldn't get around to getting actual work done.
For CPU-bound tasks, run as many tasks as there are cores on your machine, or one more. This is if there's enough RAM to accommodate all these processes and their file cache: if the parallel processes are competing for I/O bandwidth to keep reloading their data, they'll run slower than if you run them sequentially. You can find the number of cores in /proc/cpuinfo.
The easy way to run one task per processor is to use GNU parallel.
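For a list of scripts like the one in the question, a minimal sketch (assuming GNU parallel is installed; nproc comes from coreutils) would be:
parallel -j "$(nproc)" ::: 'ksh -x script1.sh' 'ksh -x script2.sh' 'ksh -x script3.sh'
This runs one job per core and starts the next script as soon as a slot frees up.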
If the tasks are I/O-bound and use the same peripherals (e.g. they access files on the same disk), it's usually best to run them sequentially.
|
Is there any limit for parallel execution? if yes, how to find out the maximum limit?
I am creating a script which create a string of scripts concatenated by '&' and uses eval to execute them all together. Something like this:
scriptBuilder="ksh -x script1.sh & ksh -x script2.sh & ksh -x script3.sh";
eval $scriptBuilder;
Just want to make sure what is the max limit for parallel execution on the server.
| How to find max parallel execution limit? |
Summary of the comments: The machine is fast but doesn't have enough memory to run everything in parallel. In addition the problem needs to read a lot of data and the disk bandwidth is not enough, so the cpus are idle most of the time waiting for data.
Rearranging the tasks helps.
Not yet investigated compressing the data to see if it can improve the effective disk I/O bandwidth.
|
I have a bash script that takes as input three arrays with equal length: METHODS, INFILES and OUTFILES.
This script will let METHODS[i] solves problem INFILES[i] and saves the result to OUTFILES[i], for all indices i (0 <= i <= length-1).
Each element in METHODSis a string of the form:
$HOME/program/solver -a <method>
where solver is a program that can be called as follows:
$HOME/program/solver -a <method> -m <input file> -o <output file> --timeout <timeout in seconds>
The script solves all the problems in parallel and set the runtime limit for each instance to 1 hour (some methods can solve some problems very quickly though), as follows:
#!/bin/bash
source METHODS
source INFILES
source OUTFILES

start=`date +%s`

## Solve in PARALLEL
for index in ${!OUTFILES[*]}; do
(alg=${METHODS[$index]}
infile=${INFILES[$index]}
outfile=${OUTFILES[$index]}

${!alg} -m $infile -o $outfile --timeout 3600) &
done
wait

end=`date +%s`

runtime=$((end-start))
echo "Total runtime = $runtime (s)"
echo "Total number of processes = ${#OUTFILES[@]}"In the above I have length = 619. I submitted this bash to a cluster with 70 available processors, which should take at maximum 9 hours to finish all the tasks. This is not the case in reality, however. When using the top command to investigate, I found that only two or three processes are running (state = R) while all the others are sleeping (state = D).
What am I doing wrong please?
Furthermore, I have learnt that GNU parallel would be much better for running parallel jobs. How can I use it for the above task?
Thank you very much for your help!
Update: My first try with GNU parallel:
The idea is to write all the commands to a file and then use GNU parallel to execute them:
#!/bin/bash
source METHODS
source INFILES
source OUTFILES

start=`date +%s`

## Write to file
firstline=true
for index in ${!OUTFILES[*]}; do
(alg=${METHODS[$index]}
infile=${INFILES[$index]}
outfile=${OUTFILES[$index]}
if [ "$firstline" = true ] ; then
echo "${!alg} -m $infile -o $outfile --timeout 3600" > commands.txt
firstline=false
else
echo "${!alg} -m $infile -o $outfile --timeout 3600" >> commands.txt
fi
done

## Solve in PARALLEL
time parallel :::: commands.txt

end=`date +%s`

runtime=$((end-start))
echo "Total runtime = $runtime (s)"
echo "Total number of processes = ${#OUTFILES[@]}"What do you think?
Update 2: I'm using GNU parallel and having the same problem. Here's the output of top:
top - 02:05:25 up 178 days, 8:16, 2 users, load average: 62.59, 59.90, 53.29
Tasks: 596 total, 7 running, 589 sleeping, 0 stopped, 0 zombie
Cpu(s): 12.9%us, 0.9%sy, 0.0%ni, 63.3%id, 22.9%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 264139632k total, 260564864k used, 3574768k free, 4564k buffers
Swap: 268420092k total, 80593460k used, 187826632k free, 53392k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28542 khue 20 0 7012m 5.6g 1816 R 100 2.2 12:50.22 opengm_min_sum
28553 khue 20 0 11.6g 11g 1668 R 100 4.4 17:37.37 opengm_min_sum
28544 khue 20 0 13.6g 8.6g 2004 R 100 3.4 12:41.67 opengm_min_sum
28549 khue 20 0 13.6g 8.7g 2000 R 100 3.5 2:54.36 opengm_min_sum
28551 khue 20 0 11.6g 11g 1668 R 100 4.4 19:48.36 opengm_min_sum
28528 khue 20 0 6934m 4.9g 1732 R 29 1.9 1:01.13 opengm_min_sum
28563 khue 20 0 7722m 6.7g 1680 D 2 2.7 0:56.74 opengm_min_sum
28566 khue 20 0 8764m 7.9g 1680 D 2 3.1 1:00.13 opengm_min_sum
28530 khue 20 0 5686m 4.8g 1732 D 1 1.9 0:56.23 opengm_min_sum
28534 khue 20 0 5776m 4.6g 1744 D 1 1.8 0:53.46 opengm_min_sum
28539 khue 20 0 6742m 5.0g 1732 D 1 2.0 0:58.95 opengm_min_sum
28548 khue 20 0 5776m 4.7g 1744 D 1 1.9 0:55.67 opengm_min_sum
28559 khue 20 0 8258m 7.1g 1680 D 1 2.8 0:57.90 opengm_min_sum
28564 khue 20 0 10.6g 10g 1680 D 1 4.0 1:08.75 opengm_min_sum
28529 khue 20 0 5686m 4.4g 1732 D 1 1.7 1:05.55 opengm_min_sum
28531 khue 20 0 4338m 3.6g 1724 D 1 1.4 0:57.72 opengm_min_sum
28533 khue 20 0 6064m 5.2g 1744 D 1 2.1 1:05.19 opengm_min_sum
(opengm_min_sum is the solver above)
I guess that some processes consume so much resource that the others do not have anything left and enter the D state?
| BASH: parallel run |
What about not using wait at all, in the while loop?
while [ "$SGE_TASK_ID" -le "$J" ]; do # grep count of matlab processes out of list of user processes
n = $(ps ux | grep -c "matlab") ## if [ "$n" -le "$N" ]; then
if [ "$n" -eq "$N" ]; then
# sleep 1 sec if already max processes started
sleep 1
## wait -n # as soon as one task is done, refill it with another
## n=$(( n - 1 ))
else
# start another process
printf 'Task ID is %d\n' "$SGE_TASK_ID" /share/.../matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID=$SGE_TASK_ID; f; exit" & SGE_TASK_ID=$(( SGE_TASK_ID + 1 )) fi
## n=$(( n + 1 ))
doneThe string to grep for may of course have to differ, depending of what you have running (e.g. give f.m some more special name, and grep for that.)
|
I would like to solve the following issue about submitting a job that has been parallelised to a specific node.Let me start with explaining the structure of my problem
I have two very simple Matlab scripts
1) main.m
clear
rng default
P=2;
grid=randn(4,3);
jobs=1;
2) f.m
sgetasknum_grid=grid(jobs*(str2double(getenv('SGE_TASK_ID'))-1)+1: str2double(getenv('SGE_TASK_ID'))*jobs,:); %jobsx3

result=sgetasknum_grid+1;

filename = sprintf('result.%d.mat', ID);
save(filename, 'result')

exit
What I want to do is: Run main.m;
then, run f.m 4 times, allowing for parallel execution of 2 tasks at each time
Everything should be executed on node A
Here's how I implement the steps above
1) I save main.m and f.m into a folder named My_folder
2) I create the script td.sh as below and save it into the folder My_folder
#!/bin/bash -l
#$ -S /bin/bash
#$ -l h_vmem=5G
#$ -l tmem=5G
#$ -l h_rt=480:0:0
#$ -cwd
#$ -j y

#$ -N trydate
hostname

J=4 #number tasks

N=2 #number tasks executed in parallel

export SGE_TASK_ID

SGE_TASK_ID=1
n=0
while [ "$SGE_TASK_ID" -le "$J" ]; do
if [ "$n" -eq "$N" ]; then
wait -n # as soon as one task is done, refill it with another
n=$(( n - 1 ))
fi

    printf 'Task ID is %d\n' "$SGE_TASK_ID"

    /share/.../matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID=$SGE_TASK_ID; f; exit" &

    SGE_TASK_ID=$(( SGE_TASK_ID + 1 ))
n=$(( n + 1 ))
done

wait
3) I go into the terminal and type ssh username@A, then cd /.../My_folder, then bash td.sh
Problem: I get the following error
td.sh: line 26: wait: -n: invalid option
wait: usage: wait [id]
As noticed in the comments below, the issue is that the version of bash on @A is old (the -n option was added to the wait builtin in 4.3) and the sysadmin can't update it. The latest version possible is bash 4.1.
Thus, could you suggest a way to replace wait -n?
| Alternative to wait -n (because server has old version of bash) |
They will all run at the same time
The load will be distributed by your OS to be worked on as many cores as there are available. The time might not be proportional to the number of threads. Here is a silly example why. Assume you have one job that you want to do three times, and it takes the same amount of time every time (1 unit of time). You have two cores. Assume there is nothing else running.Case one: you only have one thread. In this case, the thread runs on one core, and the whole thing takes 3 units of time to complete. Total time: 3
Case two: You have two threads. In one unit of time, the job gets done twice (once per core). You then have to wait a whole unit of time for the third iteration to complete. Total time: 2
Case 3: You have 3 threads. Your OS will try and make everything fair, and so will split up the time evenly between the three processes. By then end of unit 1, NONE of them will be completed. By unit 2 they will all be done. (see case above). Total time: 2Starting more threads will not really hurt your performance much (the cost of starting a thread is less than 1MB) but it might not help either.
The only way to know what would be faster to do is it test it, but use the following rules as a guide: Use at least the same number of threads as you have cores. Additionally, if process has lots and lots of memory access all over the place it may actually be faster to have more threads than cores (memory access is very slow compared to executing other instructions, and the OS will fill the time with real execution of something else that does not have to wait).
|
Say I have a 4-core workstation, what would Linux (Ubuntu) do if I execute
mpirun -np 9 XXX
Will all 9 run immediately together, or will they run 4 at a time?
I suppose that using 9 is not good, because the remainder of 1 will confuse the computer (I don't know whether it will be confused at all, or whether the "head" of the computer will decide which core among the 4 cores will be used, or whether it will be randomly picked). Who decides which core to use?
If I feel my CPU is not bad, my RAM is okay and large enough, and my case is not very big: is it a good idea, in order to fully use my CPU and RAM, to do mpirun -np 8 XXX, or even mpirun -np 12 XXX? | `mpirun -np N`: what if `N` is larger than my physical cores?
You could try making a Beowulf Cluster. You set up one host as a master and the rest as nodes. It's been done in the past by others, including NASA as the wikipedia entry on Beowulf Cluster says.
Building your own cluster computer farm might cost more in power than you'd gain in compute resources.
I have not tried this myself, but I've always wanted to give it a shot.
|
I have a few Linux machines laying around and I wanted to make a cluster computer network. There will be 1 monitor that would be for the controller. The controller would execute a script that would perform a task and split the load onto the computers.
Lets say I have 4 computers that are all connected to the controller. I wanted to compile a program using GCC but I wanted to split the work 3 ways. How would I do that?
Any help would be appreciated.
| Remotely execute commands but still have control of the host |
Command pipelines already run in parallel. With the command:
command1 | command2Both command1 and command2 are started. If command2 is scheduled and the pipe is empty, it blocks waiting to read. If command1 tries to write to the pipe and its full, command1 blocks until there's room to write. Otherwise, both command1 and command2 execute in parallel, writing to and reading from the pipe.
|
Normally, pipelines in Unix are used to connect two commands and use the output of the first command as the input of the second command. However, I recently came up with the idea (which may not be new, but I didn't find much by Googling) of using a pipeline to run several commands in parallel, like this:
command1 | command2
This will invoke command1 and command2 in parallel even if command2 does not read from standard input and command1 does not write to standard output. A minimal example to illustrate this is (please run it in an interactive shell)
ls . -R 1>&2|ls . -R
My question is, are there any downsides to using a pipeline to parallelize the execution of two commands in this way? Is there anything that I have missed in this idea?
Thank you very much in advance.
| Pipeline as parallel command |
If you install the GNU Parallel tool you can make pretty easy work of what you're trying to accomplish:
$ find . -maxdepth 1 -type f -note -iname "*.gpg" | sort | \
parallel --gnu -j 8 --workdir $PWD ' \
echo "Encrypting {}..."; \
gpg --trust-model always \
--recipient "[emailprotected]" --output "{}.gpg" \
--encrypt "{}" && rm "{}" \
'
details
The above is taking the output of find and running it through to parallel, and running 8 at a time. Everywhere there's an occurrence of {} the filenames that are being passed through from find will replace the {} in those spots.
References
Running shell script in parallel
I'm running something like this:
find . -maxdepth 1 -type f -note -iname "*.gpg" | sort | while read file ; do
echo "Encrypting $file..."
gpg --trust-model always --recipient "[emailprotected]" --output "$file.gpg" \
--encrypt "$file" && rm "$file"
done
This runs great, but it seems that GPG is not optimized to use multiple cores for an encryption operation. The files I'm encrypting are about 2GB in size and I have quite a few of them. I'd like to be able to run X jobs in parallel to encrypt the files and then remove them. How can I do this, setting a limit to, say, 8 jobs at a time?
| Running up to X commands in parallel |
It sounds as if you simply should write a small processing script and use GNU Parallel for parallel processing:
http://www.gnu.org/software/parallel/man.html#example__gnu_parallel_as_dir_processor
So something like this:
inotifywait -q -m -r -e CLOSE_WRITE --format %w%f my_dir |
parallel 'mv {} /tmp/processing/{/};myscript.sh /tmp/processing/{/} other_inputs; rm /tmp/processing/{/}'Watch the intro videos to learn more: http://pi.dk/1
Edit:
It is required that myscript.sh can deal with 0 length files (e.g. ignore them).
If you can avoid the touch you can even do:
inotifywait -q -m -r -e CLOSE_WRITE --format %w%f my_dir |
parallel myscript.sh {} other_inputsInstalling GNU Parallel is as easy as:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel | Possible Duplicate:
How to run a command when a directory's contents are updated? I'm trying to write a simple etl process that would look for files in a directory each minute, and if so, load them onto a remote system (via a script) and then delete them.
Things that complicate this: the loading may take more than a minute.
To get around that, I figured I could move all files into a temporary processing directory, act on them there, and then delete them from there. Also, in my attempt to get better at command line scripting, I'm trying for a more elegant solution. I started out by writing a simple script to accomplish my task, shown below:
#!/bin/bash

for i in ${find /home/me/input_files/ -name "*.xml"}; do
FILE=$i;
done;
BASENAME=`basename $FILE`
mv $FILE /tmp/processing/$BASENAME
myscript.sh /tmp/processing/$BASENAME other_inputs
rm /tmp/processing/$BASENAME
This script removes the file from the processing directory almost immediately (which stops the duplicate processing problem), cleans up after itself at the end, and allows the file to be processed in between.
However, this is U/Linux after all. I feel like I should be able to accomplish all this in a single line by piping and moving things around instead of a bulky script to maintain.
Also, using parallel to concurrent process this would be a plus.
Addendum: some sort of FIFO queue might be the answer to this as well. Or maybe some other sort of directory watcher instead of a cron. I'm open for all suggestions that are more elegant than my little script. Only issue is the files in the "input directory" are touched moments before they are actually written to, so some sort of ! -size -0 would be needed to only handle real files.
| process files in a directory as they appear [duplicate] |
GNU Parallel does that and more (using ssh).
It can even deal with mixed speeds of machines, as it simply has a queue of jobs that are started on the list of machines (e.g. one per CPU core). When one job finishes, another one is started.
So it does not divide the jobs into clusters before starting, but does it dynamically.
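A minimal sketch of the syntax (hostnames and the command are placeholders; the number before the slash caps how many jobs run on that host, and : means the local machine):
parallel -S 8/server1,4/server2,: mytask {} ::: task*.input
GNU Parallel keeps all the listed job slots busy and hands the next task to whichever slot frees up first, so faster machines automatically end up doing more of the work.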
Watch the intro videos to learn more: http://pi.dk/1
|
parallel from moreutils is a great tool for, among other things, distributing m independent tasks evenly over n CPUs. Does anybody know of a tool that accomplishes the same thing for multiple machines? Such a tool of course wouldn't have to know about the concept of multiple machines or networking or anyhting like that -- I'm just talking about distributing m tasks into N clusters, where in cluster i N_i tasks are run in parallel.
Today I use my own BASH scripts to accomplish the same thing, but a more streamlined and clean tool would be great. Does anyobdy know of any?
| Multi-machine tool in the spirit of moreutils' `parallel`? |
A bug in GNU Parallel means that it only starts processing after having read one job for each jobslot. After that it reads one job at a time.
In older versions the output will also be delayed by the number of jobslots. Newer versions only delay output by a single job.
So if you sent one job per second to parallel -j10 it would read 10 jobs before starting them. Older versions you would then have to wait an additional 10 seconds before seeing the output from job 3.
A workaround for the limitation at start is to feed one dummy job per jobslot to parallel:
true >jobqueue; tail -n+0 -f jobqueue | parallel &
seq $(parallel --number-of-threads) | parallel -N0 echo true >> jobqueue
# now add the real jobs to jobqueue
A workaround for the output delay is to use --linebuffer (but this will mix full lines from different jobs).
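To see the effect of --linebuffer (a sketch):
seq 5 | parallel --linebuffer -j5 'echo start {}; sleep 1; echo done {}'
The start lines show up immediately instead of being held back until each job has finished.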
|
GNU Parallel, without any command line options, allows you to easily parallelize a command whose last argument is determined by a line of STDIN:
$ seq 3 | parallel echo
2
1
3
Note that parallel does not wait for EOF on STDIN before it begins executing jobs — running yes | parallel echo will begin printing infinitely many copies of y right away.
This behavior appears to change, however, if STDIN is relatively short:
$ { yes | ghead -n5; sleep 10; } | parallel echo
In this case, no output will be returned before sleep 10 completes.
This is just an illustration — in reality I'm attempting to read from a series of continually generated FIFO pipes where the FIFO-generating process will not continue until the existing pipes start to be consumed. For example, my command will produce a STDOUT stream like:
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.PFcggGR55i
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.UCpTBzI3J6
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.r2EmSLW0t9
/var/folders/2b/1g_lwstd5770s29xrzt0bw1m0000gn/T/tmp.5TRNeeZLmt
Manually cat-ing each of these files one at a time in a new terminal causes the FIFO-generating process to complete successfully. However, running printfifos | parallel cat does not work. Instead, parallel seems to block forever waiting for input on STDIN — if I modify the pipeline to printfifos | head -n4 | parallel cat, the deadlock disappears and the first four pipes are printed successfully.
This behavior seems to be connected to the --jobs|-j parameter. Whereas { yes | ghead -n5; sleep 10; } | parallel cat produces no output for 10 seconds, adding a -j1 option yields four lines of y almost immediately followed by a 10 second wait for the final y. Unfortunately this does not solve my problem — I need every argument to be processed before parallel can get EOF from reading STDIN. Is there any way to achieve this?
| Make GNU Parallel not delay before executing arguments from STDIN |
My naive method so far has to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned.
This is almost exactly what GNU Parallel does.
parallel do_stuff ::: job1 job2 job3 ... jobn > output
There are some added benefits:
The temporary files are automatically removed, so there is no cleanup - even if you kill GNU Parallel.
You only need temporary space for the currently running jobs: The temporary space for completed jobs is freed when the job is done.
If you want output in the same order as the input use --keep-order.
If you want output mixed line-by-line from the different jobs, use --line-buffer.
GNU Parallel has quite a few features for splitting up a task into smaller jobs. Maybe you can even use one of those to generate the smaller jobs?
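For instance, a sketch of the --pipe approach (the block size and do_stuff are placeholders):
cat bigfile | parallel --pipe --block 10M --keep-order do_stuff > output
This chops stdin into roughly 10 MB chunks, runs do_stuff on each chunk in parallel, and writes the results out in the original order.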
|
I've run into a couple of similar situations where I can break a single-core bound task up into multiple parts and run each part as separate job in bash to parallelize it, but I struggle with collating the returned data back to a single data stream. My naive method so far has to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned. Is there a better way to handle these kind of multiple-in-one-out situations using bash/shell tools?
| How to merge data from multiple background jobs back to a single data stream in bash |
You haven't found any reliable answer because there is no widely applicable reliable answer. The performance gain from multiple cores is hard to predict except for well-defined tasks, and even then it can depend on many other factors such as available memory (no benefit from multiple cores if they're all waiting for some file to load).
For ordinary desktop use, you can generally gain responsiveness from having two cores: one to run the application from which you want a response, one to run GUI effects. The cores are idle most of the time but both do work when you start some task. Beyond two cores, the benefits tend to trail off. And even with two cores, a very lean GUI can mean that you don't get any benefit.
Parallelizing a single task is difficult; except for some very simple cases (for which the technical term is “embarrassingly parallel”) it requires significant effort from the programmer and is often plain not doable. Displaying a web page, for example, is a matter of positioning elements one by one and executing Javascript code, and it all needs to be done in order, so doesn't benefit from multiple cores. The benefit of multiple cores for web browsing is when you want to do something else while a complex web page is being rendered.
Some graphics software has parallel routines for large tasks (e.g. some transformations of large images). You will gain from multiple cores there, but again only for those tasks that have been written to take advantage of multiple processors. If you're going to run image transformations as background tasks, you'll definitely benefit from at least two cores (one for the task, one for interactive use) and possibly from more if the task itself takes advantage of multiple cores.
More than four cores is unlikely to give any benefit for a machine that doesn't do fancy things such as multiple simultaneous users, large compilations, large numerical calculations, etc. Two cores is likely to have some benefit over one for most tasks. Between two and four, it isn't clear-cut. A faster dual core will give more consistent benefits than going from dual to quad-core, but a faster clock speed has downsides as well, especially for a laptop, since it means the processor will use more power and require louder cooling.
| I plan to get a new notebook and try to find out if a quad-core processor gives me any advantages over a regular dual-core machine. I use common Linux Distributions (Ubuntu, Arch etc.) and mostly Graphics Software: Scribus, Inkscape, Gimp. I want to use this new processor for a few years.
I've done a lot of research but could not find any reliable and up-to-date answers. So:
The latest kernel makes use of multi core processors. But does that give me any noticeable advantages on a daily basis? I'm talking about regular multi-tasking with common Linux applications.
| Linux & Linux-Software: Advantages of a multi core processor [closed] |
GNU parallel has several options to limit resource usage when starting jobs in parallel.
The basic usage for two nested loops would be
parallel python sim -a {1} -p {2} ::: 1 2 3 4 5 ::: 17.76 20.01 21.510 23.76If you want to launch at most 5 jobs at the same time, e.g., you could say
parallel -j5 python <etc.>Alternatively, you can use the --memfree option to start new jobs only when enough memory is free, e.g. at least 256 MByte
parallel --memfree 256M python <etc.>Note that the last option will kill the most recently started job if the memory falls below 50% of the "reserve" value stated (but it will be re-qeued for catch-up automatically).
|
I want to run some simulations using a Python tool that I had made. The catch is that I would have to call it multiple times with different parameters/arguments and everything.
For now, I am using multiple for loops for the task, like:
for simSeed in 1 2 3 4 5
do
for launchPower in 17.76 20.01 21.510 23.76
do
python sim -a $simSeed -p $launchPower
done
done
In order for the simulations to run simultaneously, I append a & at the end of the line where I call the simulator.
python sim -a $simSeed -p $launchPower &
Using this method I am able to run multiple such seeds. However, since my computer has limited memory, I want to re-write the above script so that it launches the inner for loop in parallel and the outer for loop sequentially.
As an example, for simSeed = 1, I want 5 different processes to run with launchPower equal to 17.76 20.01 21.510 23.76. As soon as this part is complete, I want the script to run for simSeed = 2 and again 5 different parallel processes with launchPower equal to 17.76 20.01 21.510 23.76.
How can I achieve this task?
TLDR:
I want the outer loop to run sequentially and inner loop to run parallelly such that when the last parallel process of the inner loop finishes, the outer loop moves to the next iteration.
| Bash consecutive and parallel loops/commands |
What you see in C is using threads, so the process usage is the total of all its threads. If there are 4 threads with 100% CPU usage each, the process will show as 400%
What you see in python is almost certainly parallelism via the multiprocess model. That's a model meant to overcome Python's threading limitations. Python can only run itself one thread at a time (see the Python Global Interpreter Lock - GIL). In order to do better than that one can use the multiprocess module which ends up creating processes instead of threads, which in turn show in ps as multiple processes, which then can use up to 100% CPU each since they are (each) single-threaded.
I bet that if you run ps -afeT you'll see the threads of the C program but no additional threads for the python program.
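One quick way to check (a sketch; replace python with whatever the interpreter process is called on your cluster):
ps -o pid,nlwp,pcpu,comm -C python
The NLWP column is the number of threads per process: a multithreaded C program shows one PID with NLWP well above 1, while a multiprocessing Python job shows several PIDs that each have NLWP at (or near) 1.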
|
I work on a shared cluster. I've seen people run parallelized c code on this cluster which, when I use top to see what processes are running, are shown to be using (for example) 400% of the CPU, since they are using four processors for a single instance of their code.
Now someone is running (what I hear to be) a parallelized Python code. However, instead of top showing the Python code to be using 400% of the CPU, it is being shown as four different processes, each using their own processor (at 100%).
I am wondering, does Python (when parallelized) show with top as running as many different processes (as opposed to C) or is this Python code not actually running in parallel?
I don't know if Stack Exchange would be a better place for this question. Since I am using top I figured this place would be better. Let me know if I should move it.
| How does a parallelized Python program look with top command? |
Simply, the echo command triggers one write syscall which is atomic.
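You can watch this happen with strace (a sketch; echo is a shell builtin here, so the write comes from the shell process itself):
strace -e trace=write sh -c 'echo "+++++++++++++++" >> log.txt'
The trace shows a single write(1, "+++++++++++++++\n", 16) = 16, i.e. the whole line, including the newline, goes out in one syscall.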
Note that write doesn’t guarantee to write all bytes it is given, but in this case (few data), it does.
Then in theory write(fd, buffer, n) can write fewer than n bytes and return the actual number of bytes written, to enable the program to continue writing from buffer+n.
Such a thing may happen with a pipe since the pipe doesn’t have an infinite capacity.
From write(2):
If the file was open(2)ed with O_APPEND, the file offset is first set to the end of the file before writing. The adjustment of the file offset and the write operation are performed as an atomic step.
According to POSIX.1, if count is greater than SSIZE_MAX, the result is implementation-defined; see NOTES for the upper limit on Linux.
I have two scripts running parallelly and they are echoing to the same file. One script is echoing +++++++++++++++ to the file while the other script is echoing =========== to the file.
Below is the first script
#!/bin/bash
while [ 1==1 ];
do
echo "+++++++++++++++" >> log.txt
# commands
done
Below is the second script
#!/bin/bash
while [ 1==1 ];
do
echo "===========" >> log.txt
# commands
done
The log.txt file has around 1400000 lines printed and not a single line has jumbled case like ++== or something like that?
Does Linux prevent this kind of jumbling from happening and if it does then how and why ?
| Race condition not seen while two scripts write to a same file |
In these cases I'd rather open another terminal. What is the reason that you don't want to do that?
Downside of running qsub, is that you have to write a tiny script file for a trivial operation, which takes you some time. I don't know how many other users are working on the same network, but the purpose is meant as a scheduler for jobs of several users on the cluster. Especially if there are no free cores available, your simple job will end up in the queue, taking you more time.
Did you consider screen as an alternative? With screen you can start and pause a different session in the same terminal. The workflow would be like this:
working in the terminal
$ screen
your tiny jobs
Detach screen (Ctrl-a Ctrl-d)
working in the terminal
$ screen -r (to resume)
Check status of this tiny job
$ exit
And you're back |
When I'm running a task on a computer network? I've just started to realize that if I qsub any task, then the task won't hog up my terminal, and I can do other things on the same terminal (which is quite useful even if the task only takes a single minute to finish).
And then I run qstat to see which taks have finished and which ones haven't.
http://pubs.opengroup.org/onlinepubs/009604599/utilities/qsub.html is a good explanation of qsub.
| Are there any disadvantages against using qsub to run tasks all the time? |
Using GNU Parallel:
parallel --timeout 5 -j 8 -N0 ../sage ./loader.sage.py ::: {1..4000} 2>/dev/nullThis will execute ../sage ./loader.sage.py 4000 times, 8 jobs at a time, each with a timeout of 5 seconds
From the parallel man page:
--timeout duration
Time out for command. If the command runs for longer than duration seconds it will get killed as per --termseq.Note: This command replaces your entire loop.
|
I have a simple Bash script I am running to parallelize and automate the execution of a program written in Sage MATH:
#!/bin/bash
for i in {1..500}; do
echo Spinning up threads...
echo Round $i
for j in {1..8}; do
../sage ./loader.sage.py &
done
wait
done 2>/dev/null
I would like to add a timeout so that on each thread, after 5 seconds,
../sage ./loader.sage.py &
will timeout, kill the thread, and continue execution. How would I go about doing this? Apologies in advance if this is a noob question, I can't seem to get the syntax right.
I am running this in a Ubuntu WSL. The program I am calling is written in Python and run through the Sage MATH interpreter which liaises to Singular.
| Adding a timeout to a parallelized call in Bash |
Crudely,
#!/bin/sh

set -- *.md
while [ $# -gt 0 ]
do
pandoc "${1} -f markdown -o ${1%.md}.pdf" &
shift
if [ $# -gt 0 ]
then
pandoc "${1} -f markdown -o ${1%.md}.pdf" &
shift
fi
wait
done
With xargs:
find . -type f -name '*.md' -print0 | xargs -0 -n2 -P2 -I{} pandoc {} -f markdown -o {}.pdf
You would have to rename them afterwards, as the above would result in files named a.md.pdf, b.md.pdf, etc. Note that to be safe with filenames, we're asking find to print null-separated filenames and asking xargs to read in null-separated input. Rename the files with:
for f in ./*.md.pdf; do mv -- "${f}" "${f%.md.pdf}.pdf"; done |
files:
$ ls
a.md
b.md
c.md
d.md
e.md
Command: pandoc file.md -f markdown file.pdf
How would I parallely process two pandoc instances simulatneously? Possibly with xargs or parallel.
It would work like
Iteration/ cmd 1 / cmd 2
1 / pandoc a.md -f markdown a.pdf / pandoc b.md -f markdown b.pdf
2 / pandoc c.md -f markdown c.pdf / pandoc d.md -f markdown d.pdf
3 / pandoc e.md -f markdown e.pdf / pandoc f.md -f markdown f.pdf
4 / pandoc g.md -f markdown g.pdf / pandoc h.md -f markdown h.pdf
The files are randomly named.
| How to process multiple files with pandoc? |
Simply with GNU parallel:
parallel ::: 'pip install pipenv && pipenv install --dev' \
'npm install -g grunt-cli && npm install' |
1. Summary
I don't understand how I can combine parallel and sequential commands in Linux.
2. Expected behavior
Pseudocode:
pip install pipenv sequential pipenv install --dev
parallel task
npm install -g grunt-cli sequential npm install
Windows batch working equivalent:
start cmd /C "pip install pipenv & pipenv install --dev"
start cmd /C "npm install -g grunt-cli & npm install"3. Not helpedI don't think, that & and wait can solve this problem, see rsaw comment.
I read, that GNU parallel — is better way for parallel tasks, but I can't find, which syntax I need to use in GNU parallel, that solve this task.
I try parallelshell:
parallelshell "pip install pipenv && pipenv install --dev" "npm install -g grunt-cli && npm install"Full .sh file:
git clone --depth 1 https://github.com/Kristinita/KristinitaPelican
wait
cd KristinitaPelican
wait
parallelshell "pip install pipenv && pipenv install --dev" "npm install -g grunt-cli && npm install"But at first pipenv install --dev command run for me, then npm install. It sequential, not parallel. | Combine parallel and sequential commands |
GNU Parallel is built for exactly this:
doit() {
gitFolder="$1"
parent=$(dirname $gitFolder);
Status=$(git -C $parent status)
if [[ $Status == *Changes* ]]; then
echo $parent;
git -C $parent status --porcelain
echo ""
elif [[ $Status == *ahead* ]]; then
echo "Push $parent";
echo
elif [[ $Status == *diverged* ]]; then
echo "Sync $parent";
echo
fi
}
export -f doit
find / -maxdepth 3 -not -path / -path '/[[:upper:]]*' -type d -name .git -not -path "*/Trash/*" -not -path "*/Temp/*" -not -path "*/opt/*" -print 2>/dev/null |
parallel -j0 doit
Grouping of output is done by default.
You can even let GNU Parallel compute $parent:
doit() {
parent="$1"
Status=$(git -C $parent status)
:
}
... | parallel -j0 doit {//} |
I have this code to check the status of all of my git folders.
find / -maxdepth 3 -not -path / -path '/[[:upper:]]*' -type d -name .git -not -path "*/Trash/*" -not -path "*/Temp/*" -not -path "*/opt/*" -print 2>/dev/null |
{
while read gitFolder; do
(
parent=$(dirname $gitFolder);
Status=$(git -C $parent status)
if [[ $Status == *Changes* ]]; then
echo $parent;
git -C $parent status --porcelain
echo ""
elif [[ $Status == *ahead* ]]; then
echo "Push $parent";
echo
elif [[ $Status == *diverged* ]]; then
echo "Sync $parent";
echo
fi
) &
done
wait
}
When I run it sequentially, I get a nice readable print in the terminal. But the speed gets slower. When I run it in parallel (using &), I get a very good speed, but the output becomes a total mess.
Is it possible to lock the output for each inner shell somehow and print each inner shell's standard output in a block?
| Is it possible to bundle subshell output? |
One would assume that "60 seconds" (and even "5 minutes") is just a good estimate, and that there is a risk that the first batch is still in progress when the second batch is started. If you want to separate the batches (and if there is no problem aside from the log-files in an occasional overlap), a better approach would be to make a batch number as part of the in-progress filenaming convention.
Something like this:
[[ -s $local_dir/batch ]] || echo 0 > $local_dir/batch
batch=$(cat $local_dir/batch)
expr $batch + 1 >$local_dir/batch
before the for-loop, and then at the start of the loop, check that your pattern matches an actual file
[[ -f "$file" ]] || continueand use the batch number in the filename:
mv $file_location $local_dir/in_progress$batch.log
and so forth. That reduces the risk of collision.
|
I have the following in a shell script:
for file in $local_dir/myfile.log.*;
do
file_name=$(basename $file);
server_name=$(echo $file_name | cut -f 3 -d '.');
file_location=$(echo $file);

mv $file_location $local_dir/in_progress1.log

mysql -hxxx -P3306 -uxxx -pxxx -e "set @server_name='${server_name}'; source ${sql_script};"

rm $local_dir/in_progress1.log
done
It basically gets all files in a directory that match the criteria and extracts a server name from the filename, before passing it across to a MySQL script for processing.
What I am wondering is if I have 10 files that take 60 seconds each to complete, and after 5 minutes I then start a second instance of the shell script:a) will the second script still see the files that havent been processed
b) will it cause problems for the first instance if it deletes files or will I be able to run them in parallel without issue?
| query r.e running for loop scripts in parallel |
In addition to sending them to the background, use the wait built in to wait for all background processes to finish before continuing.
for el in $test1_partition
do
(scp david@${SERVER_LOCATION[0]}:$dir1/pp_monthly_9800_"$el"_200003_5.data $TEST1/. || scp david@${SERVER_LOCATION[1]}:$dir2/pp_monthly_9800_"$el"_200003_5.data $TEST1/.) &
WAITPID="$WAITPID $!"
done
for sl in $test2_partition
do
(scp david@${SERVER_LOCATION[0]}:$dir1/pp_monthly_9800_"$sl"_200003_5.data $TEST2/. || scp david@${SERVER_LOCATION[1]}:$dir2/pp_monthly_9800_"$sl"_200003_5.data $TEST2/.) &
WAITPID="$WAITPID $!"
done
wait $WAITPID
echo "All files done copying." | I am running my below shell script from machineA which is copying the files machineB and machineC into machineA. If the files are not there in machineB, then it should be there in machineC.
The below shell script will copy the files into TEST1 and TEST2 directory in machineA..
#!/bin/bash
set -e

readonly TEST1=/data01/test1
readonly TEST2=/data02/test2
readonly SERVER_LOCATION=(machineB machineC)
readonly FILE_LOCATION=/data/snapshot

dir1=$(ssh -o "StrictHostKeyChecking no" david@${SERVER_LOCATION[0]} ls -dt1 "$FILE_LOCATION"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] | head -n1)
dir2=$(ssh -o "StrictHostKeyChecking no" david@${SERVER_LOCATION[1]} ls -dt1 "$FILE_LOCATION"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] | head -n1)echo $dir1
echo $dir2

if [ "$dir1" = "$dir2" ]
then
rm -rf $TEST1/*
rm -rf $TEST2/*
for el in $test1_partition
do
scp david@${SERVER_LOCATION[0]}:$dir1/pp_monthly_9800_"$el"_200003_5.data $TEST1/. || scp david@${SERVER_LOCATION[1]}:$dir2/pp_monthly_9800_"$el"_200003_5.data $TEST1/.
done
for sl in $test2_partition
do
scp david@${SERVER_LOCATION[0]}:$dir1/pp_monthly_9800_"$sl"_200003_5.data $TEST2/. || scp david@${SERVER_LOCATION[1]}:$dir2/pp_monthly_9800_"$sl"_200003_5.data $TEST2/.
done
fi
Is there a way to run the processes in parallel in the loop of a bash script?
Currently it copies the file from machineB and machineC into machineA TEST1 directory first, and if it is done, then only it will go and copy the files from machineB and machineC into machineA TEST2 directory.. Is there any way I transfer the files both in TEST1 and TEST2 directory simultaneously?
I am running Ubuntu 12.04
| How to parallelize the for loop while scp the files? [duplicate] |
In one simple word: No.
What wastes a lot of effort is switching between kernel space and user space; such switching is where the most waste is produced. There is (a lot of) work done just to get to where the real operation needs to be executed. The fewer switches needed, the more efficient an operation should be.
There are operations that are completely done in kernel space (and there is no (safe) way to bypass that). In such cases, the most time is spent in kernel space, and that is the most efficient way to execute them.
There are other operations that must be executed in user space as the kernel has no service/function that implements it. In such operations, the more time that is used in user space, the more efficient the operation is.
But someone might have implemented an efficient kernel service in user space with some not-so-efficient algorithm. That will increase the user time but the result would be less efficient. Compared with the same service in kernel space.
And some other developer might be calling the kernel to read one byte at a time (and having to switch for every byte) instead of the equivalent call to read one meg at a time (if there is an equivalent function for a block instead of a byte).
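A rough way to see that cost (a sketch; dd is only used here as a convenient syscall generator):
time dd if=/dev/zero of=/dev/null bs=1 count=1000000
time dd if=/dev/zero of=/dev/null bs=1000000 count=1
Both copy about the same amount of data, but the first issues around two million read/write syscalls and burns far more sys time, while the second needs only a couple of calls.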
And, in the end, it must be that some mix of kernel and user operations will be executed. To read a disk block, for example, the kernel should supply the function and it should be a "fire and forget" until the memory block (buffer) is filled with the result of the disk block read. To access process memory (like a program array), no kernel call should be needed.
There is no simple way to measure time efficiency.
|
In some unix shells there is the time command which prints how much time a given command takes to be executed. The output looks like
real 1m0.000s
user 10m0.000s
sys 0m0.000sIf I write a program that uses parallelization on multiple cores, the user-time can be a multiple of the real-time.
My question is, whether I can conclude, that if the user-time is very close to the real-time multiplied by the number of threads used, that the program is parallelized optimally? That is, that for example no thread has to wait for long periods of time for other threads.
| Is an optimal user-time to real-time ratio an indicator for efficient parallelization? |
Put this in your apt configuration file:
Acquire::Queue-Mode "access";
or use it on the command line like this:
apt-get -o Acquire::Queue-mode=access update
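To make the setting persistent, a common place for such a snippet is a file under /etc/apt/apt.conf.d/ (the filename here is just an example):
echo 'Acquire::Queue-Mode "access";' | sudo tee /etc/apt/apt.conf.d/99serial-download
With Queue-Mode set to access, apt opens one connection per URI type rather than one per host, so the downloads are effectively serialized and the stalled source stands out in the debug output.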
|
I have a machine where apt-get update hangs for ever with a "Waiting for headers" message, suggesting that one source is not responding. From this question I know I can do sudo apt-get -o Debug::Acquire::http=true update to identify the culprit. However, it is still complicated to find out which query is not responding, because apt-get seems to be making multiple queries in parallel.
How can I can tell apt-get update to only download one file at a time?
| How can I tell `apt-get update` to download only one file at a time? |
GNU Parallel will not do multithreading, but it will do multiprocessing, which might be enough for you:
seq 50000 | parallel my_MC_sim --iteration {}It will default to 1 process per CPU core and it will make sure the output of two parallel jobs will not be mixed.
You can even put this parallelization in the Octave script. See https://www.gnu.org/software/parallel/parallel_tutorial.html#Shebang
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straight forward way to parallelize is to run 8 jobs on each CPU:GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time:Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bashFor other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
|
I am computing Monte-Carlo simulations using GNU Octave 4.0.0 on my 4-core PC. The simulation takes almost 4 hours to compute the script for 50,000 times (specific to my problem), which is a lot of time spent for computation. I was wondering if there is a way to run Octave on multiple cores simultaneously to reduce the time of computations.
Thanks in advance.
| Run GNU Octave script on multiple cores |
Per the man page:
-U, --unlink, --delete
Delete input files after succesful compression or decompression.so you could simply run
lzop -dU -- {"$PRIMARY","$SECONDARY"}/*.lzoto delete each lzo file as soon as it's successfully decompressed.
lzop is single-threaded so if you want parallel processing you could use gnu parallel:
parallel lzop -dU -- ::: {"$PRIMARY","$SECONDARY"}/*.lzo |
So I have .lzo files in /test01/primary folder which I need to uncompress and then delete all the .lzo files. Same thing I need to do in /test02/secondary folder as well. I will have around 150.lzo files in both folders so total around 300 files.
From a command line I was running like this to uncomressed one file lzop -d file_name.lzo
What is the fastest way to uncompressed all .lzo files and then delete all .lzo files from both folders. I came up with below code.
#!/bin/bashset -eexport PRIMARY=/test01/primary
export SECONDARY=/test02/secondary

lzop -d $PRIMARY/* & lzop -d $SECONDARY/*
find $PRIMARY $SECONDARY -name '*.lzo' -deleteMay be we can decompress the .lzo files in parallel and then delete all .lzo file in both the folders simultaneously to speed up the process.
Is there a better way?
| Uncompressed .lzo files in parallel and then delete the original .lzo files |
( a_1; a_2 ) &
( b_1; b_2 ) &
( c_1; c_2 ) &
wait
This would run three background jobs and then wait for all to finish. Each of the three background jobs would run its commands one after the other.
For a slightly more complicated variation:
for task in a b c; do
for num in 1 2; do "${task}_$num"; done &
done
wait
This would do the same thing, but would construct the strings a_1, a_2 etc. and then execute the tasks resulting from generating these strings as commands. This would obviously only work if your tasks are actual commands with these names.
|
I would like to run tasks a_1, a_2, b_1, b_2, c_1, c_2 in the following fashion:
a_i, b_j, c_k (where i, j, k are 0 or 1) can be run in parallel. But a_2 should be run right after a_1 completion (they use the same resources so a_2 should wait for a_1 to free the resources). Same with b, c.
How can I do this in bash?
| Running multiple jobs: a combination of parallel and serial |
If you use GNU Parallel then this works:
parallel 'ffmpeg -i {} -f wav - | opusenc --bitrate 38 - {.}.opus' ::: *m4aMaybe that is good enough?
It has the added benefit that it only runs 1 job per cpu thread, so if you have 1000 files you will not overload your machine.
|
I want to do the following conversion:
for f in *.m4a; do
( ffmpeg -i "$f" -f wav - | opusenc --bitrate 38 - "${f%.m4a}.opus" ) &
done
I know I could use ffmpeg directly to convert to opus, but I want to use opusenc in this case, since it's a newer version.
When I run opusenc after the ffmpeg it works fine, but when I try to run the above I just get a bunch of Stopped and nothing happens.
| How do I run an on-the-fly ffmpeg (pipe) conversion in parallel? |
The performance difference is most likely in how buffering works between Perl and Java. In this case, you used A bufferedReader in java which gives it an advantage. Perl does buffer around 4k from disk.
You could try a few things here. One is to use the read function in perl to get larger blocks at a time. That may improve performance.
Another option might be to investigate the various mmap related perl modules.
|
Apologies if this is off topic - it concerns the relative efficiencies of running I/O-heavy Perl/Java scripts in parallel on a Ubuntu system.
I have written two simple versions of a file copy script (Perl and Java) - see below. When I run the scripts on a 15GB file, each takes a similar amount of time on a 48-core machine running Ubuntu Server 12.04 (perl 2m10s, java 2m27s).
However, when I run six instances in parallel, each operating on a different 15GB input file, I observe very different processing times:Perl: one instance completes in 2m6s, all others take 27m26s -
28m10s.
Java: all instances take 3m27s - 4m37s.Looking at the processor cores in top during the long-running Perl processes, I see that the occupied cores have I/O wait percentages (%wa) of 70%+, implying some kind of disk contention (all files are on one HD). Presumably, then, Java's BufferedReader is somehow less sensitive to this disk contention.
Question - Does this seem like a reasonable conclusion? And if so, can anyone suggest any actions I can take at the OS-level or in Perl to make the Perl script as efficient as Java for this kind of task?
Note - my goal is not simply to copy files - my real scripts contain additional logic, but exhibit the same performance behaviour as the simplified scripts below.
Perl
#!/usr/bin/perl -w
open(IN, $ARGV[0]) || die();
open(OUT, ">$ARGV[1]") || die();
while (<IN>) {
print OUT $_
}
close(OUT);
close(IN);
Java
import java.io.*;
public class CopyFileLineByLine {
public static void main(String[] args) throws IOException {
BufferedReader br = null;
PrintWriter pw = null;
try {
br = new BufferedReader(new FileReader(new File(args[0])));
pw = new PrintWriter(new File(args[1]));
String line;
while ((line = br.readLine()) != null) {
pw.println(line);
}
}
finally {
if (pw != null) pw.close();
if (br != null) br.close();
}
}
} | Running jobs in parallel on Ubuntu - I/O contention differences between Perl and Java |
With GNU xargs and a shell with support for process substitution
xargs -r -0 -P4 -n1 -a <(printf '%s\0' myfile*) mycommand
Would run up to 4 mycommands in parallel.
If mycommand doesn't use its stdin, you can also do:
printf '%s\0' myfile* | xargs -r -0 -P4 -n1 mycommand
Which would also work with the xargs of modern BSDs.
For a recursive search for myfile* files, replace the printf command with:
find . -name 'myfile*' -type f -print0
(-type f is for regular-files only. For a glob-equivalent, you need zsh and its printf '%s\0' myfile*(.)).
|
Let's say I have a command accepting a single argument which is a file path:
mycommand myfile.txtNow I want to execute this command over multiple files in parallel, more specifically, file matching pattern myfile*.
Is there an easy way to achieve this?
| Execute command on multiple files matching a pattern in parallel |
You'd like to read the xargs manual and look up the -L and the -P flags in there.
tail -f logfile.log | grep 'patternline' |
xargs -P 4 -L 1 bash scriptname.shThis will execute at most four instances of the command at a time (-P 4), and with one line of input for each invocation (-L 1).
Add -t to xargs to see what gets executed.
|
How would I execute a bash script in parallel for each line ? Actually, I will be tailing to log file and, for each line found, I want to execute a script in the background; something like the example below:
tailf logfile.log | grep 'patternline' | while read line ; do
bash scriptname.sh "$line" & ;
doneI would like to know how to perform the above task using xargs (OR any other suitable method) in parallel and also how to limit processes.
Thanks in advance.
| parallel processing using xargs |
If you have GNU Parallel you can do this:
parallel do_it {} --option foo < argumentlistGNU Parallel is a general parallelizer and makes is easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straight forward way to parallelize is to run 8 jobs on each CPU:GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time:Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bashFor other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
|
Say I have a great number of jobs (dozens or hundreds) that need doing, but they're CPU intensive and only a few can be run at once. Is there an easy way to run X jobs at once and start a new one when one has finished? The only thing I can come up with is something like below (pseudo-code):
jobs=(...);
MAX_JOBS=4;
cur_jobs=0;
pids=(); # hash/associative array
while (jobs); do
while (cur_jobs < MAX_JOBS); do
pop and spawn job and store PID and anything else needed;
cur_jobs++;
done
sleep 5;
for each PID:
if no longer active; then
remove PID;
cur_jobs--;
doneI feel like I'm over-complicating the solution, as I often do. The target system is FreeBSD, if there might be some port that does all the hard work, but a generic solution or common idiom would be preferable.
| Can you make a process pool with shell scripts? |
Use systemd-analyze built-in tool. You are especially interested in two options: blame and plot
systemd-analyze blame
systemd-analyze plot > graph.svg
blame: Print list of running units ordered by time to init
plot: Output SVG graphic showing service initialization |
When I want to know how long systemd actually needed to boot the default target, how would I do that? And then, is it possible to create a graph to show which unit takes how much time to initialize and to what degree they are run in parallel?
| How can I count the time that needs systemd to boot a default target and then graph it? |
While you can do this with a shell script, this is going at it the hard way. Shell scripts aren't very good at manipulating multiple background jobs.
My recommendation is to use GNU make or some other version of make that has a -j option to execute multiple jobs in parallel. Write each subtask as a makefile rule.
I think the makefile snippet below implements your rules, but your code was hard to follow so I might not have gotten it right. The first line enumerates the output files from the input files (note: never overwrite any input files! If the job stops in the middle for any reason, you'll end up with data for which you won't know whether it's been processed). The indented lines are the commands to run. Use a tab to indent each command, not 8 spaces. In these commands, $< represents the source file (a .in file), $@ represents the target (the .out file), and $* is the target without its extension. All $ signs in shell commands must be doubled, and each command line is executed in a separate subshell unless you put a \ at the end which cancels that newline (so the shell sees one long line starting with set -e and ending with done).
all: $(patsubst %.in,%.out,$(wildcard foo/*.in))
%.out: %.in
cp $< $*.tmp.in
set -e; \
for f in bar/*; do \
awk 'NR==FNR{a[$$0]=$$0;next}!a[$$0]' $$f $*.tmp.in >$*.tmp.out; \
mv $*.tmp.out $*.tmp.in; \
done
mv $*.tmp.in $@Put this in a file called Makefile and call make -j12.
|
As part of my research project I'm processing a huge amount of data split up into many files.
All files in folder foo have to be processed by the script myScript involving all elements of folder bar.
This is myScript:
for f in bar/*
do
awk 'NR==FNR{a[$0]=$0;next}!a[$0]' $f $1 > tmp
cp tmp $1
done
The first idea to just process all files with a for loop is valid:
for f in foo/*
do
./myScript $f
done
However, this simply takes forever. Simply starting every myScript in the background by appending & would create thousands of parallel instances of awk and cp with huge input, which is obviously bad.
I thought of limiting the number of "threads" created with the following
for f in foo/*
do
THREAD_COUNT=$(ps | wc -f)
while [ $THREAD_COUNT -ge 12 ]
do
sleep 1
THREAD_COUNT=$(ps | wc -f)
done
./myScript $f &
done
As a side note: I'm comparing with 12, because I've got 8 cores on my nodes and apparently there's always bash, ps and wc running as well as the header line at the moment of the call of ps | wc -l.
Unfortunately the call of myScript causes more than one additional entry in ps, so the behaviour of my script wasn't as intended.
So here's my question: Is there a simpler way? A way which is more stable?
I'm not doing anything else on the nodes, so everything happening is caused by the scripts only.
| control number of started programs in bash |
Using GNU Parallel:
parallel my_process {} ::: files*
This will run one my_process job per CPU thread.
You can tell GNU Parallel to make sure there is 10G of RAM free before it starts the next job:
parallel --memfree 10G my_process {} ::: files*
If the free memory goes below 5G (half of the --memfree value), GNU Parallel will kill the newest job and restart it when there is 10G free again.
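Since the question asks for batches of 10 specifically, the number of simultaneous jobs can also be capped explicitly with -j (files* is the same placeholder pattern as above):
parallel -j 10 my_process {} ::: files*
GNU Parallel then starts the 11th job as soon as one of the first 10 finishes, which keeps all 10 slots busy instead of waiting for a whole batch to complete.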
|
So, I have 10 CPU cores and 20 pieces of data to process. I want to process the data in parallel, but I am afraid that processing all 20 at once will cause problems. So, I want to process 10 at a time, twice. Is there a command to do this?
Add info:
The data are files, and they are quite large; each file can reach 10GB. In my experience, if I launch more than 10 processes the PC becomes really slow and even lags, so I am limiting the count to 10, which equals the number of cores. As for RAM, I believe the software that processes the files does not load everything at once, so RAM usage is quite low. That is why I just need to process the data 10 at a time. For now, I generate 10 shell scripts that execute in parallel, each containing sequential commands.
| Processing command parallel per batch |
Replacing -delete with -print (which is the default) and piping
into GNU parallel should mostly do it:
find . -name '*.in' -type f | parallel rm --
This will run one job per core; use -j N to use N parallel jobs
instead.
It's not completely obvious that this will run much faster than
deleting in sequence, since deleting is probably more I/O- than
CPU-bound, but it would be interesting to test out.
(I said "mostly do it" because the two commands are not fully
equivalent; for example, the parallel version will not do the right
thing if some of your input paths include newline characters.)
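One way to close that gap (a sketch, not part of the original answer) is to use NUL-separated file names, which both find and GNU parallel support:
find . -name '*.in' -type f -print0 | parallel -0 rm --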
|
I want to recursively delete all files that end with .in. This is taking a long time, and I have many cores available, so I would like to parallelize this process. From this thread, it looks like it's possible to use xargs or make to parallelize find. Is this application of find possible to parallelize?
Here is my current serial command:
find . -name "*.in" -type f -delete | Parallelize recursive deletion with find |
Running each command in a different terminal will work; you can also start them in a single terminal with & at the end of the first to put it in the background (see Run script and not lose access to prompt / terminal):
sudo ptpd -c -g -b eth1 -h -D &
sudo tcpdump -nni eth1 -e icmp[icmptype] == 8 -w capmasv6.pcap
|
I want to run two commands on terminal on my virtual machine at the same time.
I have this as of now:
sudo ptpd -c -g -b eth1 -h -D; sudo tcpdump -nni eth1 -e icmp[icmptype] == 8 -w capmasv6.pcap
However, the tcpdump command only starts running when I press Ctrl-C, and I don't want to cancel the first command.
If I just open two different terminals and write the command in each, is that fine or will it not work as I want it to?
| Running multiple commands at the same time |
I assume it is the for loop you want parallelized:
#! /bin/bash
# Split the input file into one file for each shot. NB must close each o/p file at the earliest opportunity otherwise it will crash!
susplit <$1 key=fldr stem=fldr_ verbose=1 close=1

sucit() {
i=$1
echo $i
suchw key1=tstat key2=tstat a=200 < $i | suwind key=tracf min=10 max=400 tmin=0 tmax=6 | suweight a=0 | suresamp rf=4 | sustatic hdrs=1 sign=-1 | sureduce rv=1.52 | sumedian median=1 xshift=0 tshift=0 nmed=41 | suflip flip=3 | sureduce rv=1.52 | suflip flip=3 | suresamp rf=0.25 | suweight inv=1 a=0 | sustatic hdrs=1 sign=1
}
export -f sucit

parallel sucit ::: fldr* > $2

# Tidy up files by removing single shot gathers and LIST
rm -f fldr* LIST &
Depending on what susplit does you can make it even faster. If a shot in "large_data_file" starts with <shot>\n and ends with </shot>\n then something like this may work:
sucpipe() {
suchw key1=tstat key2=tstat a=200 | suwind key=tracf min=10 max=400 tmin=0 tmax=6 | suweight a=0 | suresamp rf=4 | sustatic hdrs=1 sign=-1 | sureduce rv=1.52 | sumedian median=1 xshift=0 tshift=0 nmed=41 | suflip flip=3 | sureduce rv=1.52 | suflip flip=3 | suresamp rf=0.25 | suweight inv=1 a=0 | sustatic hdrs=1 sign=1
}
export -f sucpipe

parallel --block -1 --recstart '<shot>\n' --recend '</shot>\n' --pipepart -a $1 sucpipe > $2
It will try to split bigfile into n blocks, where n = number of cores. The splitting is done on the fly so it will not write temporary files first. Then GNU Parallel will pass each block to a sucpipe.
If bigfile is binary (i.e. not text) with a header of 3200 bytes and a recordlength of 1000 bytes, then this might work:
parallel -a bigfile --pipepart --recend '' --block 1000 --header '.{3200}' ...
For more details walk through the tutorial: man parallel_tutorial. Your command line will love you for it.
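As an aside, if the 32 nodes in the question simply mean 32 simultaneous jobs on one machine, you can override GNU Parallel's default of one job per CPU core explicitly (shown here with the sucit variant from above):
parallel -j 32 sucit ::: fldr* > $2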
|
I need this to be more efficient
Right now it takes up to 20 hrs depending on the line (these are fairly large MCS datasets). The script does the following:
Split large data file into its "shots"
Creates a list of each shot name to be used in for loop
Loops through each shot and performs the same processes
Appends each shot to a new data file, so that you have the same line as before, but processed. In this case I am filtering the data repeatedly, which is why I think this can be run in parallel.
You can ignore all of the SU commands as well as everything in the for loop; I just need to know how to run this in parallel (say 32 nodes). This is a relatively new topic for me, so an in-depth explanation would be appreciated!
script:
#! /bin/bash
# Split the input file into one file for each shot. NB must close each o/p file at the earliest opportunity otherwise it will crash!
susplit <$1 key=fldr stem=fldr_ verbose=1 close=1

# Create a list of shot files
ls fldr* > LIST

# Loop over each shot file; suppress direct wave; write to new concatenated output file
for i in `cat LIST`; do
echo $i
suchw key1=tstat key2=tstat a=200 < $i | suwind key=tracf min=10 max=400 tmin=0 tmax=6 | suweight a=0 | suresamp rf=4 | sustatic hdrs=1 sign=-1 | sureduce rv=1.52 | sumedian median=1 xshift=0 tshift=0 nmed=41 | suflip flip=3 | sureduce rv=1.52 | suflip flip=3 | suresamp rf=0.25 | suweight inv=1 a=0 | sustatic hdrs=1 sign=1 >> $2
done

# Tidy up files by removing single shot gathers and LIST
rm -f fldr* LIST &
| How can I run this bash script in parallel?
GNU Parallel is built for this:
< input.xt parallel -P 8 -d '\r\n' -n 1000 curl -s -X POST --data-binary '{}' http://...
If you want to keep the \r\n, use --pipe. This defaults to passing chunks of ~1 MB:
< input.xt parallel -P 8 --pipe curl -s -X POST --data-binary @- http://...
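If the service needs chunks of exactly 1000 lines rather than ~1 MB blocks, --pipe can also split by record count with -N (same placeholder URL as above):
< input.xt parallel -P 8 --pipe -N 1000 curl -s -X POST --data-binary @- http://...
|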
I have a large input file which contains 30M lines, new lines in \r\n. I want to process this file in parallel by sending chunks of 1000 lines (or less, for the remainder of the file) to a REST API with curl.
I tried the following:
< input.xt tr -d '\r' | xargs -P 8 -r -d '\n' -n 1000 -I {} curl -s -X POST --data-binary '{}' http://...
Note that I am stripping the \r's with tr from the input first, because xargs does not seem to be able to split on multiple characters.
However, that command above still seems to provide exactly one line to the curl process, albeit for 8 curl processes in parallel (because of the -P 8 argument).
How can I fix this command such that chunks of 1000 lines are passed to curl, while remaining the parallelism?
I understand that those lines will arrive in random order at the REST service, which is fine for my use case.
| How to combine multiple lines with xargs |
Things have changed since June.
Git version e81a0eba now has --memsuspend
--memsuspend size (alpha testing)
Suspend jobs when there is less than 2 * size memory free. The size can be
postfixed with K, M, G, T, P, k, m, g, t, or p which would multiply the size
with 1024, 1048576, 1073741824, 1099511627776, 1125899906842624, 1000,
1000000, 1000000000, 1000000000000, or 1000000000000000, respectively.

If the available memory falls below 2 * size, GNU parallel will suspend some
of the running jobs. If the available memory falls below size, only one job
will be running.

If a single job takes up at most size RAM, all jobs will complete without
running out of memory. If you have swap available, you can usually lower
size to around half the size of a single job - with the slight risk of
swapping a little.

Jobs will be resumed when more RAM is available - typically when the oldest
job completes.
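Applied to the scenario in the question (10 jobs, 10 GB of RAM), an invocation along these lines should work; my_job and inputs/* are placeholders, and the 1G value is only a guess based on the "around half the size of a single job" guidance:
parallel -j 10 --memsuspend 1G my_job {} ::: inputs/*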
Let's say that I have 10 GBs of RAM and unlimited swap.
I want to run 10 jobs in parallel (GNU parallel is an option, but not necessarily the only one). These jobs progressively need more and more memory, but they start small. They are CPU-hungry jobs, each running on 1 core.
For example, assume that each job runs for 10 hours, starts at 500MB of memory, and needs 2GB by the time it finishes, with memory increasing linearly. Under that assumption, at 6 hours and 40 minutes these jobs will exceed the 10GB of RAM available.
How can I manage these jobs so that they always run in RAM, pausing the execution of some of them while letting the others run?
Can GNU parallel do this?
| parallel: Pausing (swapping out) long-running progress when above memory limit threshold |