output | input | instruction
---|---|---|
You can get a first impression by checking whether the utility is linked with the pthread library. Any dynamically linked program that uses OS threads should use the pthread library.
ldd /bin/grep | grep -F libpthread.so
So for example on Ubuntu:
for x in $(dpkg -L coreutils grep findutils util-linux | grep /bin/); do if ldd $x | grep -q -F libpthread.so; then echo $x; fi; done
However, this produces a lot of false positives due to programs that are linked with a library that itself is linked with pthread. For example, /bin/mkdir on my system is linked with PCRE (I don't know why…) which itself is linked with pthread. But mkdir is not parallelized in any way.
In practice, checking whether the executable contains libpthread gives more reliable results. It could miss executables whose parallel behavior is entirely contained in a library, but basic utilities typically aren't designed that way.
dpkg -L coreutils grep findutils util-linux | grep /bin/ | xargs grep pthread
Binary file /usr/bin/timeout matches
Binary file /usr/bin/sort matches
So the only tool that actually has a chance of being parallelized is sort. (timeout only links to libpthread because it links to librt.) GNU sort does work in parallel: the number of threads can be configured with the --parallel option, and by default it uses one thread per processor up to 8. (Using more processors gives less and less benefit as the number of processors increases, tapering off at a rate that depends on how parallelizable the task is.)
grep isn't parallelized at all. The PCRE library actually links to the pthread library only because it provides thread-safe functions that use locks and the lock manipulation functions are in the pthread library.
The typical simple approach to benefit from parallelization when processing a large amount of data is to split this data into pieces, and process the pieces in parallel. In the case of grep, keep file sizes manageable (for example, if they're log files, rotate them often enough) and call separate instances of grep on each file (for example with GNU Parallel). Note that grepping is usually IO-bound (it's only CPU-bound if you have a very complicated regex, or if you hit some Unicode corner cases of GNU grep where it has bad performance), so you're unlikely to get much benefit from having many threads.
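For instance, a minimal sketch of that split-and-parallelize approach with GNU Parallel (the pattern and log file paths are placeholders):
# One grep per file, one job per CPU core; -H keeps the file name in the output.
parallel --jobs "$(nproc)" grep -H -e 'pattern' ::: /var/log/myapp/*.log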
|
In a common Linux distribution, do utilities like rm, mv, ls, grep, wc, etc. run in parallel on their arguments?
In other words, if I grep a huge file on a 32-threaded CPU, will it go faster than on dual-core CPU?
| Are basic POSIX utilities parallelized? |
As others have said, and as is mentioned in the link you provide in your question, having an 8MiB stack doesn’t hurt anything (apart from consuming address space — on a 64-bit system that won’t matter).
Linux has used 8MiB stacks for a very long time; the change was introduced in version 1.3.7 of the kernel, in July 1995. Back then it was presented as introducing a limit, previously there wasn't one:
Limit the stack by to some sane default: root can always increase this limit if needed.. 8MB seems reasonable.
On Linux, the stack limit also affects the size of program arguments and the environment, which are limited to one quarter of the stack limit; the kernel enforces a minimum of 32 pages for the arguments and environment.
For threads, if the stack limit (RLIMIT_STACK) is unlimited, pthread_create applies its own limits to new threads’ stacks — and on most architectures, that’s less than 8MiB.
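For reference (not part of the original answer), a quick shell sketch of checking and lowering that limit — glibc uses the soft RLIMIT_STACK value as the default thread stack size when it isn't unlimited; './myapp' is a placeholder:
# Show the current soft stack limit in KiB (typically 8192 on Linux).
ulimit -s
# Run one program with a 512 KiB limit without affecting the rest of the session.
( ulimit -s 512 && exec ./myapp )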
|
For example, on OSX, it's even less than 512k.
Is there any recommended size, having in mind, that the app does not use recursion and does not allocate a lot of stack variables?
I know the question is too broad and it highly depends on the usage, but still wanted to ask, as I was wondering if there's some hidden/internal/system reason behind this huge number. I was wondering, as I intend to change the stack size to 512 KiB in my app - this still sounds like a huge number for this, but it's much smaller than 8MiB - and will lead to significantly decreased virtual memory of the process, as I have a lot of threads (I/O).
I also know this doesn't really hurt, well explained here: Default stack size for pthreads
| Why on modern Linux, the default stack size is so huge - 8MB (even 10 on some distributions) |
(pacaur uses makepkg, see https://wiki.archlinux.org/index.php/Makepkg )
In /etc/makepkg.conf add
MAKEFLAGS="-j$(expr $(nproc) \+ 1)"
to run #cores + 1 compiling jobs concurrently.
When using bash you can also add
export MAKEFLAGS="-j$(expr $(nproc) \+ 1)"
to your ~/.bashrc to make this default for all make commands, not only those for AUR packages.
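As a side note not taken from the answer above, the same value can be computed with plain shell arithmetic instead of expr:
MAKEFLAGS="-j$(($(nproc) + 1))"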
|
Is there any way to have make use multi-threading (6 threads is ideal on my system) system-wide, instead of by just adding -j6 to the command line? So, that if I run make, it acts the same as if I was running make -j6? I want this functionality because I install a lot of packages from the AUR using pacaur (I'm on Arch), so I don't directly run the make command, but I would still like multi-threading to build packages faster.
| Use multi-threaded make by default? |
They are actually showing the same information in different ways. This is what the -f and -L options to ps do (from man ps, emphasis mine):
-f     Do full-format listing. This option can be combined with many other UNIX-style options to add additional columns. It also causes the command arguments to be printed. When used with -L, the NLWP (number of threads) and LWP (thread ID) columns will be added.
-L     Show threads, possibly with LWP and NLWP columns.
tid    TID    the unique number representing a dispatchable entity (alias lwp, spid). This value may also
appear as: a process ID (pid); a process group ID
(pgrp); a session ID for the
session leader (sid); a thread group ID for the thread group leader (tgid); and a tty
process group ID for the process group leader (tpgid).
So, ps will show thread IDs in the LWP column while the PID column is the actual process identifier.
top, on the other hand, lists the different threads in the PID column, though I can't find an explicit mention of this in man top.
|
When I run top -H, I see that my multiple mysql threads all have the same PID. However, in ps -eLf I see each one has a different PID:
ps -eLf
UID PID PPID LWP C NLWP STIME TTY TIME CMD
mysql 1424 1 1424 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1481 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1482 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1483 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1484 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1485 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1486 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1487 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1488 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1489 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1490 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1791 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1792 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1793 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1794 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1809 0 17 18:41 ? 00:00:00 /usr/sbin/mysqld
mysql 1424 1 1812 0 17 18:41 ? 00:00:00 /usr/sbin/mysqldand in top -H
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1424 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.08 mysqld
1481 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld
1482 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.33 mysqld
1483 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld
1484 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.23 mysqld
1485 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.27 mysqld
1486 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.15 mysqld
1487 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.18 mysqld
1488 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld
1489 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.16 mysqld
1490 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.34 mysqld
1791 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.26 mysqld
1792 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.54 mysqld
1793 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.00 mysqld
1794 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.00 mysqld
1809 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.00 mysqld
1812 mysql 20 0 539m 56m 7200 S 0.0 1.5 0:00.13 mysqld
What is going on and which one should I believe?
| Why do top and ps show different PIDs for the same processes? |
In Linux, each thread has a pid, and that’s what htop shows. The “process” to which all the threads belong is the thread whose pid matches its thread group id.
In your case, grep Tgid /proc/1021/status would show the value 1019 (and this would be true for all the rg identifiers shown by htop).
See Are threads implemented as processes on Linux? for details.
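As a quick check, a sketch using the thread IDs from the question (they will differ on your system):
# Every entry reports the same thread-group id (1019), i.e. the real process.
for tid in 1019 1021 1022 1023; do grep Tgid /proc/$tid/status; done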
|
I'm using htop and looking at a process (rg) which launched multiple threads to search for text in files, here's the tree view in htop:
PID Command
1019 |- rg 'search this'
1021 |- rg 'search this'
1022 |- rg 'search this'
1023 |- rg 'search this'
Why am I seeing PIDs for the process' threads? I thought threads didn't have a PID and they just shared their parent's PID.
| Why do threads have their own PID? |
I found three solutions:
With GNU tar, using the awesome -I option:
tar -I pigz -xvf /path/to/archive.tar.gz -C /where/to/unpack/it/
With a lot of Linux piping (a "geek way"):
unpigz < /path/to/archive.tar.gz | tar -xvC /where/to/unpack/it/
More portable (to other tar implementations):
unpigz < /path/to/archive.tar.gz | (cd /where/to/unpack/it/ && tar xvf -)
(You can also replace tar xvf - with pax -r to make it POSIX-compliant, though not necessarily more portable on Linux-based systems.)
Credits go to @PSkocik for a proper direction, @Stéphane Chazelas for the 3rd variant and to the author of this answer.
|
I know how to gunzip a file to a selected location.
But when it comes to utilizing all CPU power, many consider pigz instead of gzip. So, the question is how do I unpigz (and untar) a *.tar.gz file to a specific directory?
| unpigz (and untar) to a specific directory |
No, vim is not multithreaded. Multiple cores won't help you here.
First we have to agree on what a huge file is. I suppose you mean a file larger than the RAM size. Vim was not designed for large files. Furthermore, if the file doesn't contain enough line endings, vim might not be able to open it at all.
Decide if you want to just read the file content or if you want to edit it. The program less has many features unknown to most people and is actually a very good tool to view large files because it doesn't require the file to fit into memory.
If you want to edit the file you are better off with non-interactive text editors like sed, awk or maybe a perl script. Those editors were designed for this very purpose and happily process files larger than your RAM.
Also see my answer to: What happens if I use vi on large files?
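For example, a sketch of the stream-editing approach (the file name and pattern are placeholders):
# sed reads and writes line by line, so memory use stays small even for
# files far larger than RAM.
sed 's/oldtext/newtext/g' hugefile.txt > hugefile.edited.txt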
|
I have a HUGE (and I mean huge) text file that I am going to process with vim. I could process it using two different (debian) machines.
One is dual-core and one is octo-core. A single core on my dual-core box is faster than a single core on my octo-core box.
Does 'vim' utilize multithreading in such a way as to make my work go faster on my octo-core box?
| Is vim multithreaded? |
GNU parallel is made for just this sort of thing. You can run your script many times at once, with different data from your input piped in for each one:
cat input.txt | parallel --pipe your-script.sh
By default it will spawn processes according to the number of processors on your system, but you can customise that with -j N.
A particularly neat trick is the shebang-wrapping feature. If you change the first line of your Bash script to:
#!/usr/bin/parallel --shebang-wrap --pipe /bin/bash
and feed it data on standard input then it will all happen automatically. This is less useful when you have cleanup code that has to run at the end, which you may do.
There are a couple of things to note. One is that it will chop up your input into sequential chunks and use those one at a time - it doesn't interleave lines. The other is that those chunks are split by size, without regard for how many records there are. You can use --block N to set a different block size in bytes. In your case, no more than an eighth of the file size should be about right. Your file sounds like it might be small enough to end up all in a single block otherwise, which would defeat the purpose.
There are a lot of options for particular different use cases, but the tutorial covers things pretty well. Options you might also be interested in include --round-robin and --group.
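Putting those options together for this case, a possible invocation would look like the following (the -j and --block values are examples to tune, not part of the original answer):
parallel --pipe -j 8 --block 10k ./your-script.sh < input.txt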
|
I have written a bash script which is in following format:
#!/bin/bash
start=$(date +%s)
inFile="input.txt"
outFile="output.csv"
rm -f $inFile $outFile
while read line
do
    -- Block of Commands
done < "$inFile"
end=$(date +%s)
runtime=$((end-start))
echo "Program has finished execution in $runtime seconds."
The while loop will read from $inFile, perform some activity on the line and dump the result in $outFile.
As the $inFile is 3500+ lines long, the script would take 6-7 hours to execute completely. In order to minimize this time, I am planning to use multi-threading or forking in this script. If I create 8 child processes, 8 lines from the $inFile would be processed simultaneously.
How can this be done?
| Multi-Threading/Forking in a bash script |
I wouldn't call it multithreading as such but you could simply launch 70 jobs in the background:
for i in {1..70}; do
wget http://www.betaservice.domain.host.com/web/hasChanged?ver=0 2>/dev/null &
done
That will result in 70 wget processes running at once. You can also do something more sophisticated like this little script:
#!/usr/bin/env bash

## The time (in minutes) the script will run for. Change 10
## to whatever you want.
end=$(date -d "10 minutes" +%s);

## Run until the desired time has passed.
while [ $(date +%s) -lt "$end" ]; do
## Launch a new wget process if there are
## less than 70 running. This assumes there
## are no other active wget processes.
if [ $(pgrep -c wget) -lt 70 ]; then
wget http://www.betaservice.domain.host.com/web/hasChanged?ver=0 2>/dev/null &
fi
done
|
I have a service which I am calling from another application. Below is my service URL which I am calling -
http://www.betaservice.domain.host.com/web/hasChanged?ver=0
I need to do some load testing on my above service URL in a multithreaded way instead of calling it sequentially one by one.
Is there any way, from a bash shell script, that I can put load on my above service URL by calling it in a multithreaded way? Can I have 60-70 threads calling the above URL in parallel very fast, if possible?
| How to call a service URL from bash shell script in parallel? |
Your issue is the max user processes limit.
From the getrlimit(2) man page:
RLIMIT_NPROC
The maximum number of processes (or, more precisely on Linux, threads) that can be created for the real user ID of the calling process. Upon encountering this limit, fork(2) fails with the error EAGAIN.
Same for pthread_create(3):
EAGAIN Insufficient resources to create another thread, or a system-imposed limit on the number of threads was encountered. The latter case may occur in two ways: the RLIMIT_NPROC soft resource limit (set via setrlimit(2)), which limits the number of processes for a real user ID, was reached; or the kernel's system-wide limit on the number of threads, /proc/sys/kernel/threads-max, was reached.
Increase that limit for your user, and it should be able to create more threads, until it reaches other resource limits.
Or plain resource exhaustion - for 1Mb stack and 20k threads, you'll need a lot of RAM.
See also NPTL caps maximum threads at 65528?: /proc/sys/vm/max_map_count could become an issue at some point.
Side point: you should use -pthread instead of -lpthread. See gcc - significance of -pthread flag when compiling.
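For completeness, a sketch of raising the limit (the limits.d file name and the values are examples, not a recommendation):
# Check the current limit for the user that runs mongod.
sudo -u mongod bash -c 'ulimit -u'
# Raise it persistently via limits.d, then restart the service / re-login.
printf '%s\n' 'mongod soft nproc 20000' 'mongod hard nproc 20000' |
    sudo tee /etc/security/limits.d/90-mongod-nproc.conf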
|
My server has been running on Amazon EC2 Linux. I have a mongodb server inside. The mongodb server has been running under heavy load and, unhappily, I've run into a problem with it :/
As is known, mongodb creates a new thread for every client connection, and this worked fine before. I don't know why, but MongoDB can't create more than 975 connections on the host as a non-privileged user (it runs under a mongod user). But when I'm running it as the root user, it can handle up to 20000 connections (mongodb's internal limit). But further investigation shows that the problem isn't the MongoDB server, but Linux itself.
I've found a simple program which checks the max number of connections:
/* compile with: gcc -lpthread -o thread-limit thread-limit.c */
/* originally from: http://www.volano.com/linuxnotes.html */

#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <string.h>

#define MAX_THREADS 100000
#define PTHREAD_STACK_MIN 1*1024*1024*1024
int i;

void run(void) {
sleep(60 * 60);
}

int main(int argc, char *argv[]) {
int rc = 0;
pthread_t thread[MAX_THREADS];
pthread_attr_t thread_attr;
pthread_attr_init(&thread_attr);
pthread_attr_setstacksize(&thread_attr, PTHREAD_STACK_MIN);
printf("Creating threads ...\n");
for (i = 0; i < MAX_THREADS && rc == 0; i++) {
rc = pthread_create(&(thread[i]), &thread_attr, (void *) &run, NULL);
if (rc == 0) {
pthread_detach(thread[i]);
if ((i + 1) % 100 == 0)
printf("%i threads so far ...\n", i + 1);
}
else
{
printf("Failed with return code %i creating thread %i (%s).\n",
rc, i + 1, strerror(rc)); // can we allocate memory?
char *block = NULL;
block = malloc(65545);
if(block == NULL)
printf("Malloc failed too :( \n");
else
printf("Malloc worked, hmmm\n");
}
}
sleep(60*60); // ctrl+c to exit; makes it easier to see mem use
exit(0);
}
And the situation is repeated again: as root user I can create around 32k threads, as a non-privileged user (mongod or ec2-user) around 1000.
This is an ulimit for root user:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 59470
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 60000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
This is an ulimit for mongod user:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 59470
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 60000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 1024
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Kernel max threads:
bash-4.1$ cat /proc/sys/kernel/threads-max
118940
SELinux is disabled. I don't know how to solve this strange problem... Possibly somebody does?
| Linux max threads count |
The obvious is:
parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}" &
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}" &
wait
But this way the secondary does not wait for the primary to finish, and it does not check whether the primary was successful. Let us assume that $PRIMARY_PARTITION[1] corresponds to $SECONDARY_PARTITION[1] (so if you cannot read the file from $PRIMARY_PARTITION[1] you will read it from $SECONDARY_PARTITION[1] - that also means that $PRIMARY_PARTITION and $SECONDARY_PARTITION have the same number of elements). Then you can condition the running of $SECONDARY_PARTITION[1] on $PRIMARY_PARTITION[1].
do_Copy() {
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369) # this will have more file numbers
pel=${PRIMARY_PARTITION[$1]}
sel=${SECONDARY_PARTITION[$1]}
do_CopyInPrimary $pel ||
do_CopyInSecondary $sel ||
echo Could not copy neither $pel nor $sel
}
export -f do_Copy
# Number of elements in PRIMARY_PARTITION == SECONDARY_PARTITION
seq ${#PRIMARY_PARTITION[@]} | parallel -j 2 do_Copy
This will get the dependency right, but it will only copy 2 at a time in total. With -j4 you risk running 4 primaries at the same time, so we need to guard against that, too:
do_Copy() {
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369) # this will have more file numbers
pel=${PRIMARY_PARTITION[$1]}
sel=${SECONDARY_PARTITION[$1]}
sem -j2 --fg --id primary do_CopyInPrimary $pel ||
sem -j2 --fg --id secondary do_CopyInSecondary $sel ||
echo Could not copy neither $pel nor $sel
}
export -f do_Copy
# Number of elements in PRIMARY_PARTITION == SECONDARY_PARTITION
seq ${#PRIMARY_PARTITION[@]} | parallel -j 4 do_Copy
sem will limit the number of primaries to 2 and the number of secondaries to 2.
|
I am trying to copy files from machineB and machineC into machineA as I am running my below shell script on machineA.
If the files is not there in machineB then it should be there in machineC for sure so I will try copying the files from machineB first, if it is not there in machineB then I will try copying the same files from machineC.
I am copying the files in parallel using GNU Parallel library and it is working fine. Currently I am copying two files in parallel.
Currently, I am copying the PRIMARY_PARTITION files in the PRIMARY folder using GNU parallel, and once that is done I copy the SECONDARY_PARTITION files in the SECONDARY folder using the same GNU parallel, so it is sequential as of now w.r.t. the PRIMARY and SECONDARY folders.
Below is my shell script and everything works fine -
#!/bin/bash

export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
readonly FILERS_LOCATION=(machineB machineC)
export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369) # this will have more file numbers

export dir3=/testing/snapshot/20140103

# delete primary files first and then copy
find "$PRIMARY" -mindepth 1 -delete

do_CopyInPrimary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/.
}
export -f do_CopyInPrimary
parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}"

# delete secondary files first and then copy
find "$SECONDARY" -mindepth 1 -delete

do_CopyInSecondary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/.
}
export -f do_CopyInSecondary
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}"
Problem Statement:
Is there any way I can launch two threads, one to copy files in PRIMARY folder using the same setup as I have above, meaning it will copy two files in parallel. And second thread to copy the files in SECONDARY folder using the same setup as I have above, it should also copy two files parallel simultaneously?
Meaning they should copy files in parallel both in PRIMARY and SECONDARY folder simultaneously not once PRIMARY folder is done, then copy files in SECONDARY folder.
Currently, once PRIMARY folder file is done, then only I try copying the files in SECONDARY folder.
In short, I just need to launch two threads one thread will run this -
# delete primary files first and then copy
find "$PRIMARY" -mindepth 1 -delete

do_CopyInPrimary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/.
}
export -f do_CopyInPrimary
parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}"
And the second thread will run this -
# delete secondary files first and then copy
find "$SECONDARY" -mindepth 1 -delete

do_CopyInSecondary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/.
}
export -f do_CopyInSecondary
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}"
And once all the files are copied successfully, it should echo a message that all the files are copied. In Java, I know how to launch two threads with each thread performing a certain task, but I'm not sure how this would work in a bash shell script.
My main task is to copy two files in parallel using GNU parallel in the PRIMARY folder and the SECONDARY folder at the same time.
Is this possible to do in bash shell script?
| How to launch two threads in bash shell script? |
A single CPU handles one process at a time. But a "process" is a construct of an operating system; the OS calls playing a video in VLC a single process, but it's actually made up of lots of individual instructions. So it's not as if a CPU is tasked with playing a video and has to drop everything it was doing. A CPU can take on the task of playing a video → switch over to checking for keyboard or mouse input → draw some stuff on the screen → check to see if devices have been attached in any known port → and so on. All within the blink of an eye.
Modern computers excel at multi-tasking. This is why you can launch a video in VLC and have it play continuously even though your computer is doing 100 other odd jobs "at the same time".
|
This is the reasoning for my question: I read this in a text book
“Each CPU (or core) can be working on one process at a time.”
I'm assuming that this used to be accurate but is no longer fully true. How does multi threading play into this? Or is this still true, can a cpu core on linux still only work on one process at a time?
| Can a single core of a cpu process more than one process? |
You can do as @UlrichDangel suggested in the comments and replace the executable gzip with pigz. If you want something a little less invasive you can also create functions for gzip and gunzip and add them to your $HOME/.bashrc file.
gzip() {
pigz "$@"
}
export -f gzip

gunzip() {
unpigz "$@"
}
export -f gunzip
Now when you run zgrep or zcat it will use pigz instead.
References
Replace bzip2 and gzip with pbzip2 and pigz system wide?
|
I'm running zgrep on a computer with 16 CPUs, but it only takes one CPU to run the task.
Can I speed it up, perhaps utilize all 16 cores?
P.S. The IO is just fine; I could just copy the gzipped file onto a memory disk.
| Speed up zgrep on a multi-core computer |
I have found a way to get access to more than 4096 threads.
My docker container is a centos7 image, which by default has a user limit of 4096 processes, as defined in /etc/security/limits.d/20-nproc.conf:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     4096
root       soft    nproc     unlimited
When logging in to my docker container, I added the command ulimit -u unlimited to ~/.bashrc so that this limit is removed for that user. Now I can break through this 4096 ceiling.
I am not thoroughly happy with this solution, since it means I need to adapt every container that runs on the docker-host, as each has its own limit. And since I run all build commands as user 1001, it seems that when a container asks how many threads it has running, it "sees" the threads of all containers together, not only those from its own instance.
I created an issue in docker-for-linux github for this: https://github.com/docker/for-linux/issues/654
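A slightly less ad-hoc variant of the same workaround, assuming you can rebuild the image (the value and file contents are only an example; the path comes from the snippet quoted above):
# e.g. in a Dockerfile RUN step for the centos7-based image
printf '%s\n' '*    soft    nproc    20000' 'root soft nproc unlimited' \
    > /etc/security/limits.d/20-nproc.conf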
|
TLDR
When spinning up multiple docker containers in which I run npm ci, I start getting pthread_create: Resource temporarily unavailable errors (less than 5 docker containers can run fine). I deduce there is some kind of thread limit somewhere, but I cannot find which one is blocking here.
configuration
a Jenkins instance spins up docker containers for each build (connection through ssh into this docker container).
in each container some build commands are run; I see the error often when using npm ci since this seems to create quite some threads; but I don't think the problem is related to npm itself.
all docker containers run on a single docker-host. Its specifications:
docker-host
Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz with 12 cores, 220 GB RAM
Centos 7
Docker version 18.06.1-ce, build e68fc7a
systemd version 219
kernel 3.10.0-957.5.1.el7.x86_64
errors
I can see the error under different forms:
jenkins failing to contact the docker container; errors like: java.lang.OutOfMemoryError: unable to create new native thread
git clone failing inside the container with ERROR: Error cloning remote repo 'origin' ... Caused by: java.lang.OutOfMemoryError: unable to create new native thread
npm ci failing inside the container with node[1296]: pthread_create: Resource temporarily unavailable
Things I have investigated or tried
I looked quite a lot at this question.
docker-host has systemd version 219 and hence does not have the TasksMax attribute.
/proc/sys/kernel/threads-max = 1798308
kernel.pid_max = 49152
number of threads (ps -elfT | wc -l) is typically 700, but with multiple containers running I have seen it climb to 4500.
all builds run as a user with uid 1001 inside the docker container; however there is no user with uid 1001 on the docker-host so I don't know which limits apply to this user.
I have already increased multiple limits for all users in /etc/security/limits.conf (see below)
I created a dummy user with uid 1001 on docker-host and made sure it also had its nproc limit set to unlimited. Logging in as that user, ulimit -u = unlimited. This still didn't solve the problem.
/etc/security/limits.conf :
* soft nproc unlimited
* soft stack 65536
* soft nofile 2097152
output of ulimit -a as root:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 899154
max locked memory (kbytes, -l) 1048576
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 65536
cpu time (seconds, -t) unlimited
max user processes (-u) 899154
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
limits of my dockerd process (cat /proc/16087/limits where 16087 is pid of dockerd)
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size unlimited unlimited bytes
Max core file size unlimited unlimited bytes
Max resident set unlimited unlimited bytes
Max processes unlimited unlimited processes
Max open files 65536 65536 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 899154 899154 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
| 'pthread_create: Resource temporarily unavailable' when running multiple docker instances |
You are experiencing the problem of appending to a file in parallel. The easy answer is: Don't.
Here is how you can do it using GNU Parallel:
doit() {
url="$1"
uri="$2"
urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}" --max-time 5 ) &&
echo "$url $urlstatus $uri"
}
export -f doit

parallel -j200 doit :::: url uri >> urlstatus.txt
GNU Parallel defaults to serializing the output, so you will not get output from one job that is mixed with output from another.
GNU Parallel makes it easy to get the input included in the output using --tag. So unless the output format is fixed, I would do:
parallel --tag -j200 curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' {1}{2} --max-time 5 :::: url uri >> urlstatus.txt
It will give the same output - just formatted differently. Instead of:
url urlstatus uri
you get:
url uri urlstatus
|
Here is a shell script which takes a domain and its parameters to find the status code. It runs way faster due to threading but misses a lot of requests.
while IFS= read -r url <&3; do
while IFS= read -r uri <&4; do
urlstatus=$(curl -o /dev/null --insecure --silent --head --write-out '%{http_code}' "${url}""${uri}" --max-time 5 ) &&
echo "$url $urlstatus $uri" >> urlstatus.txt &
done 4<uri.txt
done 3<url.txt
If I run it normally it processes all requests, but the speed is very low. Is there a way to keep the speed up while also not missing any requests?
| Bash script multithreading in curl commands |
tr ',' ':' <test1.txt | xargs -P 4 -I XX ruby test.rb "http://XX/"
Assuming that the test1.txt file contains lines like
127.0.0.1,80
127.0.0.1,8080
then the tr would change this to
127.0.0.1:80
127.0.0.1:8080
and the xargs would take a line at a time and replace XX in the given command string with the contents of the line and run the command. With -P 4 we get at most four simultaneous processes running.
If your file has trailing commas on each line, remove them first:
sed 's/,$//' test1.txt | tr ',' ':' | xargs ...as above...
or even
sed -e 's/,$//' -e 'y/,/:/' test1.txt | xargs ...as above...
|
#!/bin/bash
while IFS="," read ip port; do
ruby test.rb "http://$ip:$port/"&
ruby test.rb "https://$ip:$port/";
done <test1.txt
How would I do this with multithreading?
If I add more lines separated by &, it only runs the same command with the same ip & port multiple times; I want it to run with the next ip & port, not the same one.
the file looks like
192.168.1.2,8089,
| How to bash multithread? |
Assuming that you can afford to tell the system to run it at some later time, and the sysadmin is sensible and has the at package installed, you can use the following to get it to run when load levels are low enough (zero by default, but the sysadmin can set any arbitrary value for the threshold):
batch << EOF
<command>
EOF
Other than that though, the only way I know of to do this is to poll the load average yourself and fire off the command when it's below some threshold. If you decide to go that route, what you want to look at is the first field in /proc/loadavg, which gives the 1-minute average.
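A minimal sketch of that polling approach (the threshold and sleep interval are assumptions you would tune, e.g. total cores minus the cores your run needs):
#!/bin/bash
threshold=28   # example value
while true; do
    load=$(cut -d ' ' -f 1 /proc/loadavg)
    # the load average is a float, so compare with awk
    awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l < t) }' && break
    sleep 60
done
<command>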
|
I'm a student who wants to benchmark an NGS pipeline, monitoring performance according to how many cores are allocated to it and the size of the input file. For this reason, I wrote a bash script to call it multiple times with different nr_of_cores parameters and input files, noting down completion time and other stats. The whole thing takes about a day to run through the various scenarios. (Spending, of course, most time on running the biggest file with a single core, so it's not like I'm blocking the whole server for the duration)
I have access to a shared 64-core server to run it on. The shared server, however, varies wildly in number of idle cores depending on time of day and people trying to get projects done. (The top nr_of_cores I'll test is 36, with a small file.)
My question: Is there an easy way to make my bash script wait until it knows [X] cores are available before executing a command? I figure that way, I'd get more reliable data, I wouldn't slow down people with more urgent tasks to run, and I could start the script whenever, instead of waiting until I happen to see with htop that it's a slow day.
Thanks!
| How to delay bash script until there's enough idle cores to run it? |
Both moreutils parallel and GNU parallel will do this for you. With moreutils' parallel, it looks like:
parallel -j "$(nproc)" pngquant [pngquant-options] -- *.pngnproc outputs the number of available processors (threads), so that will run available-processors (-j "$(nproc)") pngquants at once, passing each a single PNG file. If the startup overhead is too high, you can pass more PNG files at once to each run with the -n option; -n 2 would pass two PNGs to each pngquant.
|
I am running a command (pngquant to be precise: https://github.com/pornel/pngquant) in a terminal window. I noticed, that if I open 4 terminal windows, and run pngquant command in each of them, I get 4x speed increase, effectively compressing 4 times as many images in the same time as before.
So I used this approach and assigned each pngquant process a portion of the images I want to compress, effectively creating multiple processes on multiple threads.
Can you run a command on multiple threads without doing the tricks that I did? I would like to just say "run pngquant compression on all these images and use all available threads."
| run command on multiple threads |
$ ps -eLf
UID PID PPID LWP C NLWP STIME TTY TIME CMD
root 1 0 1 0 1 19:25 ? 00:00:00 init [4]
...
root 1699 1 1699 0 1 19:25 ? 00:00:00 /usr/bin/kdm
root 1701 1699 1701 8 2 19:25 tty10 00:13:10 /usr/bin/X :1 vt10 ...
root 1701 1699 1703 0 2 19:25 tty10 00:00:00 /usr/bin/X :1 vt10 ...
root 1706 1699 1706 0 1 19:25 ? 00:00:00 -:1
root 1707 1699 1707 0 2 19:25 tty9 00:00:02 /usr/bin/X :0 vt9 ...
root 1707 1699 1710 0 2 19:25 tty9 00:00:00 /usr/bin/X :0 vt9 ...
root 1713 1699 1713 0 1 19:25 ? 00:00:00 -:0
....answers your question, I think.
Nevertheless, the question seems to be mixing several things together - multithreading isn't about not using fork()/exec(). Threads share the same address space, and if you want to run a different process you certainly don't want it to have access to the same address space. And if you decided not to use external programs (especially in the shell, since you mention it), you'd have to code all the functionality again.
Multithreading isn't a cure for everything. It can mostly be a cure only for well parallelizable problems actually - check wiki page for a nice short overview. Making a program multithreaded doesn't make it better, in most cases it makes it worse due to the bugs in synchronization code (if present at all).
|
None of the command-line shells that I am aware of are multithreaded. In particular, even those shells that support "job control" (Control-Z, bg, fg, etc) do so via facilities (namely, fork, exec, signals, pipes and PTYs) that predate Unix threads.
Nor is Emacs multithreaded even though it is able to "do many things at the same time". (Again, it forks and execs external programs a lot and uses signals, pipes and PTYs to communicate with those external programs.)
My question is, Does the dominant implementation of the X11 protocol (X.org) use Unix threads -- in either the server or any of the client libraries?
If so, approximately when did it (or its ancestor, XFree86, or XFree86's ancestor) start doing so?
| Is any part of the X.org software multithreaded? |
This is 100% normal with respect to threading on any and all operating systems. The documentation for your thread library, any examples and tutorials you may find, etc. are likely to make a point of this as it is often confusing to people when they are learning the ropes of threading.
Threads are by default (and by definition) not synchronized.
This means unless you make arrangements to the contrary (via locks, semaphores, whatever), do not expect threads to execute in any particular order. It does not matter what they are doing or how you think they are "likely to happen" based on what they are doing, etc. Put simply: do not think about them this way, that is not what they are or how they are meant to be used. Do not bother coming up with mentalizations such as "I am in single user mode, I am trying to still the system like water because...and..blah blah" ad infinitum, etc., it is a waste of time, there is nothing to learn that way. Water is inevitably not still, that's how liquids are. End of story. Move on and enjoy.
Executing threads that, as you say, do next to nothing on a system that you have set-up to do next to nothing will result in a light load for the scheduler, but note the CPU still runs at full speed and a lot of nothing happening quickly is not necessarily more predictable than any other scenario. The scheduler is not intended to synchronize or predictably order threads and processes. I really really really hope me saying this does not inspire you to concoct a more complicated and pseudo-logical experiment, because as already stated, that is a waste of time, move on.
You can and probably will synchronize them, but notice that is a delicate trick that first requires you understand why they do not just "do it naturally"(clue: because there is no reason for them to do so, they don't)."What makes a deterministic OS seem non-deterministic?"Perhaps the analogy of fluid dynamics is not so silly. Fluids seem chaotic -- smoke can even seem to be alive -- and I believe in mathematics the study of fluid dynamics and chaos theory do cross over. However, most people are probably not stunned by this ("Amazing! What could cause such behavior in natural phenomenon? Perhaps it is evidence of God!" ...no) because we can understand how the interaction of deterministic rules (the laws of fluid dynamics) viewed on a certain scale will produce results that are only crudely predictable ("Well, the water will spill out of the glass...") and hence are effectively chaotic or non-deterministic (the pattern of the splash on the table) . Even if that is "only a very small fire" or glass of water, we recognize there is the motion of literally trillions of trillions of particles involved.
If you have ever busy looped with a counter just to see how fast your computer can count, you will know that is on the order of billions of times per second. Keep in mind that most of the "minor" system events constantly going on even with an "idle" system will still be much more substantial than a thread whose activity is measured by the printing of an ID, if that is all the thread is doing. Going with our fluid analogy, even on a day that seems "completely windless" to us, smoke does still not form an orderly column (but we can see more clearly how its own movement affects it).
The scheduler is supposed to ensure that all processes run in a balanced way (gravity will make sure water will spill), however, not by giving each of them exactly 1 nanosecond at a time (which might make your experiment more predictable). This is all well and good since, once again, it is NOT the role of a scheduler to synchronize anything, it is to make everything happen in an appropriately balanced way. These two goals are not the same. When you do get into synchronization, you will appreciate how while synchronization can achieve a clear balance, it comes at the cost of efficiency.
Also, the Linux scheduler uses a red-black tree to organize its queue, it does not simply run everything around in a FILO line (I doubt any modern general purpose scheduler will do that). That implies greater complexity and hence, more apparently chaotic behaviour.
I don't want to encourage you to persist in your fixation, but if you got the threads to perform a more substantial task, one that will actually take them something resembling time to do, and start them offset by a few seconds on an otherwise idle system, then in fact you will see completely deterministic results. Just keep in mind this is not a truly effective means of synchronization -- when the wind kicks up and the water gets rough, you will see the chaos start ;)
"Do you know of any research which studies the influence of these events on the OS scheduler?"
No, because there is nothing truly mysterious about it, as in the case of fluids. The scheduler actually is completely deterministic, just not in the sense or on the scale you are considering. If you put a bunch of informative printks in the kernel code and understand the algorithm of the scheduler, you won't find anything non-deterministic -- everything will happen exactly the way it is supposed to.
|
The question refers to the output of a multi-threaded application, where each thread merely prints its ID (user assigned number) on the standard output. Here all threads have equal priority and compete for CPU quota in order to print on the standard output. However, running the same application a sufficiently large number of times will result in different order of IDs being printed on the screen. This order is due to the OS scheduler which is a piece of software and therefore deterministic. Nonetheless, its behavior seems non-deterministic, which brings me back to the initial question.
| What makes the Linux scheduler seem unpredictable? |
I think you hit either a number of processes limit or a memory limit.
When I try your program on my computer and reach the pid == -1 state, fork() returns the error EAGAIN, with error message: Resource temporarily unavailable. As a normal user, I could create approx 15k processes.
There are several reasons this EAGAIN could happen, detailed in man 2 fork:
not enough memory,
hitting a limit like RLIMIT_NPROC,
deadline scheduler specifics.
In my case, I think I just hit the RLIMIT_NPROC limit, aka what ulimit -u usually shows. The best is to display this limit within the program, so you have the real value, not your shell's limits.
#include <sys/time.h>
#include <sys/resource.h>

int main() {
struct rlimit rlim;
getrlimit(RLIMIT_NPROC, &rlim);
printf("RLIMIT_NPROC soft: %d, hard: %d\n", rlim.rlim_cur, rlim.rlim_max);
Which yields:
RLIMIT_NPROC soft: 15608, hard: 15608
total: 15242
Which looks reasonable as I have other processes running, including a web browser. Now, as root, the limits don't really apply anymore and I could fork() much more: I created more than 30k processes, close to my 32k pid_max.
Now, if I take my normal user shell's PID (echo $$), and as root in another shell, I do: prlimit --pid $SHELLPID --nproc=30000, and then launch your program in this shell, I can create almost 30k processes:
RLIMIT_NPROC soft: 30000, hard: 30000
total: 29678
Finally: you should also consider memory usage, because on my system I used a lot of RAM and swap to create all those processes, and maybe that is the limit you hit. Check with free.
|
I was wondering how many processes I can create on my machine (x64 with 8Gb of RAM and running Ubuntu). So I made a simple master process which was continuously creating child processes, and those child processes were just sleeping all the time. I ended up with just 11-12k processes. Then I switched from processes to threads and got exactly the same result.
My pid_max is set to 32768, all per-user limits are disabled. Physical memory usage is just a couple of bytes. Could you tell me what prevents the system from creating new threads at that point?
P.S. Here is my source code for the multiprocessing test, written in C:
#include <stdio.h>
#include <unistd.h>

int main() {
pid_t pid;
int count = 0;
while (1) {
pid = fork();
if (pid == -1) {
printf("total: %d\n", count);
return 0;
}
if (pid == 0) {
while (1) sleep(10);
}
count++;
}
}
| What is a limit for number of threads? |
A little bit confusing. fork is a system call which creates a new process by copying the parent process' image. After that, if the child process wants to become another program, it calls one of the exec family of system calls, such as execl. If you, for example, want to run ls in a shell, the shell forks a new child process which then calls execl("/bin/ls").
If you see two programs and their pids are different, check their ppids (parent ids). For example, if p1 is the ppid of the process whose pid is p2, it means that the process with pid p1 forked that process. But if the first process' ppid is not the same as the other process' pid, it means that the same command was executed twice.
If the pid and ppid are the same, but the tids (thread ids) are different, it means it's one process that has two threads.
I think that making your own shell is a good start point.
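A quick way to apply this from a shell (a sketch; mysqld is only an example process name):
# Same PID but different LWP values = threads of one process;
# a PPID matching another entry's PID = a forked child.
ps -L -o pid,ppid,lwp,nlwp,args -p "$(pgrep -d, -x mysqld)"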
|
When executing the ps command on my Linux system I see some user processes twice (different PIDs...). I wonder if they are new processes or threads of the same process. I know some functions in the standard C library that can create a new process, such as fork(). I wonder what concrete functions can make a process appear twice when I execute the ps command, because I am looking in the source code for where the new process or thread is created.
| Which system calls could create a new process? |
As far as the Linux kernel is concerned, threads are processes with some more sharing than usual (e.g. their address space, their signal handling, and their process id, which is really their thread group id).
When a process starts, it has a single thread, with a stack etc. When that thread starts another thread, it’s up to the creating thread to provide a stack for the new thread; that is typically done using mmap, because mmap supports various flags which help ensure that the allocated memory is suitable for use as a stack. See the example program in man 2 clone. However there is no requirement to use mmap, any allocated block of memory meeting the requirements for a stack can be used.
Stacks set up for threads aren’t private: they are visible to other threads sharing the same address space. However, they must be reserved for a single thread’s use as a stack (multiple threads sharing a single stack won’t work well, to say the least).
See Are threads implemented as processes on Linux? for more context and history.
|
I'm currently studying Linux. I know a thread is a kind of lightweight process on Linux. But I wonder where the thread stack space comes from.
The stack of a thread is private; it is independent of the process stack. Based on my search, some people said the thread stack is created by mmap(). And also, some people said the mmap() space is not the heap; it is between the stack and the heap. So the thread stack comes from the memory mapping segment of the process. Is that correct?
| Does thread stack comes from the memory mapping segment of the process on Linux? |
Those threads are used for the mesa disk cache:
util_queue_init(&cache->cache_queue, "disk$", 32, 4,
UTIL_QUEUE_INIT_RESIZE_IF_FULL |
UTIL_QUEUE_INIT_USE_MINIMUM_PRIORITY |
UTIL_QUEUE_INIT_SET_FULL_THREAD_AFFINITY);https://sources.debian.org/src/mesa/22.0.3-1/src/util/disk_cache.c/?hl=174#L174
And inside util_queue_init() then:
/* Form the thread name from process_name and name, limited to 13
* characters. Characters 14-15 are reserved for the thread number.
* Character 16 should be 0. Final form: "process:name12"
*
* If name is too long, it's truncated. If any space is left, the process
* name fills it.
*/https://sources.debian.org/src/mesa/22.0.3-1/src/util/u_queue.c/?hl=405#L414-L420
Thus, all GUI processes that somehow call into that mesa code create those extra threads, e.g. on a f33 desktop system of mine:
pid tid comm cls
1942 1989 gnome-s:disk$0 BAT
1942 1990 gnome-s:disk$1 BAT
1942 1991 gnome-s:disk$2 BAT
1942 1992 gnome-s:disk$3 BAT
2041 2237 Xwaylan:disk$0 BAT
2041 2238 Xwaylan:disk$1 BAT
2041 2239 Xwaylan:disk$2 BAT
2041 2240 Xwaylan:disk$3 BAT
2041 2259 Xwaylan:disk$0 BAT
2041 2260 Xwaylan:disk$1 BAT
2041 2261 Xwaylan:disk$2 BAT
2041 2262 Xwaylan:disk$3 BAT
2292 2325 gsd-xse:disk$0 BAT
2292 2326 gsd-xse:disk$1 BAT
2292 2327 gsd-xse:disk$2 BAT
2292 2328 gsd-xse:disk$3 BAT
2307 2344 ibus-x1:disk$0 BAT
2307 2345 ibus-x1:disk$1 BAT
2307 2346 ibus-x1:disk$2 BAT
2307 2347 ibus-x1:disk$3 BAT
2464 2578 firefox:disk$0 BAT
2464 2579 firefox:disk$1 BAT
2464 2580 firefox:disk$2 BAT
2464 2581 firefox:disk$3 BAT
2756 2785 firefox:disk$0 BAT
2756 2786 firefox:disk$1 BAT
2756 2787 firefox:disk$2 BAT
2756 2788 firefox:disk$3 BAT
2806 2841 firefox:disk$0 BAT
2806 2842 firefox:disk$1 BAT
2806 2843 firefox:disk$2 BAT
2806 2844 firefox:disk$3 BAT
2919 3078 firefox:disk$0 BAT
2919 3079 firefox:disk$1 BAT
2919 3080 firefox:disk$2 BAT
2919 3081 firefox:disk$3 BAT
3346 3367 firefox:disk$0 BAT
3346 3368 firefox:disk$1 BAT
3346 3369 firefox:disk$2 BAT
3346 3370 firefox:disk$3 BAT
3408 3426 firefox:disk$0 BAT
3408 3427 firefox:disk$1 BAT
3408 3428 firefox:disk$2 BAT
3408 3429 firefox:disk$3 BAT
5794 5825 firefox:disk$0 BAT
5794 5826 firefox:disk$1 BAT
5794 5827 firefox:disk$2 BAT
5794 5828 firefox:disk$3 BAT
6345 6364 firefox:disk$0 BAT
6345 6365 firefox:disk$1 BAT
6345 6366 firefox:disk$2 BAT
6345 6367 firefox:disk$3 BAT
9502 9525 firefox:disk$0 BAT
9502 9526 firefox:disk$1 BAT
9502 9527 firefox:disk$2 BAT
9502 9528 firefox:disk$3 BAT
22548 22565 firefox:disk$0 BAT
22548 22566 firefox:disk$1 BAT
22548 22567 firefox:disk$2 BAT
22548 22568 firefox:disk$3 BAT
33788 33807 vlc:disk$0 BAT
33788 33808 vlc:disk$1 BAT
33788 33809 vlc:disk$2 BAT
33788 33810 vlc:disk$3 BAT
48178 74574 kwallet:disk$0 BAT
48178 74575 kwallet:disk$1 BAT
48178 74576 kwallet:disk$2 BAT
48178 74577 kwallet:disk$3 BAT
60824 60830 chromiu:disk$0 BAT
60824 60831 chromiu:disk$1 BAT
60824 60832 chromiu:disk$2 BAT
60824 60833 chromiu:disk$3 BAT
69502 69519 firefox:disk$0 BAT
69502 69520 firefox:disk$1 BAT
69502 69521 firefox:disk$2 BAT
69502 69522 firefox:disk$3 BAT |
I am using Ubuntu 20.04 LTS. The kernel version is 5.4.0-42.
Here is an example program:
// mre.c
// Compile with: cc -o mre mre.c -lSDL2
#include <stdio.h>
#include <SDL2/SDL.h>
int main(void)
{
SDL_Init(SDL_INIT_VIDEO); // Doesn't work without SDL_INIT_VIDEO
getchar();
}
When I look at the running program ./mre in htop with thread names turned on, I see it has these four threads:
mre:disk$3
mre:disk$2
mre:disk$1
mre:disk$0
And here are some threads of /usr/libexec/ibus-x11 with similar names:
ibus-x1:disk$3
ibus-x1:disk$2
ibus-x1:disk$1
ibus-x1:disk$0
Many programs don't have them (maybe they aren't using a certain graphical interface?)
Such threads always come in fours (my computer has four cores) and are listed in descending order. /usr/lib/xorg/Xorg has eight of these threads, two of each number 0-3. What are they for?
| What are these threads named disk$0, disk$1, etc.? |
Probably not. All cron has to do is (to put it simply) watch until it is time to run one job or another, and if so, fork a process which runs that job and periodically check whether the job has finished in order to clean it up.
MT could be used for this waiting, but I think that would be overkill. With the wait()/waitpid() family functions, it is possible to have a look at all children at once (would be good for kindergarten teachers :-D). And you can have a look without blocking, so you have as well the possibility to continue looking for the time to execute the next job. And SIGCHLD exists as well.
|
I don't know where I can find more information about crontab, so I am asking here.
Is crontab multithreaded? How does it work?
| Is crontab multithread? [closed] |
In the context of a Unix or linux process, the phrase "the stack" can mean two things.
First, "the stack" can mean the last-in, first-out records of the calling sequence of the flow of control. When a process executes, main() gets called first. main() might call printf(). Code generated by the compiler writes the address of the format string, and any other arguments to printf() to some memory locations. Then the code writes the address to which flow-of-control should return after printf() finishes. Then the code calls a jump or branch to the start of printf(). Each thread has one of these function activation record stacks. Note that a lot of CPUs have hardware instructions for setting up and maintaining the stack, but that other CPUs (IBM 360, or whatever it's called) actually used linked lists that could potentially be scattered about the address space.
Second, "the stack" can mean the memory locations to which the CPU writes arguments to functions, and the address that a called-function should return to. "The stack" refers to a contiguous piece of the process' address space.
Memory in a Unix or Linux or *BSD process is a very long line, starting at about 0x400000, and ending at about 0x7fffffffffff (on x86_64 CPUs). The stack address space starts at the largest numerical address. Every time a function gets called, the stack of function activation records "grows down": the process code puts function arguments and a return address on the stack of activation records, and decrements the stack pointer, a special CPU register used to keep track of where in the stack address space the process' current variables' values reside.
Each thread gets a piece of "the stack" (stack address space) for its own use as a function activation record stack. Somewhere between 0x7fffffffffff and a lower address, each thread has an area of memory reserved for use in function calls. Usually this is only enforced by convention, not hardware, so if your thread calls function after nested function, the bottom of that thread's stack can overwrite the top of another thread's stack.
So each thread has a piece of "the stack" memory area, and that's where the "shared stack" terminology comes from. It's a consequence of a process address space being a single linear chunk of memory, and the two uses of the term "the stack". I'm pretty sure that some older JVMs (really ancient) in reality only had a single thread. Any threading inside the Java code was really done by a single real thread. Newer JVMs, JVMs who invoke real threads to do Java threads, will have the same "shared stack" I describe above. Linux and Plan 9 have a process-starting system call (clone() for Linux, rfork() in Plan 9) that can set up processes that share parts of the address space, and maybe different stack address spaces, but that style of threading never really caught on.
|
From the book Advanced Programming in the Unix Environment I read the following lines regarding threads in Unix-like systems:

All the threads within a process share the same address space, file
descriptors, stacks, and process-related attributes. Because they can
access the same memory, the threads need to synchronize access to
shared data among themselves to avoid inconsistencies.

What does the author mean by stacks here? I do Java programming and know each thread gets its own stack. So I am confused by the shared stacks concept here.
| What is meant by stack in connection to a process? |
A much more efficient answer that does not use grep:
build_k_mers() {
k="$1"
slot="$2"
perl -ne 'for $n (0..(length $_)-'"$k"') {
$prefix = substr($_,$n,2);
$fh{$prefix} or open $fh{$prefix}, ">>", "tmp/kmer.$prefix.'"$slot"'";
$fh = $fh{$prefix};
print $fh substr($_,$n,'"$k"'),"\n"
}'
}
export -f build_k_mers

rm -rf tmp
mkdir tmp
export LC_ALL=C
# search strings must be sorted for comm
parsort patterns.txt | awk '{print >>"tmp/patterns."substr($1,1,2)}' &

# make shorter lines: Insert \n(last 12 char before \n) for every 32k
# This makes it easier for --pipepart to find a newline
# It will not change the kmers generated
perl -pe 's/(.{32000})(.{12})/$1$2\n$2/g' large_strings.txt > large_lines.txt
# Build 12-mers
parallel --pipepart --block -1 -a large_lines.txt 'build_k_mers 12 {%}'
# -j10 and 20s may be adjusted depending on hardware
parallel -j10 --delay 20s 'parsort -u tmp/kmer.{}.* > tmp/kmer.{}; rm tmp/kmer.{}.*' ::: `perl -e 'map { printf "%02x ",$_ } 0..255'`
wait
parallel comm -23 {} {=s/patterns./kmer./=} ::: tmp/patterns.??I have tested this on a full job (patterns.txt: 9GBytes/725937231 lines, large_strings.txt: 19GBytes/184 lines) and on my 64-core machine it completes in 3 hours.
|
I am using the following grep script to output all the unmatched patterns:
grep -oFf patterns.txt large_strings.txt | grep -vFf - patterns.txt > unmatched_patterns.txt

The patterns file contains the following 12-character-long substrings (some instances are shown below):
6b6c665d4f44
8b715a5d5f5f
26364d605243
717c8a919aa2

The large_strings file contains extremely long strings of around 20-100 million characters each (a small piece of one string is shown below):
121b1f212222212123242223252b36434f5655545351504f4e4e5056616d777d80817d7c7b7a7a7b7c7d7f8997a0a2a2a3a5a5a6a6a6a6a6a7a7babbbcbebebdbcbcbdbdbdbdbcbcbcbcc2c2c2c2c2c2c2c2c4c4c4c3c3c3c2c2c3c3c3c3c3c3c3c3c2c2c1c0bfbfbebdbebebebfbfc0c0c0bfbfbfbebebdbdbdbcbbbbbababbbbbcbdbdbdbebebfbfbfbebdbcbbbbbbbbbcbcbcbcbcbcbcbcbcb8b8b8b7b7b6b6b6b8b8b9babbbbbcbcbbbabab9b9bababbbcbcbcbbbbbababab9b8b7b6b6b6b6b7b7b7b7b7b7b7b7b7b7b6b6b5b5b6b6b7b7b7b7b8b8b9b9b9b9b9b8b7b7b6b5b5b5b5b5b4b4b3b3b3b6b5b4b4b5b7b8babdbebfc1c1c0bfbec1c2c2c2c2c1c0bfbfbebebebebfc0c1c0c0c0bfbfbebebebebebebebebebebebebebdbcbbbbbab9babbbbbcbcbdbdbdbcbcbbbbbbbbbbbabab9b7b6b5b4b4b4b4b3b1aeaca9a7a6a9a9a9aaabacaeafafafafafafafafafb1b2b2b2b2b1b0afacaaa8a7a5a19d9995939191929292919292939291908f8e8e8d8c8b8a8a8a8a878787868482807f7d7c7975716d6b6967676665646261615f5f5e5d5b5a595957575554525How can we speed up the above script (gnu parallel, xargs, fgrep, etc.)? I tried using --pipepart and --block but it doesn't allow you to pipe two grep commands.
Btw these are all hexadecimal strings and patterns.
| Boosting the grep search using GNU parallel |
You can use taskset from util-linux.

The masks
may be specified in hexadecimal (with or without a leading "0x"), or as
a CPU list with the --cpu-list option. For example,
0x00000001 is processor #0, 0x00000003 is processors #0 and #1, 0xFFFFFFFF is processors #0 through #31, 0x32 is processors #1, #4, and #5, --cpu-list 0-2,6
is processors #0, #1, #2, and #6. When taskset returns, it is guaranteed that the given program has been
scheduled to a legal CPU.
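For example, to make your app behave as if it were on a single-core machine (./my_app is a placeholder for your binary):

taskset --cpu-list 0 ./my_app

You can also restrict an already running process with taskset -cp 0 <PID>.
|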
I have a bug in my Linux app that is reproducable only on single-core CPUs.
To debug it, I want to start the process from the command line so that it is limited to 1 CPU even on my multi-processor machine.
Is it possible to change this for a particular process, e.g. to run it so that it does not run (its) multiple threads on multiple processors?
| Run process as if on a single-core machine to find a bug |
It's been a few years since I ran a slurm cluster, but squeue should give you what you want. Try:
squeue --nodelist 92512 -o "%A %j %C %J"(that should give your jobid, jobname, cpus, and threads for your jobs on node 92512)
BTW, unless you specifically only want details from one particular node, you might be better off searching by job id rather than node id.
There are a lot of good sites with documentation on using slurm available on the web, easily found via google - most universities etc running an HPC cluster write their own docs and help and "cheat-sheets", customised to the details of their specific cluster(s) (so take that into account and adapt any examples to YOUR cluster). There's also good generic documentation on using slurm at https://slurm.schedmd.com/documentation.html
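If you search by job ID instead, a couple of generic commands (with <jobid> as a placeholder) that should work on most Slurm installations are:

squeue -j <jobid> -o "%A %j %C"
scontrol show job <jobid>

%A, %j and %C are the job ID, job name and CPU-count format fields - check man squeue on your cluster, since sites often customise the defaults.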
|
I am working on a cluster machine that uses the Slurm job manager. I just started a multithreaded code and I would like to check the core and thread usage for a given node ID. For example,
scoreusage -N 92512
were "scoreusage" is the command that I am unsure of.
| Check CPU/thread usage for a node in the Slurm job manager |
Following @Tomes advice, I'm trying to answer my own question, based on my comment exchange with @user10489.
Of course I am no expert on this matter, so don't hesitate to amend or correct my statements if needed.
But first, a clarification, because on a lot of websites people confuse block size and sector size:

A block is the smallest amount of data a file system can handle (very often 4096 bytes by default, for example for EXT4, but it can be changed during formatting). I believe in the Windows world that's called a cluster.
A sector is the smallest amount of data a drive can handle. Since circa 2010, all HDDs use 4096-byte sectors (i.e., the physical sector size is 4096 bytes). But to stay compatible with older OSes, which can only handle HDDs with 512-byte sectors, modern drives still present themselves as HDDs with 512-byte sectors (i.e., their logical sector size is 512 bytes). The conversion between the logical 512 bytes, as seen by the OS, and the physical 4096 bytes of the HDD, is done by the HDD's firmware. This is called Advanced Format HDDs (aka 512e/4Kn HDDs, e for emulated and n for native).

So, an out-of-the-box HDD presents itself with a logical sector size of 512 bytes, because the drive's manufacturer wants it to be recognized by all OSes, including old ones. But all modern OSes can handle native 4K drives (Linux can do this since kernel 2.6.31 in 2010). So a legitimate question is: if you know you won't ever use pre-2010 OSes, wouldn't it make sense, prior to using a new HDD, to change its logical sector size from 512 bytes to 4096 bytes?
Someone did a benchmark to find out if there are real benefits to this, and found out that there was a real difference only in one case: single-threaded R/W tests. In multi-threaded tests, he found no significant difference.
My question is: does this specific use case translate to real life? That is, does Linux do a lot of single-threaded R/W operations? In that case, setting the HDD's logical sector size to 4096 would result in some real benefits.
I still don't have the answer to this question. But I think another way to look at it is to say that, on modern OSes, it doesn't hurt to change a drive's default 512-byte logical sector size to 4096 bytes: best case scenario you get some performance improvements if the OS does single-threaded R/W operations, and worst case scenario nothing changes.
Again, the only reason a drive uses 512-byte logical sectors out-of-the-box is to stay compatible with older pre-2010 OSes. On modern OSes, setting it to 4096 bytes won't hurt.
One last thing to note is that not all HDDs support that change. As far as I know, those that do report their supported logical sector sizes explicitly:
# hdparm -I /dev/sdX | grep 'Sector size:'
Logical Sector size: 512 bytes [ Supported: 512 4096 ]
Physical Sector size:                  4096 bytes

It can then be changed also with hdparm, or with the manufacturer's proprietary tools.
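For example, with a recent hdparm the conversion itself looks something like the line below - treat it as a sketch only, double-check your hdparm man page first (confirmation flags vary between versions), and be aware that the operation destroys all data on the drive:

# WARNING: wipes the drive
hdparm --set-sector-size 4096 --please-destroy-my-drive /dev/sdX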
[ EDIT ]
But there's a reason why changing the logical sector size from 512 to 4K may not be such a good idea. According to Wikipedia, aside from the OS, an application is also a potential area using 512-byte-based code.

So, does that mean that even with a modern OS supporting 4Kn, you can get into trouble if a specific application doesn't support it?
In that case it probably makes more sense to keep the HDD's default 512e logical sector size, unless you can be absolutely sure that all your applications can handle 4Kn.
[ EDIT 2 ]
On second thought, there's probably no big risk in switching to 4K sectors on modern hardware and software. Most software will work at the filesystem level, and software with direct raw block access (formatting tools, cloning tools, ...) will probably support 4K sectors, unless it's outdated. See also Switching HDD sector size to 4096 bytes
|
Modern HDDs are all "Advanced Format" ones, i.e. by default they report a logical/physical sector size of 512/4096.
By default, most Linux formatting tools use a block size of 4096 bytes (at least that's the default on Debian/EXT4).
Until today, I thought that this was kind of optimized: Linux/EXT4 sends chunks of 4K data to the HDD, which can handle them optimally, even though its logical sector size is 512 bytes.
But today I read this quite recent (2021) post. The guy did some HDD benchmarks in order to check whether switching his HDD's logical sector size from 512e to 4Kn would provide better performance. His conclusion:

Remember: My theory going in was that the filesystem uses 4k blocks, and everything is properly aligned, so there shouldn’t be a meaningful difference.
Does that hold up? Well, no. Not at all. (...) Using 4kb blocks… there’s an awfully big difference here. This is single threaded benchmarking, but there is consistently a huge lead going to the 4k sector drive here on 4kb block transfers.
(...)
Conclusions: Use 4k Sectors!
As far as I’m concerned, the conclusions here are pretty clear. If you’ve got a modern operating system that can handle 4k sectors, and your drives support operating either as 512 byte or 4k sectors, convert your drives to 4k native sectors before doing anything else. Then go on your way and let the OS deal with it.

Basically, his conclusion was that there was quite a performance improvement in switching the HDD's logical sector size to 4Kn, vs the out-of-the-box 512e.
Now, an important thing to note: that particular benchmark was single-threaded. He also did a 4-threaded benchmark, which didn't show any significant differences between 512e and 4Kn.
Thus my questions:

His conclusion holds up only if you have single-threaded processes that read/write on the drive. Does Linux have such single-threaded processes?
And thus, would you recommend to set a HDD's logical sector size to 4Kn ? | Are there any benefits in setting a HDD's logical sector size to 4Kn? |
In versions of GNU libc prior to 2.26 and on some architectures including x86_64, upon return from the function passed to clone(), the libc would eventually call exit_group() (with the returned value as argument - your function returns no explicit value, hence the random 16), which would cause all threads (the whole process) to terminate.
It was fixed in this commit (see corresponding bug report).commit 3f823e87ccbf3723eb4eeb63b0619f1a0ceb174e
Author: Adhemerval Zanella <[emailprotected]>
Date: Thu Jun 22 08:49:34 2017 -0300 Call exit directly in clone (BZ #21512) On aarch64, alpha, arm, hppa, mips, nios2, powerpc, sh, sparc, tile,
and x86_64 the clone syscall jumps to _exit after the child execution
and the function ends the process execution by calling exit_group.
This behavior have a small issue where threads created with
CLONE_THREAD using clone syscall directly will eventually exit the
whole group altogether instead of just the thread created. Also,
s390, microblaze, ia64, i386, and m68k differs by calling exit
syscall directly. This patch changes all architectures to call the exit syscall
directly, as for s390, microblaze, ia64, i386, and m68k. This do not
have change glibc internal behavior in any sort, since the only
usage of clone implementation in posix_spawn calls _exit directly
in the created child (fork uses a direct call to clone). Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
powerpc-linux-gnu, powerpc64le-linux-gnu, sparc64-linux-gnu,
and sparcv9-linux-gnu.With older versions, you could work around it by calling the exit system call directly (syscall(SYS_exit, 0)) instead of using return, or if you don't want to modify your function, pass a wrapper function to clone() defined as:
int wrapper(void *arg)
{
syscall(SYS_exit, t2_thread(arg));
return 0; /* never reached */
} |
I am trying my hand at the clone() system call to create a thread. However, the program terminates itself as it returns from the t2_thread() function. Why is this the behaviour? What am I missing?
#define _GNU_SOURCE
#include<sys/syscall.h>
#include<stdio.h>
#include<unistd.h>
#include<stdlib.h>
#include<errno.h>
#include<sched.h>

int t2_thread(void *arg)
{
printf("Thread 2 (%ld)\n",syscall(SYS_gettid));
return;
}
int main(int argc, char **argv)
{
const size_t STCK_SZ = 65536;
char *stck;
int flags;
stck = malloc(STCK_SZ);
if(stck == NULL)
{
perror("malloc");
exit(EXIT_FAILURE);
}
flags = CLONE_SIGHAND |CLONE_FS |CLONE_VM |CLONE_FILES | CLONE_THREAD;
if(clone(t2_thread, stck + STCK_SZ, flags, NULL)==-1)
{
perror("clone");
exit(EXIT_FAILURE);
}

printf("Thread 1 (%ld)\n",syscall(SYS_gettid));

for(;;)
{
printf("T1\n");
sleep(1);
}
exit(EXIT_SUCCESS);
}

By the way, the output of this program is:
Thread 1 (8963)
T1
Thread 2 (8964)

$ echo $?
16

What should I do to execute the for loop infinitely?
| Why is this code exiting with return code 16? |
According to the man page: Linux supports PTHREAD_SCOPE_SYSTEM, but not PTHREAD_SCOPE_PROCESSAnd if you take a look at the glibc's implementation:
/* Catch invalid values.  */
switch (scope)
  {
  case PTHREAD_SCOPE_SYSTEM:
    iattr->flags &= ~ATTR_FLAG_SCOPEPROCESS;
    break;

  case PTHREAD_SCOPE_PROCESS:
    return ENOTSUP;

  default:
    return EINVAL;
  } |
I read that there is a 1:1 mapping of user and kernel threads in Linux.
What is the difference between PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM in Linux? If the kernel treats every thread like a process, then there should not be any performance difference. Correct me if I'm wrong.
| Pthread scheduler scope variables? |
GNU Parallel will not do multithreading, but it will do multiprocessing, which might be enough for you:
seq 50000 | parallel my_MC_sim --iteration {}It will default to 1 process per CPU core and it will make sure the output of two parallel jobs will not be mixed.
You can even put this parallelization in the Octave script. See https://www.gnu.org/software/parallel/parallel_tutorial.html#Shebang
GNU Parallel is a general parallelizer and makes is easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a for loop.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU:

GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time:

Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bashFor other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel
|
I am computing Monte-Carlo simulations using GNU Octave 4.0.0 on my 4-core PC. The simulation takes almost 4 hours to compute the script for 50,000 times (specific to my problem), which is a lot of time spent for computation. I was wondering if there is a way to run Octave on multiple cores simultaneously to reduce the time of computations.
Thanks in advance.
| Run GNU Octave script on multiple cores |
One way around this problem (if the backup directory is on its own partition) is to leave the volume unmounted, mounting it just before starting the rsync command. This negates the need to use flock and may have the benefit of prolonging drive longevity/reducing power consumption.
/etc/fstab: add the noauto option to the partition so it doesn't get automatically mounted on boot
In the daily.local or cron.daily scheduled task:
#!/bin/sh
mount /mnt/media_backup && \
/usr/local/bin/rsync -avz /mnt/media_primary/ /mnt/media_backup/ && \
umount /mnt/media_backupThe double ampersand operator (&&) will only start the next command if the previous command is successful. Hence, if the backup disk can't be mounted (because it is already mounted and rsync is already running on the partition), the rest of the command will not proceed.
|
Platform information:
OpenBSD 6.2 amd64
$ rsync --version
rsync version 3.1.2 protocol version 31
I'm trying to sync a large directory (4TB) using the following daily.local file (for Linux admins, this is essentially a cron daily task):
#!/bin/sh
# Sync the primary storage device to the backup disk
/usr/local/bin/rsync -avz /mnt/media_primary/ /mnt/media_backup/The initial rsync copy takes more than a day.
After a day or two, I end up with multiple running copies of rsync in my processes list: new ones are started as scheduled and these new processes seem to be competing with each other and not finishing the task (quickly at least)!
Is there a way to make a new rsync process aware of other rsync processes (or another way to avoid rsync race conditions)?
I know I could just run rsync manually to copy over the directory the first time and/or increase the scheduled time interval. This question is more for my interest as I was unable to find information on the net about this topic.
| Stop rsync scheduled task race condition (large directory, small time interval) |
If you move a file to a different filesystem, what happens under the hood is that the current contents of the file are copied and the original file is deleted. If the program was still writing to the file, it keeps writing to the now-deleted file. A deleted-but-opened file is in fact not deleted, but merely detached (it no longer has a name); the file is deleted for real when the program closes it. So you get the worst of both worlds: the file still uses as much disk space but you lose the remainder of the output.
You can press Ctrl+Z to suspend the foreground process, and resume it with the command bg or fg. All threads are suspended unless the program went through hoops to behave otherwise. (A program designed to spawn children over the network might behave otherwise. A single-process multi-thread program is highly likely to behave normally.) If the program consists of different processes, use the ps command to locate them all and something like kill -STOP 1234 1238 1239 to suspend them all (use kill -CONT … to resume them later).
If the program writes or even reads back and forth in the file, you can't remove its data under its nose. Moving the data at this stage might be doable but it would be difficult and dependent on how the program works. But given your description, the program probably just keeps appending to each file, in which case removing some data at the beginning is doable.
Don't edit the files: this is unlikely to do what you want. Most editors work by saving a new file and moving it in place of the old ones (this is more robust in case of a crash while saving). You can save disk space by truncating the beginning of the file. First, copy the file to save the data elsewhere. Then truncate the file to length 0. The program will keep appending at the position where it was before; if that position was 12345 then as soon as the program appends another byte the file will start with 12345 null bytes. Most of these null bytes will not take up any disk space: the file will be a sparse file.
# Suspend the program first, otherwise you'll lose output produced between cp and truncation!
for x in *.out; do
cp "$x" /elsewhere/
: >|"$x" # truncate $x to size 0
done

Once the program has finished, you can append the remaining data to the files saved elsewhere. The tail utility can copy a file omitting the first N bytes; note that the argument is one plus the number of bytes to omit.
for x in *.out; do
existing_size=$(stat -c %s "/elsewhere/$x")
tail -c +$((existing_size+1)) "$x" >>"/elsewhere/$x"
done

If you have rsync 3.0.0 or above, you can use
rsync --append *.out /elsewhere/

Note that older rsync versions would overwrite the existing portion of the files with the newly-appeared null bytes from the source! Check your rsync version before doing this.
|
Due to an unpredicted scenario I am currently in need of finding a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context:

I have an application in Python that uses multiprocessing.Pool to start 5 threads. Each thread writes some data to its own file.
The program is running on Linux and I do not have root access to the machine.
The program is CPU intensive and has been running for months. It still has a few days to write all the data.
40% of the data in the files is redundant and can be removed after a quick test.
The system on which the program is running only has 30GB of remaining disk space and at the current rate of work it will surely be hogged before the program finishes.Given the above points I see the following solutions with respective problemsGiven that the process number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file?
Is there a "command line" way to stop 4 of the 5 spawned workers and wait until one of them finishes and then resume their work? (I am sure one single worker thread would avoid hogging the disk)
Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files so as to remove the redundant lines?

Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before it finishes.
| Linux - preventing an application from failing due to lack of disk space |
Time-sliced threads are threads executed by a single CPU core without truly executing them at the same time (by switching between threads over and over again).
This is the opposite of simultaneous multithreading, when multiple CPU cores execute many threads.
Interrupts interrupt thread execution regardless of the technology, and when the interrupt handling code exits, control is given back to the thread code.
|
What does it mean when threads are time-sliced? Does that mean they work like interrupts, not exiting until the routine is finished? Or does it execute one instruction from one thread, then one instruction from the second thread, and so on?
| Threads vs interrupts |
The error is typically caused by too many ssh/scp connections starting at the same time. That is a bit odd, as you run at most 4. That leads me to believe MaxStartups and MaxSessions in /etc/ssh/sshd_config on $FILERS_LOCATION_1+2 are set too low.
Luckily we can ask GNU Parallel to retry if a command fails:
do_Copy() {
el=$1
PRIMSEC=$2
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
}
export -f do_Copy

parallel --retries 10 -j 2 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
parallel --retries 10 -j 2 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
wait

echo "All files copied."
I am trying to copy files from machineB and machineC into machineA as I am running my below shell script on machineA.
If the files are not there in machineB, then they should be there in machineC for sure, so I will try copying the files from machineB first; if they are not there in machineB then I will try copying the same files from machineC.
I am copying the files in parallel using GNU Parallel library and it is working fine. Currently I am copying two files in parallel.
Earlier, I was copying the PRIMARY_PARTITION files into the PRIMARY folder using GNU Parallel, and only once that was done was I copying the SECONDARY_PARTITION files into the SECONDARY folder using the same GNU Parallel, so as of now it is sequential w.r.t. the PRIMARY and SECONDARY folders.
Now I decided to copy files in PRIMARY and SECONDARY folder simultaneously. Meaning, I will copy two files in PRIMARY folder along with two files in SECONDARY folder simultaneously.
Below is my shell script which I have -
#!/bin/bash

export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
readonly FILERS_LOCATION=(machineB machineC)
export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers

export dir3=/testing/snapshot/20140103

find "$PRIMARY" -mindepth 1 -delete
find "$SECONDARY" -mindepth 1 -deletedo_CopyInPrimary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMARY/.
}
export -f do_CopyInPrimary

do_CopyInSecondary() {
el=$1
scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $SECONDARY/.
}
export -f do_CopyInSecondary

parallel -j 2 do_CopyInPrimary ::: "${PRIMARY_PARTITION[@]}" &
parallel -j 2 do_CopyInSecondary ::: "${SECONDARY_PARTITION[@]}" &
wait

echo "All files copied."
With the above script at some point I am getting this exception -
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host
ssh_exchange_identification: Connection closed by remote host

Is there any better way of doing the same thing as the way I am doing currently? I guess I can still use GNU Parallel to make it work?
| How to copy in two folders simultaneously using GNU parallel by spawning multiple threads? |
All three entries are defined close together in the kernel source: comm, stat, and status. Working forwards from there, comm is handled by comm_show which calls proc_task_name to determine the task’s name. stat is handled by proc_tgid_stat, which is a thin wrapper around do_task_stat, which calls proc_task_name to determine the task’s name. status is handled by proc_pid_status, which also calls proc_task_name to determine the task’s name.
So yes, comm, the “Name” line of status and the second field of stat all show the same value. The only variations are whether the value is escaped or not: status escapes it (replacing special characters), the others don’t.
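A quick way to compare the three locations for one process (here the current shell, $$; the awk field split is only reliable when the name contains no spaces):

pid=$$
cat /proc/$pid/comm
grep '^Name:' /proc/$pid/status
awk '{print $2}' /proc/$pid/stat    # second field, shown in parentheses, e.g. (bash)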
|
A Linux thread or forked process may change its name and/or its commandline as visible by ps or in the /proc filesystem.
When using the python-setproctitle package, the same change occurs on /proc/pid/cmdline, /proc/pid/comm, the Name: line of /proc/pid/status and in the second field of /proc/pid/stat, where only cmdline is showing the full length and the other three locations are showing the first 15 chars of the changed name.
When watching multithreaded ruby processes, it looks like the /proc/pid/cmdline remains unchanged but the other three locations are showing a thread name, truncated to 15 chars.
man prctl tells that /proc/pid/comm is modified by the PR_SET_NAME operation of the prctl syscall but it does not say anything about /proc/pid/status and /proc/pid/stat.
man proc says /proc/pid/comm provides a superset of prctl PR_SET_NAME which is not explained anymore.
And it tells that the second field of /proc/pid/stat would still be available even if the process gets swapped out.
When watching JVM processes, all the mentioned locations give identical contents for all threads (the three places other than cmdline all showing java), but jcmd pid Thread.print still shows different thread names for the existing threads, so it looks like Java threads are using some non-standard mechanism to change their name.
Is /proc/pid/comm always identical to the Name: line of /proc/pid/status and the second field of /proc/pid/stat or are there circumstances where one of these three places is offering different contents ?
Please provide an (easy to reproduce) example if differences are possible.
| Thread Name: Is /proc/pid/comm always identical to the Name: line of /proc/pid/status and the second field of /proc/pid/stat? |
Even with SSDs, the bottleneck of splitting files is I/O. Having several processes / threads for that will not gain performance and will often be much slower.
In addition, if you want to split on newlines only, then it is not clear in advance from where to where each thread has to copy. You would probably have to write a special tool for that.
The situation might be different if another action is needed like e.g. splitting and compressing. In that case the use of several cores might help but then I/O is not the bottleneck (depending on drive and CPU speed).
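For that combined case, a sketch (assuming a GNU/Linux userland and that gzip-compressed chunks are acceptable) is to let split do the sequential, I/O-bound part and then fan the CPU-bound compression out over the cores:

# split sequentially, then compress the chunks in parallel
split -l 100000 -d bigfile chunk_
ls chunk_* | xargs -P "$(nproc)" -n 1 gzip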
|
So I have a 100GB text file and I want to split it into 10000 files.
I used to do such tasks with something like:
split -l <number of lines> -d --additional-suffix=.txt bigfile small_files_prefixBut I tried to do that with this one and I monitored my system and realized that it wasn't using much memory or CPU so I realized that it's just reading the file from beginning to end with one thread.
Is there any low level(or very high performance) tool that can do such a task with multiple threads.
I would even prefer to copy the file if necessary and take advantage of my multiple cores if possibly faster(I don't think so!).
| How to split a file to multiple files with multiple threads? |
From man pthreads in my computer
In addition to the **main (initial) thread**, and the threads that the
program creates using pthread_create(3), the implementation creates a
"manager" thread. This thread handles thread creation and termination.
(Problems can result if this thread is inadvertently killed.)
Is a Linux process considered a thread?
For example, if I write a simple c program that calls pthread_create to create a new thread in main(), does that mean that I now have 2 threads, one for main() and the newly created one? Or does only the spawned thread count as a thread but not the main() process?
I was wondering because by calling pthread_join to join the spawned thread to main() it seems like I'm joining threads together, thus implying that the main process is a thread.
Please also correct me if I use the wrong terminology. :)
| Is each process considered a thread? |
Starting with version 0.9.32 (released 8 June 2011), uClibc supports NPTL for the following architectures: arm, i386, mips, powerpc, sh, sh64, x86_64.
Actually, both are an implementation of pthreads and will provide libpthread.so.
|
I recently attended an embedded Linux course that stated that uClibc does not support the use of pthreads and that it only supports linuxthreads. Furthermore, the course instructor implied that linuxthreads were next to useless. However, when reading a number of online articles, the implication is that they are in fact supported. Furthermore, when building a root file system and kernel image for a target embedded device using buildroot, I can see that I have libpthread-0.9.33.2.so and libpthread.so.0 files in the /lib directory of my target root file system. I am really confused about the nature of the conflicting information I have received and would be very grateful if anyone could actually clarify the situation for me.
| Does uClibc support using pthreads? |
Sometimes the process ID equals a thread ID.
Here is my code:
python3 mthreads.py
7761
cat /proc/7761/status|grep Threads
Threads: 2

pstree -p 7761
python3(7761)───{python3}(7762)

According to the man ps page, LWP means "light weight process (thread) ID of the dispatchable entity (alias spid, tid)", and NLWP means "number of lwps (threads) in the process".
ps -p 7761 -f -L
UID PID PPID LWP C NLWP STIME TTY TIME CMD
user 7761 2305 7761 48 2 19:28 pts/1 00:00:09 python3 mthreads.py
user 7761 2305 7762 51 2 19:28 pts/1 00:00:09 python3 mthreads.pyprcess id--7761 contains two threads,one thread id is 7761 same value as process id,other thread id is 7762.
|
Environment: OS --debian + python3.
All the output info below ommit unimportant.
Get my computer's cpu info with cat /proc/cpuinfo :
cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model name : Intel(R) Celeron(R) CPU G1840 @ 2.80GHz
physical id : 0
siblings : 2
core id : 0
cpu cores : 2processor : 1
vendor_id : GenuineIntel
cpu family : 6
model name : Intel(R) Celeron(R) CPU G1840 @ 2.80GHz
physical id : 0
siblings : 2
core id : 1
cpu cores : 2Here is mthreads.py to be tested.
import os
import threading
print(os.getpid())
def dead_loop():
    while True:
        pass

t = threading.Thread(target=dead_loop)
t.start()

dead_loop()

t.join()
cat /proc/3455/status
Name: python3
Umask: 0022
State: S (sleeping)
Tgid: 3455
Ngid: 0
Pid: 3455
PPid: 2205
Threads: 2
Cpus_allowed: 3
Cpus_allowed_list: 0-1Run it in terminal.
python3 mthreads.py
3455

1. There are 2 CPUs in my PC; why is Cpus_allowed 3, more than my number of CPUs?
pstree 3455 -p
python3(3455)───{python3}(3456)

2. There are 2 threads running now; 3455 is the process ID and 3456 is a thread ID. Which is the other thread ID? How do I get the second thread ID number?
3.I want to know which process id is running on which cpu (cpu0 ,cpu1 )? | How to comprehend Cpus_allowed and thread id number? |
Adjust -jXXX% as needed:
PARALLEL=-j200%
export PARALLEL

arin() {
#to get network id from arin.net
i="$@"
xidel http://whois.arin.net/rest/ip/$i -e "//table/tbody/tr[3]/td[2] " |
sed 's/\/[0-9]\{1,2\}/\n/g'
}
export -f arin

iptrac() {
# to get other information from ip-tracker.org
j="$@"
xidel http://www.ip-tracker.org/locator/ip-lookup.php?ip=$j -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[2]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[3]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[4]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[5]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[6]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[7]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[8]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[9]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[10]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[11]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[12]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[13]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[14]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[15]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[16]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[17]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[18]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[19]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[20]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[21]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[22]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[23]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[24]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[25]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[26]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[27]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[28]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[29]"
}
export -f iptrac

egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" test-data.csv | sort | uniq |
parallel arin |
sort | uniq | egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" |
parallel iptrac > abcd |
I have a test file that looks like this
5002 2014-11-24 12:59:37.112 2014-11-24 12:59:37.112 0.000 UDP ...... 23.234.22.106 48104 101 0 0 8.8.8.8 53 68.0 1.0 1 0.0 0 68 0 48Each line contains a source ip and destination ip. Here, source ip is 23.234.22.106 and destination ip is 8.8.8.8. I am doing ip lookup for each ip address and then scraping the webpage using xidel. Here is the script.
egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" test-data.csv | sort | uniq | while read i #to get network id from arin.net
do
xidel http://whois.arin.net/rest/ip/$i -e "//table/tbody/tr[3]/td[2] " | sed 's/\/[0-9]\{1,2\}/\n/g'
done | sort | uniq | egrep -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" |
while read j ############## to get other information from ip-tracker.org
do
xidel http://www.ip-tracker.org/locator/ip-lookup.php?ip=$j -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[2]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[3]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[4]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[5]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[6]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[7]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[8]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[9]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[10]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[11]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[12]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[13]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[14]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[15]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[16]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[17]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[18]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[19]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[20]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[21]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[22]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[23]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[24]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[25]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[26]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[27]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[28]" -e "//table/tbody/tr[3]/td[2]/table/tbody/tr[29]"
done > abcd

The first xidel is used to scrape arin and the second xidel is used to scrape this
The output of first xidel is network id. The ip lookup is done based on network id. The output of second xidel is like this
IP Address: 8.8.8.0
[IP Blacklist Check]
Reverse DNS:** server can't find 0.8.8.8.in-addr.arpa: SERVFAIL
Hostname: 8.8.8.0
IP Lookup Location For IP Address: 8.8.8.0
Continent:North America (NA)
Country: United States (US)
Capital:Washington
State:California
City Location:Mountain View
Postal:94040
Area:650
Metro:807
ISP:Level 3 Communications
Organization:Level 3 Communications
AS Number:AS15169 Google Inc.
Time Zone: America/Los_Angeles
Local Time:10:51:40
Timezone GMT offset:-25200
Sunrise / Sunset:06:26 / 19:48
Extra IP Lookup Finder Info for IP Address: 8.8.8.0
Continent Lat/Lon: 46.07305 / -100.546
Country Lat/Lon: 38 / -98
City Lat/Lon: (37.3845) / (-122.0881)
IP Language: English
IP Address Speed:Dialup Internet Speed
[
Check Internet Speed]
IP Currency:United States dollar($) (USD)
IDD Code:+1As of now, it takes 6 hours to complete this task when there are 1.5 million lines in my test file. This is because the script is running serially.
Is there any way I can divide this task so that the script runs in parallel and the time is reduced significantly. Any help with this would be appreciated.
P.S: I am using a VM with 1 processor and 10 GB RAM
| Multi processing / Multi threading in BASH |
ionice can take a process group ID as an argument (the -P switch), which, obviously, affects all processes (and threads) in the given process group. One can find the process group ID by looking at the 5th field of /proc/<PID>/stat (or by using ps). This setting is a bit more coarse than what I really wanted, but it works well enough.
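For example (with 1234 standing in for the main process ID):

pgid=$(ps -o pgid= -p 1234 | tr -d ' ')
ionice -c3 -P "$pgid"

Note that -P needs a reasonably recent util-linux ionice.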
|
I have a program that spawns multiple threads, all of which do fairly intensive IO, running in the background.
How do I change the IO priority of a program and all the threads or processes it has spawned on Linux?
| Set ionice for a multi-threaded application |
The problem was related to SLURM and PBS setting the core affinity based on the number of requested cores.
In SLURM adding the following line enables the use of all cores:
#SBATCH --cpus-per-task=8
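To verify from inside the job that the allocation really covers all the requested cores, something like this in the batch script should do (purely illustrative):

grep Cpus_allowed_list /proc/self/status
taskset -cp $$
|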
I found all processes on my machine to only run on a single core and their core affinity set to 0.
Here is a small python script which reproduces this for me:
import multiprocessing
import numpy as np

def do_a_lot_of_compute(a):
for i in range(1000):
a = a * np.random.randn(123789)
    return a

if __name__ == '__main__':
with multiprocessing.Pool() as pool:
        pool.map(do_a_lot_of_compute, np.arange(10000))

htop looks like this:

And the core affinities are:
pid 15977's current affinity list: 0
pid 15978's current affinity list: 0
pid 15979's current affinity list: 0
pid 15980's current affinity list: 0
pid 15981's current affinity list: 0
pid 15982's current affinity list: 0
pid 15983's current affinity list: 0
pid 15984's current affinity list: 0
pid 15985's current affinity list: 0

So my question boils down to: Why are the core affinities all set to 0? There are no OMP or KMP environment variables set.
| All processes running on the same core |
The solution was the tip that @devnull gave in the comments: execute each function in the background.
# Handle comments in the switch list
egrep -v '(^#|^\s*$|^\s*\t*#)' $LISTA_SWITCHES | while read IP SWNOME SERVER TIPO
do
if [ "$TIPO" = core ]; then
pc6248 &
elif [ "$TIPO" = dep ]; then
pc3548 &
elif [ "$TIPO" = rfs ]; then
rfs6000 &
else
echo "$(date "+%d/%m/%Y-%T") - Switch $SWNOME possui tipo marciano de switch" >> $LOG_FILE
fi
doneNow, after 20 seconds about 50 switches have the backup finished :)
|
I created a bash function to "automagically" connect to our switches and retrieve their startup-config using the expect command. I have to use expect because these switches do not accept the ssh user@host fashion and ask me again for the User and Password tuple.
This is the function that i created to manage those backups
main_pc3548(){
/usr/bin/env expect <<-END3548
spawn ssh -o StrictHostKeyChecking=no -o LogLevel=quiet $IP
expect "User Name:"
send "$USER\r"
expect "Password:"
send "$PASS\r"
expect "*# "
send "copy startup-config tftp://$SERVER/$SWNAME.cfg.bkp\r"
sleep 8
END3548
}

This block of code will separate my switch types, and call main_pc3548() when my switch list has this switch model:
egrep -v '(^#|^\s*$|^\s*\t*#)' $LISTA_SWITCHES | while read IP SWNAME SERVER TIPO
do
if [ "$TIPO" = core ]; then
main_pc6248
elif [ "$TIPO" = dep ]; then
main_pc3548
else
echo "$(date "+%d/%m/%Y-%T") - Switch $SWNOME Have a martian type of switch" >> $LOG_FILE
fi
done

The rest of the script reads a pretty lengthy file with information about the switch IP, the TFTP IP address and the switch name, and waiting 8 seconds for each switch consumes a lot of time.
This sleep is needed so that slow connections don't break the TFTP copy, so here comes my question:
| Multiplex bash function execution |
This is not generally possible without changing the code.
A multi-threaded program will make use of the processors on a single computer. As soon as you want to run the same program across a network of connected machines and have the various instances of the program communicate with each other, the code must do explicit message passing between the multiple copies of the program running on the different machines in the cluster.
There are libraries for doing this. A fairly well known standard is called Message Passing Interface, or MPI for short, and implementations of MPI exist for most free Unices.
If the processing that the program is doing is embarrassingly parallel, meaning multiple copies of the program would be able to process chunks of the input data without communicating with each other, then this may be an easier problem to solve and could possibly be done using GNU parallel.
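A sketch of that idea with GNU Parallel (the machine names and the chunks/ directory are made-up examples; ':' in the server list means "also run jobs locally"):

parallel -S machine1,machine2,: ./my_program {} ::: chunks/*

This assumes the binary and the input chunks are visible on every machine (for example over NFS); otherwise GNU Parallel's --transfer/--return options can copy them back and forth.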
In the end it comes down to what the program is actually doing.
|
I have a C++ program that is multithreaded. I believe the throughput would increase if I could run it on the other computers connected by a switch.
All of them are using the same OS (Ubuntu).
Is there a way I can do it without changing the code?
If I need to change the code what should I look for?
| Run C++ program across computer on network |
The mutex is a red herring -- it is local to the function, and so it's not actually locking anything since there ends up being a separate mutex for each thread. In order to actually lock, you would need to move the mutex variable out of create_image.
However, the writes to the image are independent, so locking isn't actually needed. That is, since each call to create_image is to a separate region, the writes do not overlap. You guarantee the changes will be recorded by joining the threads to wait for their completion.
The problem is actually rand(). From my testing, it has its own internal mutex locking which is causing all the slowdown. Changing from rand() to rand_r(&seed) makes all the difference. The more threads in use, the more expensive the locking becomes (per call), and so you see a slowdown.
Having said that, on my CPU, the creation of the PNG is the dominant cost in this program. Without writing the PNG image, the program runs in under 2s (single thread) and scales nearly linearly with the number of cores used. With writing the PNG image, that time jumps to over 8s, so writing the PNG image is taking much longer than creating the image.
Here is what I came up with:
#include <iostream>
#include <vector>
#include <thread>
#include <mutex>
#include <png++/png.hpp>
#include <time.h>

std::vector<int> bounds(int max, int parts)
{
std::vector<int> interval;
int gap = max / parts;
int left = max % parts;
int nr1 = 0;
int nr2; interval.push_back(nr1);
for (int i = 0; i < parts; i++)
{
nr2 = nr1 + gap;
if (i == parts - 1)
nr2 += left;
nr1 = nr2;
interval.push_back(nr2);
}
return interval;
}

void create_image(png::image<png::rgb_pixel> &image, int start, int end)
{
unsigned int seed = time(NULL);
for (int i = start; i < end; i++)
for (int j = 0; j < image.get_height(); j++)
image[i][j] = png::rgb_pixel(rand_r(&seed) % 256, 0, rand_r(&seed) % 256);
}

int main()
{
png::image<png::rgb_pixel> png_image(6000, 6000); //Creating Image
int parts = 1; //amount of parallel threads
std::vector<int> my_vector = bounds(png_image.get_width(), parts); //interval vector
    std::vector<std::thread> workers; //threads

    time_t start, end;
time(&start); //measuring time
for (int i = 0; i < parts; i++)
{
workers.push_back(std::thread(create_image, std::ref(png_image), my_vector[i], my_vector[i + 1]));
}
for (int i = 0; i < parts; i++)
        workers[i].join();

    png_image.write("test.png");
time(&end);
    std::cout << (end - start) << " seconds\n";

    return 0;
} | I am new to threading , and I wanted to test my newly acquired skills, with a simple task, create an image using multiple threads, the interesting part is that , on a single thread , the program runs faster , than using 4 threads (which is my most efficient, parallel thread runnning capacity I believe ) I have an i3 processor,using ubuntu 17,and my std::thread::hardware_concurrency is 4.
my code :
#include <iostream>
#include <vector>
#include <thread>
#include <mutex>
#include <png++/png.hpp>
#include <time.h>

std::vector<int> bounds(int max, int parts)
{
std::vector<int> interval;
int gap = max / parts;
int left = max % parts;
int nr1 = 0;
int nr2; interval.push_back(nr1);
for (int i = 0; i < parts; i++)
{
nr2 = nr1 + gap;
if (i == parts - 1)
nr2 += left;
nr1 = nr2;
interval.push_back(nr2);
}
return interval;
}

void create_image(png::image<png::rgb_pixel> &image, int start, int end)
{
std::mutex my_mutex;
std::lock_guard<std::mutex> locker(my_mutex);
srand(time(NULL));
for (int i = start; i < end; i++)
for (int j = 0; j < image.get_height(); j++)
image[i][j] = png::rgb_pixel(rand() % 256, 0, rand() % 256);
}

int main()
{
png::image<png::rgb_pixel> png_image(6000, 6000); //Creating Image
int parts = 1; //amount of parallel threads
std::vector<int> my_vector = bounds(png_image.get_width(), parts); //interval vector
    std::vector<std::thread> workers; //threads

    time_t start, end;
time(&start); //measuring time
for (int i = 0; i < parts - 1; i++)
{
workers.push_back(std::thread(create_image, std::ref(png_image), my_vector[i], my_vector[i + 1]));
}
for (int i = 0; i < parts - 1; i++)
        workers[i].join();

    create_image(png_image, my_vector[parts - 1], my_vector[parts]);

    png_image.write("test.png");
time(&end);
    std::cout << (end - start) << " seconds\n";

    return 0;
}

To build this, run g++ file.cpp -o test -lpng -pthread (with png++).
| Why is my program slower, despite using more threads? [closed] |
Because your graph showing the global efficiency already provides the correct answer to your question, I'll try to focus on explanations.
A/ EFFICIENCY \ JOBS PLACEMENT
Theoretically (assuming all CPUs are idle at make launch time, no other task is running, and no job i has already completed when the n-th > i is launched), we may expect CFS to distribute the 8 jobs first to CPUs 0,2,4,6 (no benefit from cache sharing there) and then to CPUs 1,3,5,7 (still no benefit from cache sharing, but because the cache is shared between siblings, lock contention increases, hence a negative impact on global efficiency).
Could this be enough to explain the lack of improvement in global efficiency starting from job 5?
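To check which logical CPUs are hyper-thread siblings of the same physical core on your box (and hence whether jobs 5-8 really land on siblings), something like this helps:

lscpu -e=CPU,CORE
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list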
B/ PAGE FAULTS
As explained by Frédéric Loyer, major page faults are expected at job launch time (due to the necessary read system calls). Your graph shows the increase is almost constant from 5 to 8 jobs.
The significant increase at -j4 on your 4+4 core (corroborated by the significant increase at -j2 on your 2+2 core) appears to me more intriguing.
Could this be a sign that one job's thread gets rescheduled onto some CPU > 4 because of some sudden activity on a CPU <= 4 caused by some other task?
The constant amount of page faults for -j(n>8) would be explained by the fact that all CPUs that can be elected already have the appropriate mapping.

BTW: just to justify my request for misc. mitigations info in the OP's comments, I wanted to first make sure that all of your cores were fully operational. They appear to be.
|
I am running a benchmark to figure out the number of jobs I should allow GNU Make to use in order to have optimal compile time. To do so, I am compiling Glibc with make -j<N> with N an integer from 1 to 17. I did this 35 times per choice of N so far (35*17=595 times in total). I am also running it with GNU Time to determine the time spent and resources used by Make.
When I was analyzing the resulting data, I noticed something a little peculiar. There is a very noticeable spike in the number of major page faults when I reach -j8.

I should also note that 8 is the number of CPU cores (or number of hyper-threads to be more specific) on my computer.
I can also notice the same thing, but less pronounced, in the number of voluntary context switches.

To make sure my data wasn't biased or anything, I ran the test two more times and I still get the same result.
I am running artix linux with linux kernel 5.15.12.
What is the reason behind these spikes?
EDIT: I've done the same experiment again on a 4-core PC, and I can observe the same phenomenon, at the 4 jobs mark this time around.

Also, notice the jump in major page faults at the 2 jobs mark.
EDIT2: @FrédéricLoyer suggested comparing page faults with the efficiency (inverse of the elapsed time). Here is a box plot of exactly that:

We can see that the efficiency is getting better as we go from 1 job to 4 jobs, but it stays basically the same for bigger numbers of jobs.
I should also mention that my system has enough memory so that even with the maximum number of jobs I do not run out of memory. I am also recording the PRSS (peak resident set size); here is a box plot of it:

We can see that the number of jobs doesn't impact memory usage at all.
EDIT3: As MC68020 suggested, here are the plots of the TLBS (Translation Lookaside Buffer Shootdown) values for the 4-core and 8-core systems, respectively:
From man xz:

Memory usage
Especially users of older systems may find the possibility of
very large memory usage annoying. To prevent uncomfortable surprises,
xz has a built-in memory usage limiter, which is disabled by default.
The memory usage limiter can be enabled with the command line option
--memlimit=limit. Often it is more convenient to enable the limiter
by default by setting the environment variable XZ_DEFAULTS.
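So a sketch for your case (the 75% figure is just an example limit; xz also accepts absolute values such as 8GiB, and in multi-threaded mode it scales the thread count down to stay under the limit):

export XZ_DEFAULTS="--memlimit-compress=75%"

or, for a single run:

xz -T0 --memlimit-compress=75% big.tar
|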
I'm trying to compress a large archive with multi-threading enabled, however, my system keeps freezing up and runs out of memory.
OS: Manjaro 21.1.0 Pahvo
Kernel: x86_64 Linux 5.13.1-3-MANJARO
Shell: bash 5.1.9
RAM: 16GB

|swapon|
NAME TYPE SIZE USED PRIO
/swapfile file 32G 0B -2I've tried this with a /swapfile 2x the amount of RAM I have (32GB) but the system would always freeze once >90% of total RAM has been used, and would seem to not make use of the /swapfile.
|xz --info-memory|
Total amount of physical memory (RAM) : 15910 MiB
Memory usage limit for compression: Disabled
Memory usage limit for decompression: DisabledI'm new to using xz so please bear with me, but is there a way to globally enable the memory usage limiter and for the Total amount of physical memory (RAM) to take into account the space made available by /swapfile?
| xz: OOM when compressing 1TB .tar |
Usually I/O is the limit. It does not make sense to have so many threads that they are waiting for I/O.
You might define the optimum ratio so that n CPU cores are working full time and I/O is at 100%. The optimum number of threads is then defined by the ratio of the time it takes to process a file to the time it takes to read the input and write the output.
Examples:

If it takes longer to read and write a file than to process it, then one thread would be enough. It may make sense to have a second thread / process to ensure that there are always I/O requests available. That second thread should run at idle I/O priority, though.
If processing a file takes ten times as long as the I/O for this file, then ten threads would be the optimum. | How many threads should be used to process a million files? How would you justify your answer? This is a question from an OS exam from last year and I'm curious how you guys think. I think that 10,000 threads, with each of them processing 100 files, would be a good ratio.
| Threads to process a million files [closed] |
In Linux there is a scheduler.
Some systems will push work to faster/cooler/more-efficient cores but the default behavior is an ordered stack.
The software you are running needs to take advantage of multiple cores for any benefit to be had, so it may be that your workload can only be split into 32 threads by your choice of software (or configuration).
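You can also rule out explicit pinning by inspecting the affinity of each thread (<pid> is a placeholder for your process ID):

for tid in /proc/<pid>/task/*; do taskset -cp "${tid##*/}"; done

If the affinity list covers all 64 CPUs, the limitation is in the workload itself rather than in CPU pinning.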
|
lscpu gives:
Thread(s) per core: 2
Core(s) per socket: 32When running an intensive 32-threads process, why does htop show almost 100% CPU activity on #1-32, but very little activity on #33-64? Why aren't the process's 32 threads distributed evenly among CPUs #1-64? | Distribution of threads among CPUs? |
There is far too little context here to give a good answer, but for most reasonable contexts the answer is "probably yes". The operating system itself runs many things in parallel on that single core, after all, and you'd be pretty darn annoyed if you had to wait for some web page to finish loading before your mouse pointer would move.
What are the advantages of using threads on a single core? Does it make sense to use multithreading on a single core?
| Two or more threads on single core [closed] |
Doing several copies in parallel is rarely useful: whether the limiting factor is network bandwidth or disk bandwidth, you'll end up with N parallel streams, each going at 1/N times the speed.
On the other hand, when you're copying from or to multiple sources (here B and C), then there is an advantage to doing the copies in parallel if the bottleneck is on the side of B and C (rather than on the common side). So you can try doing the copies in parallel:
rsync -avz david@${FILERS_LOCATION[0]}"${primary_files}" $PRIMARY/ &
rsync -avz david@${FILERS_LOCATION[1]}"${primary_files}" $PRIMARY/ &
wait
Note that the output from the two rsync commands will be intermixed; you may want to send it to separate files.
log_base=$(date +%Y%m%d-%H%M%S-$$)
rsync -avz david@${FILERS_LOCATION[0]}"${primary_files}" $PRIMARY/ >$log_base-B.log &
rsync -avz david@${FILERS_LOCATION[1]}"${primary_files}" $PRIMARY/ >$log_base-C.log &
wait
You're using several SSH connections to the same destination in your script. Establishing an SSH connection has an unavoidable latency. You can save a bit of time by leaving the connection open and reusing it, which is easy thanks to master connections.
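A sketch of the relevant ~/.ssh/config entry (assuming OpenSSH; the host names are taken from your script and the 10-minute persistence value is only an example):
Host machineB machineC
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
With this in place, the ssh call that finds the latest directory and the rsync calls that follow reuse a single SSH session per machine instead of paying the connection setup cost each time.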
|
I am running my shell script on machineA which copies the files from machineB and machineC to machineA.
If the file is not there in machineB, then it should be there in machineC for sure. So I will try to copy file from machineB first, if it is not there in machineB then I will go to machineC to copy the same files.
In machineB and machineC there will be a folder like this YYYYMMDD inside this folder -
/data/pe_t1_snapshotSo whatever date is the latest date in this format YYYYMMDD inside the above folder - I will pick that folder as the full path from where I need to start copying the files -
so suppose if this is the latest date folder 20140317 inside /data/pe_t1_snapshot then this will be the full path for me -
/data/pe_t1_snapshot/20140317from where I need to start copying the files in machineB and machineC. I need to copy around 400 files in machineA from machineB and machineC and each file size is 2.5 GB.
Earlier, I was trying to copy the files one by one in machineA which is really slow. Is there any way, I can copy "three" files at once in machineA using threads in bash shell script?
Below is my shell script which copies the file one by one in machineA from machineB and machineC.
#!/usr/bin/env bash
readonly PRIMARY=/export/home/david/dist/primary
readonly FILERS_LOCATION=(machineB machineC)
readonly MEMORY_MAPPED_LOCATION=/data/pe_t1_snapshot
PRIMARY_PARTITION=(0 548 272 4 544 276 8 556 280 12 552 284 16 256 564 20 260 560 24 264 572) # this will have more file numbers around 200
dir1=$(ssh -o "StrictHostKeyChecking no" david@${FILERS_LOCATION[0]} ls -dt1 "$MEMORY_MAPPED_LOCATION"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] | head -n1)
dir2=$(ssh -o "StrictHostKeyChecking no" david@${FILERS_LOCATION[1]} ls -dt1 "$MEMORY_MAPPED_LOCATION"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] | head -n1)
## Build your list of filenames before the loop.
for n in "${PRIMARY_PARTITION[@]}"
do
primary_files="$primary_files :$dir1"/t1_weekly_1680_"$n"_200003_5.data
done
if [ "$dir1" = "$dir2" ]
then
find "$PRIMARY" -mindepth 1 -delete
rsync -avz david@${FILERS_LOCATION[0]}"${primary_files}" $PRIMARY/ 2>/dev/null
rsync -avz david@${FILERS_LOCATION[1]}"${primary_files}" $PRIMARY/ 2>/dev/null
fi So I am thinking instead of copying one file at a time, why not just copy "three" files at once and as soon these three files are done, I will move to another three files in the list to copy at same time?
I tried opening three putty instances and was copying one file from those three instances at the same time. All the three files were copied in ~50 seconds so that was fast for me. And because of this reason, I am trying to copy three files at once instead of one file at a time.
Is this possible to do? If yes, then can anyone provide an example on this? I just wanted to give a shot and see how this is working out.
@terdon helped me with the above solution but I wanted to try copying three files at once to see how it will behave.
Update:-
Below is the simplified version of the above shell script. It will try to copy files from machineB and machineC into machineA as I am running the below shell script on machineA. It will to try copy file numbers which are present in PRIMARY_PARTITION.
#!/usr/bin/env bash
readonly PRIMARY=/export/home/david/dist/primary
readonly FILERS_LOCATION=(machineB machineC)
readonly MEMORY_MAPPED_LOCATION=/data/pe_t1_snapshot
PRIMARY_PARTITION=(0 548 272 4 544 276 8 556 280 12 552 284 16 256 564 20 260 560 24 264 572) # this will have more file numbers around 200
dir1=/data/pe_t1_snapshot/20140414
dir2=/data/pe_t1_snapshot/20140414
## Build your list of filenames before the loop.
for n in "${PRIMARY_PARTITION[@]}"
do
primary_files="$primary_files :$dir1"/t1_weekly_1680_"$n"_200003_5.data
done
if [ "$dir1" = "$dir2" ]
then
# delete the files first and then copy it.
find "$PRIMARY" -mindepth 1 -delete
rsync -avz david@${FILERS_LOCATION[0]}"${primary_files}" $PRIMARY/
rsync -avz david@${FILERS_LOCATION[1]}"${primary_files}" $PRIMARY/
fi | How to copy three files at once instead of one file at a time in bash shell scripting? |
There are several alternatives here:Add the --prefix=/usr/local to the configure script (assuming this is what PHP uses) or otherwise ensure that your PHP is installed to /usr/local. This would mean that you would have your own build of PHP installed alongside the system one. Since, for example, /usr/local/bin takes precedence over /usr/bin in PATH, your own build will be used in many of the cases (particularly when starting from the command line). By default this shouldn't interfere with the system packages which may have been built against a particular version of PHP and be broken if they try to use libraries provided by your own build. With this approach though you do need to pay some attention to which programs are using which libraries, however it should be possible to do this without problems.
Find a packaged version of PHP that is closer to your needs. If it is a newer version, there are plenty of Ubuntu ppas that offer this (preferably look in official 'backport' repositories first since anyone can create a possibly broken ppa). Note that this may also cause conflicts which force you to install other packages from the ppa (or leave you screwed if they are unavailable).
Download the source package and recompile it. This might be what you need to do if you want custom build options. There are many tutorials throughout the internet for this, but basically you would add the necessary deb-src lines to your sources.list file (or sources.list.d) and download with apt-get source (or just download the necessary files directly from packages.ubuntu.com). Extract the archives with dpkg-source. Then you would tweak the build options within the package's debian directory, build with debuild or dpkg-buildpackage, and install with dpkg -i. You can even do this with a package from a ppa if you also need a newer version (although the same caveats apply). A command sketch of this workflow follows after this list.
Finally, the most difficult option is to create your own package. Starting with one of the above source packages and using a different version of the upstream source than is otherwise available is a good place to start. However, here it is completely up to you to ensure that the package plays nicely with the rest of your system.
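For the rebuild-from-source route (the third option above), a rough sketch, assuming the deb-src lines for your release are enabled and that the source package is called php5 on Ubuntu 12.04:
sudo apt-get update
sudo apt-get build-dep php5
apt-get source php5
cd php5-*/
# adjust the configure flags in debian/rules, then:
dpkg-buildpackage -us -uc
sudo dpkg -i ../php5-*.deb
The -us -uc flags simply skip signing the resulting packages; a php5 package installed this way satisfies the php5 dependency of add-ons such as php5-mcrypt. |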
I'm trying to install PHP from source code on my Ubuntu 12.04 VPS.
I'm installing PHP like this:Download the latest version from the php.net website.
Configure it using the parameters below.
Install any dependencies when necessary. (libxxxxx-dev)
Then do a make
Then a make install
Move the php.ini file and the fpm configuration files in the right placeI'm using these parameters in the configure command:
--enable-intl
--with-openssl
--without-pear
--with-gd
--with-jpeg-dir=/usr
--with-png-dir=/usr
--with-freetype-dir=/usr
--with-freetype
--enable-exif
--enable-zip
--with-zlib
--with-zlib-dir=/usr
--with-mcrypt=/usr
--with-pdo-sqlite
--enable-soap
--enable-xmlreader
--with-xsl
--enable-ftp
--with-curl=/usr
--with-tidy
--with-xmlrpc
--enable-sysvsem
--enable-sysvshm
--enable-shmop
--with-mysql=mysqlnd
--with-mysqli=mysqlnd
--with-pdo-mysql=mysqlnd
--enable-pcntl
--with-readline
--enable-mbstring
--with-curl
--with-pgsql
--with-pdo-pgsql
--with-gettext
--enable-sockets
--with-bz2
--enable-bcmath
--enable-calendar
--with-libdir=lib
--enable-maintainer-zts
--with-gmp
--enable-fpmNow, this goes all well and good. The version works as expected and during the installation I had no trouble. Now the fun stuff comes. Whenever I want to install something like mcrypt (for example) I would normally type apt-get install php5-mcrypt but when I do, and take a look at the dependencies I see that PHP5 is still listed as one and will be installed once I hit y (overwriting my own version).
Now the real core of my question is, how do I let Ubuntu know that I already have PHP5 installed so that it does not attempt to install PHP5 as a dependency anymore. Do I have to change something in the configure process? Do I have to install PHP5 using the apt-get manner first, remove it manually, and install my own version of PHP after.
It is worth mentioning that I need this custom PHP build to make Pthreads work since there isn't a repo that offers a ZTS version of PHP (yet, please make one, somebody?).
| Make Ubuntu acknowledge that a custom built version of PHP is installed |
As it turns out the .jar file I downloaded was single threaded, but Java was using multithreaded garbage collection. To change the number of threads that Java uses for GC, I use java -XX:ParallelGCThreads=2 which fixed the problem.
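For example, to run the jar with only two GC threads (the jar name is just a placeholder):
java -XX:ParallelGCThreads=2 -jar downloaded-app.jar
If you want garbage collection to be strictly single-threaded, the serial collector (-XX:+UseSerialGC) is another option, though it may change pause behaviour.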
|
I have downloaded a .jar file and am using java with it, and it seems multithreaded, which is great ... unless I don't want it to be multithreaded, or unless I want to use only N threads with it.
Is there a way, in java, to specify how many threads you want to run a .jar file with without having access to the source code?
| Limit max thread use for multithreaded java app |
It should not take that long for a single volume unless you tell it to zero the partition and check for bad sectors (and this is the default, at least in my version). It is a good idea to check for bad sectors, but you can skip it with the quick-format option -f:
sudo mkfs.ntfs -f /dev/zd16 -c 8192 |
On my machine mkfs.ntfs is slow and results in massive use of resources, preventing me from using the machine for anything else. According to top it (or rather directly related zvol processes) is using 80-90% of every thread available, even threads that were already in use by other processes (such as virtual machines).
Is this massive resource use by mkfs.ntfs normal? And if so, is there any way to limit the number of threads that mkfs.ntfs uses? I am thinking that if I could limit it to just a few threads/cores, then other processes would have resources so that I can keep working.
Edit with additional info.
I am using Ubuntu 20.04 as my host OS, and the volume I am formatting is a ZFS zvol. This zvol shares a mirrored VDEV with an ext4 partition, off of which I run Kubuntu.
To make the zvol I ran
sudo zfs create -V 400G -o compression=lz4 -o volblocksize=8k -s nvme-tank/ntfs-zvol
After the suggestions in the comments, I tried using nice to de-prioritize the command. It helped a little, but still caused extreme lagginess in the VM I was using.
nice -n19 sudo mkfs.ntfs /dev/zd16 -c 8192
And this is top (screenshot omitted). The zvol processes only occur during the mkfs command, so I assume they are directly related. | massive resource consumption during mkfs.ntfs on a zvol, why (and how can I limit this)?
Multi-tasking systems handle multiple processes and threads regardless of the number of processors or cores installed in the system, and the number of "threads" they handle. Multi-tasking works using time-slicing: the kernel and every running process or thread each get to spend some time running, and then the system switches to the next runnable thread. The switches happen very frequently, which gives the impression everything is running in parallel even when it's not.
All this happens without any change to the APIs etc. Multi-core systems need to be able to run more threads than they physically support anyway, the single-core case is just an instance of that.
Describing a CPU as single-threaded refers to simultaneous multithreading (SMT, or hyper-threading in the Intel world), not the CPU's ability to run multiple threads (or processes, or tasks). Adding SMT features to a CPU doesn't add any instructions to help running threads, it just allows better use of the hardware in some circumstances.
|
The motivation behind this question arises from exploring the Intel Galileo gen2 board, which has a single threaded processor.
I'm looking for a conceptual explanation of what that means for all the userspace applications that rely on the existence of threading.
Does this mean that the kernel needs to be patched so that the system calls for the threading invocation are emulated in software instead of relying on the CPU threading support? | Multithreaded applications on a single threaded CPU? |
It depends.
In general running one software thread per CPU thread will give the best performance. I regularly see speedups of 10% over running one software thread per CPU core - so instead of having one software thread running at 100%, I have two software threads each running at 55%.
But I have also seen better performance running fewer processes than CPU cores if multiple cores share the same cache. This, however, is exceptionally rare.
Normally you should use all the 16 CPU threads, but the only way to know for sure on your system is to measure.
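For a build workload like the one you mention, a rough way to measure is to time the same job at both thread counts (a sketch assuming a make-based build; substitute whatever your build system uses):
make clean && time make -j8
make clean && time make -j16
If the -j16 run finishes faster, the SMT threads are paying for themselves; if it is slower or about the same, cap the build at 8 jobs.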
| Because of hyper-threading, my CPU has 2 logical processors per core. If I understand the premise of hyper-threading correctly, it allows each core to have a separate cache and instruction pointer for 2 separate threads simultaneously, but does not allow for simultaneous execution of 2 threads by a single core. As such, it basically just mitigates the high overhead of thread swapping, meaning that the negative performance impact that occurs from having more threads running than cores to run them is reduced. I would, however, still expect that there would be some overhead involved, and some negative performance impact when running, for example, 16 threads on a system with only 8 physical cores.
It appears that the software running in my system believes I have 16 CPU cores, due to my hyper-threading with 8 physical cores. There is some software, such as certain build systems, that default to using all available cores in order to maximize parallelization. I know that I can specify the number of threads through arguments for the software I am thinking of. Am I correct in thinking that going beyond 8 threads will have no benefit to performance? Am I correct in thinking that going beyond 8 threads will actually impede performance? Should I, therefore, instruct the programs to use no more than 8 threads?
| Does a process filling all logical cores have a negative impact on performance? [closed] |
no such relationship exists, at least directly.
Remember that a nice value is a priority. The scheduling ends up the same whether you have N threads of niceness 0, or N threads of niceness 10, or N threads of niceness -10.
Whether or not a system remains responsive depends on how much time it has to care about slow things, like user inputs. If you have 1000 processes, just each waiting for a network packet or data to be delivered from storage, that will make little difference to your system behaving snazzy for you as human user. They exist, but they are not taking scheduling time away from what needs to happen for that snazziness. No matter whether they are the same, lower or higher niceness than the processes you interact with – what matters is whether they are contending for the same resources.
So, niceness really only plays a minor role in this; it's a tool to define who gets to run next when there are more tasks ready to do work than there are CPU cores. How many tasks are ready to work, and whether they collide with tasks that have real-time (SCHED_RR) guarantees on how often they're activated when ready, is simply a different question.
|
When I "flood" my CPU with 8 high priority (nice=-20) OS threads (the number of cores I have), operation becomes "halty" for obvious reasons, but is still usable.
Note that when I say "high-priority thread" I mean a thread that was spawned by the same high-priority process.
However doing something like 64 threads will completely make my PC unusable. What is the relationship between max priority threads and their distribution between cores? Can we figure out roughly how many threads I need to spawn to completely flood the CPU for a given nice value?
| Relationship between number of cores and ability to run processes with higher nice values? |
That's what Linux has event hooks for, and you can use them with perf
Gathering Statistics
I'd start with something simple:
sudo perf stat -e sched:sched_switch yourprogram
Try this:
busyloop.c
#include <stdint.h>
int main()
{
for (volatile uint_fast64_t i = 0; i < (1ULL<<34); ++i) {
}
}
compile:
gcc -O3 -o busyloop busyloop.c
run:
$ sudo perf stat -e sched:sched_switch ./busyloop

 Performance counter stats for './busyloop':

               134      sched:sched_switch

      26.534402995 seconds time elapsed

      26.496337000 seconds user
       0.000996000 seconds sys

There you have your answer – kind of. That's the number of times scheduling switched. That does include your process entering the kernel by itself, instead of getting interrupted.
To separate the two, count both the context switches and the syscalls, and subtract the syscall count from the switch count:
$ sudo perf stat -e sched:sched_switch,raw_syscalls:sys_enter ./busyloop

 Performance counter stats for './busyloop':

               765      sched:sched_switch
                30      raw_syscalls:sys_enter

      26.528107216 seconds time elapsed

      26.473054000 seconds user
       0.002994000 seconds sys

In this more loaded example run, there were 765 context switches, but only 30 of these were caused by the process doing a syscall by itself.
Live Statistics
What can be done with perf stat can generally also be observed live with the same command line options using perf top
./busyloop &
sudo perf top -e sched:sched_switch,raw_syscalls:sys_enter -p $!
(perf top, perf record, and perf report without any specified events to observe do something very awesome – they observe, in a more-or-less regular sampling, which function your CPU cores are in. This is an excellent first step in optimizing software and systems in real workload situations. But that leads too far here!)
In-Depth Tracing
Should you want something that is veeery detailed, perf sched is a surprisingly mighty tool.
$ # make a recording
$ sudo perf sched record -- ./busyloop
[ perf record: Woken up 75 times to write data ]
[ perf record: Captured and wrote 161,058 MB perf.data (1458691 samples) ]
$ # evaluate the recording
$ sudo perf sched map
*A0 179778.272339 secs A0 => migration/0:17
*. 179778.272347 secs . => swapper:0
. *B0 179778.272413 secs B0 => migration/1:21
. *. 179778.272419 secs
. . *C0 179778.272494 secs C0 => migration/2:26
. . *. 179778.272503 secs
. . . *D0 179778.272576 secs D0 => migration/3:31
. . . *. 179778.272585 secs
. . . . *E0 179778.272659 secs E0 => migration/4:36
. . . . *. 179778.272670 secs
. . . . . *F0 179778.272714 secs F0 => migration/5:41
. . . . . *. 179778.272733 secs
. . . . . . *G0 179778.272815 secs G0 => migration/6:46
. . . . . . *. 179778.272821 secs
. . . . . . *H0 179778.272948 secs H0 => perf-exec:1315372
………
Here, every column is a CPU core, and each row is an event. man perf-sched can explain these in more detail than I could. perf sched latency takes a recording and tells you how much latency the execution of each task experienced in scheduling.
$ sudo perf sched latency | grep busyloop
busyloop:1315372 | 26548.094 ms | 677 | avg: 0.055 ms | max: 0.660 ms | max start: 179800.883053 s | max end: 179800.883713 s
Pretty neat!
|
I wrote a simple program with a thread which runs on a CPU core. It spins kind of aggressively, and it takes 100% of the CPU core. I can see that with top + 1.
After N minutes, I would like to be able to know:
How many times has the kernel preempted (interrupted) my running thread?
| How many times has my process been preempted? |
In the “Age” paragraph a few lines above your quote, the paper gives four references:
Linux has struggled for a decade to fully leverage multi-cores [14, 20, 22, 34].
Those references are, respectively:
Scaling in the Linux Networking Stack (part of the kernel documentation), which describes various techniques to improve networking performance on multiprocessor systems;
Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich. 2010. An Analysis of Linux Scalability to Many Cores. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation (Vancouver, BC, Canada) (OSDI’10). USENIX Association, USA, 1–16.
Chansup Byun, Jeremy Kepner, William Arcand, David Bestor, William Bergeron, Matthew Hubbell, Vijay Gadepally, Michael Houle, Michael Jones, Anne Klein, et al. 2019. Optimizing Xeon Phi for Interactive Data Analysis. In 2019 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 1–6.
Jean-Pierre Lozi, Baptiste Lepers, Justin Funston, Fabien Gaud, Vivien Quéma, and Alexandra Fedorova. 2016. The Linux Scheduler: A Decade of Wasted Cores. In Proceedings of the Eleventh European Conference on Computer Systems (London, United Kingdom) (EuroSys ’16). Association for Computing Machinery, New York, NY, USA, Article 1, 16 pages. https://doi.org/10.1145/2901318.2901326
So the answer to your question about the meaning of the comment seems to be that there have been documented instances of less-than-ideal (to put things mildly) performance in the Linux kernel when run on systems with multiple processors (or cores).
This is a matter of opinion, but I think the main reasons multiprocessor support has proven difficult (in both UNIX and Linux) is that concurrency is difficult for many humans to reason about, and that both systems started out as non-concurrent systems and had support for concurrent architectures added without a major redesign — in Linux, using the infamous Big Kernel Lock at first. I don’t think the question of whether the architecture of UNIX or Linux itself introduces fundamental problems in correctly supporting concurrent platforms has been settled, at least in symmetric multiprocessing systems.
(You might find It's Time for Operating Systems to Rediscover Hardware interesting.)
|
I am reading the following paper. In the paper, the authors argue that Unix/Linux "has struggled for a decade to support multiprocessors in a single node" in the last paragraph of the first page. I don't understand what this sentence actually means. Why is it so hard to support multiprocessors in Unix/Linux? Is it the architecture of Unix/Linux that makes that hard?
| Linux multiprocessors support |
It sounds like you want a work queue. You could populate that queue with the collection of files that need to be processed, with a function to dequeue an item from the queue that does the necessary locking to prevent races between threads. Then start howmany ever threads you want. Each thread will dequeue an item from the queue, process it, then dequeue the next item. When the queue becomes empty, the thread can either block waiting for more input, or if you know there will be no more input, the thread can then terminate.
Here's a simple example:
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

template<typename T>
class ThreadSafeQueue {
public:
void enqueue(const T& element)
{
std::lock_guard<std::mutex> lock(m_mutex);
m_queue.push(element);
}
bool dequeue(T& value)
{
std::lock_guard<std::mutex> lock(m_mutex);
if (m_queue.empty()) {
return false;
}
value = m_queue.front();
m_queue.pop();
return true;
}
private:
std::mutex m_mutex;
std::queue<T> m_queue;
};
static void threadEntry(const int threadNumber, ThreadSafeQueue<std::string>* const queue)
{
std::string filename;
while (queue->dequeue(filename)) {
printf("Thread %d processing file '%s'\n", threadNumber, filename.c_str());
}
}
int main()
{
ThreadSafeQueue<std::string> queue;
// Populate the queue
for (int i = 0; i < 100000; ++i) {
queue.enqueue("filename_" + std::to_string(i) + ".txt");
}
const size_t NUM_THREADS = 4;
// Spin up some threads
std::thread threads[NUM_THREADS];
for (int i = 0; i < NUM_THREADS; ++i) {
threads[i] = std::thread(threadEntry, i, &queue);
}
// Wait for threads to finish
for (int i = 0; i < NUM_THREADS; ++i) {
threads[i].join();
}
return 0;
}
Compile with:
$ g++ example.cpp -pthread
The program defines ThreadSafeQueue -- a queue with internal locking to enable multiple threads to access it concurrently.
The main function begins by populating the queue. It then starts 4 threads. Each thread reads a value from the queue and "processes" it (here, by printing a message to standard output). When the queue is empty, the threads terminate. The main function waits for the threads to terminate before returning.
Note that this design assumes that all elements are populated in the queue before the threads start. With some changes, it could be extended to support handling new work while the threads are running.
|
I have a function that has to process all files in a set of directories (anything between 5-300 files). The number of parallel threads to be used is user-specified (usually 4). The idea is to start the function in 4 separate threads. When one thread returns, I have to start processing the next (5th) file and so on till all files are complete.
On Windows, WaitForMultipleObjects() with bWaitAll=False helps me here. I have a structure that can be populated, and populated into an array
map<UINT, string>::iterator iter = m_FileList.begin();
string outputPath = GetOutputPath();
void ***threadArgs = (void***)malloc(sizeof(void**)*numThreads);
HANDLE *hdl = (HANDLE*)malloc(sizeof(HANDLE)*numThreads);
DWORD *thr = (DWORD*)malloc(sizeof(DWORD)*numThreads);for (int t = 0; iter != m_FileList.end() && t < numThreads; t++, iter++)
{
threadArgs[t] = prepThreadData(t, iter->second, opPath);
printf("main: starting thread :%d %s outputPath: %s\n", t, iter->second.c_str(), threadArgs[t][2]);
hdl[t] = CreateThread(NULL, 0, fileProc, (void*)threadArgs[t], 0, &thr[t]);
if (hdl[t] == NULL)
{
err = GetLastError();
printf("main: thread failed %x %x %s %s\n", err, iter->second.c_str(), threadArgs[t][2]);
}
}for (;iter != m_FileList.end(); iter++)
{
int t = (int)WaitForMultipleObjects(numThreads, hdl, FALSE, INFINITE);
if (t == WAIT_FAILED)
{
err = GetLastError();
printf("main: thread failed %x %x\n", t, err);
}
if (t - WAIT_OBJECT_0 >= 0 && t - WAIT_OBJECT_0 < numThreads)
{
free(threadArgs[t][1]);
free(threadArgs[t][2]);
free(threadArgs[t]);
threadArgs[t] = prepThreadData(t, iter->second, opPath);
printf("main: starting thread :%d %s outputPath: %s\n", t, iter->second.c_str(), threadArgs[t][2]);
hdl[t] = CreateThread(NULL, 0, fileProc, (void*)threadArgs[t], 0, &thr[t]);
if (hdl[t] == NULL)
{
err = GetLastError();
printf("main: thread failed %x %x %s %s\n", err, iter->second.c_str(), threadArgs[t][2]);
}
}
}
if (WAIT_FAILED == WaitForMultipleObjects(numThreads - 1, hdl, TRUE, INFINITE))
{
err = GetLastError();
printf("main: thread failed %x %x\n", err);
}My problem now is to get similar functionality using pthreads. The best I can think of is to use semaphores and when one of them is available, spawn a new thread, and instead of using threadArgs array, I will use just one pointer that is allocated memory for every thread spawn. Also, for ease of memory management, the memory allocated for the threadArgs[t] will then be owned by the spawned thread.
Is there a better solution? Or is there something similar to WaitForMutlipleObjects() with pthreads?
To put it more concretely, if I replace CreateThread() with pthread_create(), what should I replace WaitForMultipleObjects() with?
| Schedule jobs from a queue onto multiple threads |
After some days of testing I've found out the following.
The futexes come from sharing memory buffers between the threads (unfortunately this is unavoidable); the threads run math models at quite high frequencies. The futexes directly impact the execution latency, but not linearly; it depends more on whether the frequency of the data is high.
There is a possibility to avoid some allocations with memory pools or similar, since I know the size of the majority of data. That has a positive effect on execution and CPU load.
The LWPs are cloned with a different PID from the parent PID; this is OK in Linux but it does not work with pthreads. Regarding performance, it degrades, but not significantly because of the LWPs; the shared memory resources create a far bigger problem.
Regarding building the app with jeMalloc, tcMalloc and locklessMalloc: none of those libs gives me a competitive edge. TcMalloc is great if the core count is 4 or higher, jeMalloc is good if the caches should be big. But the results for me were within +/- 1% of the baseline for multiple running scenarios.
Regarding mapping more memory regions into the process that creates a big difference on the overall execution. FillBraden was right on that part, it hits us very hard when the execution starts or when the data streams increase the data amount. We upgraded the behavior there with memory pools.
One test series included running the application with SCHED_RR; this also improved the execution. The thing is, it also grades the threads higher in terms of priority, so that has an impact. The advantage is that I am able to run on just the physical cores, without hyperthreading, very reliably. The reason is the behavior of the application and models: for unknown reasons hyperthreading messes things up quite a bit.
Forking the individual models helps with identifying which threads belong to which model, but it does not give us any advantage in execution speed. This is definitely a trade-off, but it is also a solution for identifying model threads that are running badly and fixing them.
|
I've been experimenting with lightweight processes. Basically calling the clone function and assigning a new PID to the cloned LWP. This works fine it lets me identify all the children threads of those LWP's. The tiny problem that I ran into is performance. It degrades quite a bit ( processing slows down by 30% ). Now I was reading that LWP's can be scheduled and priorities assigned to them (I did not try either). Would this help the performance?
One thing that I noticed when running strace that the Futex usage exploded by the factor of 8-10. Could the LWP's be a major the cause of that? The understandable part is the explosion in context switches, but I thought that LWP share the same memory space, so there should not be an explosion in the Futex usage.
Are there any tricks or best practices that should be followed when using LWP's or deciding to use them?
Would forking be a better or a worse option from the performance standpoint?
| Lightweight processes behavior with an new PID |
The first statement is true, but it is meaningless on Linux or on any other system. That is because most driver and hardware handling is done by interrupts, and it is impossible to do it differently. For example, if a packet arrives from the network, the CPU will know about it through an interrupt initiated by the network card.
But these interrupts are invisible for the user space processes, and it doesn't affect them.
Except in rare, typically embedded scenarios, no Linux system works without interrupt handling.
What can cause a race condition is context switching, i.e. when the kernel takes the CPU away from a process and gives it to another one. This typically happens through an interrupt from the timer. If only a single process runs, or if the whole scheduler is somehow turned off (maybe possible in some embedded environment, but very atypical), then this doesn't happen, and you have a single-process system à la DOS. It is true that there is then no possibility of a race condition, because there is no multitasking.
In a multi-CPU system, if multiple CPUs are concurrently active, there is also the possibility of a race condition, because multiple threads can run concurrently even if no scheduler is active. Note that this scenario, too, is very alien to Linux (or to any non-embedded OS).
The second sentence, "threads are not interrupting each other", is mainly true. Threads are essentially processes using the same address space. Multiple threads typically don't interrupt each other. Maybe they could send signals to each other, but that is all. This statement is independent of the previous one.
| I'm learning about critical section of multithreading. I have a general statement:In a single CPU system, disable interrupt is a solution of race condition.But I also learn from another site thatThreads generally don't interrupt each other.So how can disable interrupt prevents race condition? Can this possible be explained in terms of Linux?
| Do threads interrupt each other in Linux? [closed] |
A Lightweight Process is (in Unix and Unix-like) a process that runs in user space over a single kernel thread and shares its address space and resources with other LWPs of the same user process.
A system call is an invocation of kernel functionality from user space. When a user process performs a system call, the call is handled by the LWP associated with the user process/thread and gets blocked while the call is handled down in the kernel (through the kernel thread associated with the LWP) and when the call is solved, the kernel thread and LWP are free again.
That is why the minimum number of LWPs required in a many to many threading model is the amount of concurrent blocking system calls, because blocking system calls and LWPs are 1:1 related (The LWP cannot do any other task when engaged by a blocking system call from the user thread)
|
A LWP is a data structure placed between user thread and kernel thread, and appears as a virtual processor to user thread library. So, the minimum number of LWP required in many to many model of threading is the number of concurrent blocking system calls.
Please explain why is it so?
| Why an LWP (Light Weight Process) is required for each concurrent system call in many to many thread model? |
On the 3 implementations I surveyed, Darwin, FreeBSD, and Linux, the main thread receives the signal. And if the main thread blocks it with a mask, no thread receives the signal.
|
I recently wrote a "study note" about Unix, and I made following proposition about multi-threaded processes:it will be almost impossible for the kernel to identify the thread that should receive SIGURG, when a TCP packet with "urgent" bit is receivedin the 3rd paragraph of section 1.1, and I'd like to fact check this.
The standard made no provision on this, and left the entire TCP URG flag, MSG_OOB, and SIGURG implementation and protocol -specific.
But what about existing practice? Would the operating system kernel send SIGURG to the threads blocked in a recv(2) call on the socket that received the TCP URG flag? Are there implementations capable of specifying a thread as the owner of a socket? Would there be other behavior?
| Which thread receives SIGURG? |
It is difficult to understand the requirement as specified, so first I will try to show where additional explanation may be needed.
You have tagged this post with "Migration", so I assume these programs already exist and are known to work on some non-Linux architecture. The concepts of inter-process communication and signalling are fairly universal, but stating the architecture and OS the programs presently run on would be very helpful.
I also see the tag "multithreading" although the text only mentions two distinct processes. Does either of the programs actually multithread?
I initially considered that "front" and "back" related to a foreground and background process started by a shell (also tagged). But that is not a true relationship between the processes themselves, only their relationship with their launching mechanism.
I believe you are referring to a "front-end" program that provides a GUI for the entry of parameters, and once these are passed to the "back-end" program it proceeds autonomously. It is also possible that the front-end may need to only be suspended, until the GUI can be used again to provide feedback or results.
A key question is the communication of parameters between the two programs. The methods I am aware of include: shared memory; piped streams (as unnamed pipes, named pipes, or sockets); and shared files. Signals are only suitable for events, not for data flows. The existing mechanism has to be understood so we can proceed.
It is not possible for a program to "take control" of another. There may exist a parent-child relationship (and either your front-end or back-end could be the parent), but the relationship cannot be reversed. The function of the parent is to pre-arrange the communication between parent and child: alternatively, the parent may launch two children (siblings) that can communicate with each other.
Either parent or child can signal the other, and in fact each signal can kill the other process. It is more usual to end the relationship by closing the channels of communication between them, which gives the ending program the capability of a tidy closedown.
If you already have these programs, it would be very helpful to know exactly how they interact at present, so that the closest match of Linux features can be recommended. There is no purpose in my suggesting a model that requires major changes in the existing code, when another approach could be a far better fit.
It is almost certain that Linux can provide the necessary environment for these programs to co-operate. The issue is that we still know nothing about the existing mechanisms that would need to be emulated, or indeed whether you have access to the source of the programs, and what language they are written in.
|
I have the following scenario, I have two programs running one in the background and one in front. The back program is doing some stuff for the front program. once the back program has done the necessary configuration it signals that it has finished the backup support for first program and the now the front program needs to be killed and the back program will take control of first program.
How would I accomplish this scenario in Linux? Any direction or hint is highly appreciated.
| How to switch from one process to another process and kill the first process |
To see the word wrapping style you described, use nano's "soft wrapping": Esc+$.
The Esc+L command you (and everyone) tried does "hard wrapping."
Note on keystroke notation - if you are new to Linux, the notation Esc+$ means press and release Esc and then press $. The full key press sequence then is Esc, Shift+4.
(It does not mean hold down escape while pressing $.)
Source: https://www.nano-editor.org/dist/v2.9/nano.html (search for --softwrap)
Note on softwrap and formatting mistakes - If you are new to nano, be a little careful of softwrap. If you are editing a configuration file or something else that is sensitive to newlines or indents, formatting mistakes can be made. Until you get comfortable with softwrap’s behaviors, I suggest doing a quick check with softwrap off (do the key sequence again) before saving.
Note on the goodness provided by others in their answers below - because different operating systems and different versions of nano do things a little differently:
If you like softwrap on all of the time, set it in your .nanorc, as described in x0a's answer below, as it is a bit more thorough than Prashant's.
If you have a Raspberry Pi, note chainsawmascara's answer about needing an extra keystroke for softwrap to go into effect.
If you have a Mac, like lodeOfCode's answer below, you can always update nano and here, and thus bask in the warm glow of softwrap!
|
When editing an authorised_keys file in Nano, I want to wrap long lines so that I can see the end of the lines (i.e tell whose key it is). Essentially I want it to look like the output of cat authorised_keys
So, I hit Esc + L which is the meta key for enabling long line wrapping on my platform and I see the message to say long line wrapping has been enabled but the lines do not wrap as I expect.
I'm using Terminal on OSX 10.8.5
| Long line wrapping in Nano |
Open the file with nano file.txt.
Now type Ctrl + _ and then Ctrl + V
|
I have some long log files. I can view the last lines with tail -n 50 file.txt, but sometimes I need to edit those last lines.
How do I jump straight to the end of a file when viewing it with nano?
| Nano - jump to end of file |
No, you can't give a running program permissions that it doesn't have when it starts, that would be the security hole known as 'privilege escalation'¹.
Two things you can do:
Save to a temporary file in /tmp or wherever, close the editor, then dump the contents of the temp file into the file you were editing: sudo cp $TMPFILE $FILE. Note that it is not recommended to use mv for this because of the change in file ownership and permissions it is likely to cause; you just want to replace the file content, not the file placeholder itself. (A concrete example follows below.)
Background the editor with Ctrl+z, change the file ownership or permissions so you can write to it, then use fg to get back to the editor and save. Don't forget to fix the permissions!
¹ Some editors are actually able to do this by launching a new process with different permissions and passing the data off to that process for saving. See for example this related question for other solutions in advanced editors that allow writing the file buffer to a process pipe. Nano does not have the ability to launch a new process or pass data to other processes, so it's left out of this party.
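As a concrete example of the first approach (the file names here are hypothetical):
# inside nano, press ^O and save to a path you can write to, e.g. /tmp/interfaces.new
sudo cp /tmp/interfaces.new /etc/network/interfaces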
|
A lot of the time I edit a file with nano, try to save and get a permission error because I forgot to run it as root. Is there some quick way I can become root with sudo from within the editor, without having to re-open and re-edit the file?
| Is it possible to save as root from nano after you've forgotten to start nano with sudo? |
You typed the XOFF character Ctrl-S. In a traditional terminal environment, XOFF would cause the terminal to pause its output until you typed the XON character.
Nano ignores this because Nano is a full-screen editor, and pausing its output is pretty much a nonsensical concept.
As to why the wording is what it is, you'd have to ask the original devs.
|
While trying to save a file out of Nano the other day, I got an error message saying "XOFF ignored, mumble mumble". I have no idea what that's supposed to mean. Any insights?
| What does "XOFF ignored, mumble mumble" error mean? |
Set the EDITOR and VISUAL environment variables to nano.
If you use bash, this is easiest done by editing your ~/.bashrc file and adding the two following lines:
export EDITOR=nano
export VISUAL="$EDITOR"to the bottom of the file. If the file does not exist, you may create it. Note that macOS users should probably modify the ~/.bash_profile file instead, as the abovementioned file is not used by default when starting a bash shell on this system.
If you use some other shell, modify that shell's startup files instead (e.g. ~/.zshrc for zsh).
You should set both variables as some tools use one, and others may use the other.
You will need to restart your terminal to have the changes take effect.
|
I have vim as default editor on my Mac and every time I run commands on Mac terminal, it automatically opens "vim".
How can I set up "nano" instead and make sure the terminal will open "nano" every time is needed?
| How can I set the default editor as nano on my Mac? |
The only thing coming close to what you want is the option to display your current cursor position. You activate it by using the --constantshow option (manpage: "Constantly show the cursor position") or by pressing AltC on an open text file.
|
Is there a way to turn on line numbering for nano?
| Is there line numbering for nano? |
To do this on a CoreOS box, following the hints from the guide here:Boot up the CoreOS box and connect as the core user
Run the /bin/toolbox command to enter the stock Fedora container.
Install any software you need. To install nano in this case, it would be as simple as doing a dnf -y install nano (dnf has replaced yum)
Use nano to edit files. "But wait -- I'm in a container!" Don't worry -- the host's file system is mounted at /media/root when inside the container. So just save a sample text file at /media/root/home/core/test.txt, then exit the container, and finally go list the files in /home/core. Notice your test.txt file?If any part of this is too cryptic or confusing, please ask follow up questions. :-)
In the recent CoreOS 47.83.202103292105-0, the host is placed in /host instead of /media/root.
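Putting the steps together (the last two commands run inside the container shell that toolbox drops you into; on newer releases replace /media/root with /host):
/bin/toolbox
dnf -y install nano
nano /media/root/home/core/test.txt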
|
CoreOS does not include a package manager but my preferred text editor is nano, not vi or vim. Is there any way around this?
gcc is not available so its not possible to compile from source:
core@core-01 ~/nano-2.4.1 $ ./configure
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... no
checking whether make supports nested variables... no
checking for style of include used by make... none
checking for gcc... no
checking for cc... no
checking for cl.exe... no
configure: error: in `/home/core/nano-2.4.1':
configure: error: no acceptable C compiler found in $PATHTo put this in context, I was following this guide when I found I wanted to use nano.
| Is there any way to install Nano on CoreOS? |
Esc3 (or Alt+3) will comment or uncomment the selected lines in recent versions of the nano editor (the version shipped with macOS is too old; install a newer version with e.g. Homebrew). The default comment character used is # (valid in many scripting languages).
The comment character may be modified by the comment option in your ~/.nanorc file. This is from the manual:
comment "string"
Use the given string for commenting and uncommenting lines. If the string contains a vertical bar or pipe character (|), this designates bracket-style comments; for example, "/*|*/" for CSS files. The characters before the pipe are prepended to the line and the characters after the pipe are appended at the end of the line. If no pipe character is present, the full string is prepended; for example, "#" for Python files. If empty double quotes are specified, the comment/uncomment functions are disabled; for example, "" for JSON. The default value is "#".
See also the nanorc(5) manual on your system (man 5 nanorc).
Since it may need to be explained:
There are three ways to select text in nano:
Use EscA (or Alt+A) to start selecting, and then the same combination to stop selecting,
Use Shift and the arrow keys,
In a graphical environment, use Shift and the left mouse button (if nano is started with its -m option). |
I am able to select multiple lines using Esc+A. After this, what shortcut(s) should I use to comment/uncomment the selected lines?
| How to comment multiple lines in nano at once? |
Once you have selected the block, you can indent it using Alt + } (not the key, but whatever key combination is necessary to produce a closing curly bracket).
|
Selecting lines in nano can be achieved using Esc+A. With multiple lines selected, how do I then indent all those lines at once?
| How to indent multiple lines in nano |
I found my own answer and so I'm posting it here, in case it helps someone else.
In the root user's home directory, /root, there was a file alled .selected_editor, which still retained this content:
# Generated by /usr/bin/select-editor
SELECTED_EDITOR="/bin/nano"The content suggests that the command select-editor is used to select a new editor, but at any rate, I removed the file (being in a bad mood and feeling the urge to obliterate something) and was then given the option of selecting the editor again when running crontab -e, at which point I selected vim.basic, and all was fine after that. The new content of the file reflects that selection now:
# Generated by /usr/bin/select-editor
SELECTED_EDITOR="/usr/bin/vim.basic"
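In short, either pick the editor again or remove the stale selection and let crontab prompt again (run as root, as in the question; select-editor is provided by the sensible-utils package on Debian):
select-editor
# or:
rm /root/.selected_editor
crontab -e
After that, crontab -e no longer goes looking for nano. |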
Installed Debian Stretch (9.3). Installed Vim and removed Nano. Vim is selected as the default editor.
Every time I run crontab -e, I get these warnings:
root@franklin:~# crontab -e
no crontab for root - using an empty one
/usr/bin/sensible-editor: 25: /usr/bin/sensible-editor: /bin/nano: not found
/usr/bin/sensible-editor: 28: /usr/bin/sensible-editor: nano: not found
/usr/bin/sensible-editor: 31: /usr/bin/sensible-editor: nano-tiny: not found
No modification madeI've tried reconfiguring the sensible-utils package, but it gives no input (indicating success with whatever it's doing), but the warnings still appear.
root@franklin:~# dpkg-reconfigure sensible-utils
root@franklin:~# Although these warnings don't prevent me from doing anything, I find them quite annoying. How can I get rid of them?
| How to get rid of "nano not found" warnings, without installing nano? |
According to Nano Keyboard Commands, you can do this with AltT:
M-T Cut from the cursor position to the end of the filewhere the M is "alt" (referring to the ESC key). In the documentation, "cut" is another way of saying delete or remove, e.g.,
^K Cut the current line and store it in the cutbuffer |
When using GNU's Nano Editor, is it possible to delete from the actual cursor position to the end of the text file?
My workaround for now: keep pressing CtrlK (the delete-full-line hotkey). But this method is not so comfortable on slow remote connections (telnet, SSH, etc.).
| Nano Editor: Delete to the end of the file |
In nano, to search and replace:
Press Ctrl + \
Enter your search string and hit return
Enter your replacement string and hit return
Press A to replace all instances
To replace tab characters you need to put nano in verbatim mode: Alt+Shift+V. Once in verbatim mode, any character you type will be accepted literally; press the Tab key, then hit return.
References
3.8. Tell me more about this verbatim input stuff!
Nano global search and replace tabs to spaces or spaces to tabs
Is it possible to easily switch between tabs and spaces in nano? |
How can I search and replace horizontal tabs in nano? I've been trying to use [\t] in regex mode, but this only matches every occurrence of the character t.
I've just been using sed 's/\t//g' file, which works fine, but I would still be interested in a nano solution.
| Find and replace "tabs" using search and replace in nano |
If it is not configured for "tiny", nano can display printable characters for tab and space, but it has no special provision for newline.
This is documented in the manual:
set whitespace "string"
Set the two characters used to indicate the presence of tabs and spaces. They must be single-column characters. The default pair for a UTF-8 locale is "»·", and for other locales ">.".and can be enabled/disabled while editing:Whitespace Display Toggle (Meta-P)
toggles whitespace display mode if you have a "whitespace" option in your nanorc. See Nanorc Files, for more info. |
Is there a way to show or toggle non printing characters like newline or tab in nano?
At first let's assume the file is plain ascii.
| How to show non printing characters in nano |
Use neither: enter a filename and press Enter, and the file will be saved with the default Unix line-endings (which is what you want on Linux).
If nano tells you it’s going to use DOS or Mac format (which happens if it loaded a file in DOS or Mac format), i.e. you see
File Name to Write [DOS Format]:
or
File Name to Write [Mac Format]:press AltD or AltM respectively to deselect DOS or Mac format, which effectively selects the default Unix format.
|
Which format (Mac or DOS) should I use on Linux PCs/Clusters?
I know the difference:
DOS format uses "carriage return" (CR or \r) then "line feed" (LF or \n).
Mac format uses "carriage return" (CR or \r)
Unix uses "line feed" (LF or \n)I also know how to select the option:AltM for Mac format
AltD for DOS format
But there is no UNIX format.
Then save the file with Enter. | GNU nano 2: DOS Format or Mac Format on Linux |
The shortcut that toggles tabstospaces is Meta+O (the letter O, not the number 0). (In earlier versions, it was Shift+Alt+Q or Meta+Q.)
You will see the prompt changing to:
[ Conversion of typed tabs to spaces disabled ]
or
[ Conversion of typed tabs to spaces enabled ]
respectively.
Since version 1.3.1, you can also insert a literal tab if you enter Verbatim Input mode with Shift+Alt+V (or Meta+V).
If you then type Tab, nano will insert a literal tab character, irrespective of your .nanorc settings. It will then revert to regular input mode (so you'll have to enter Verbatim Input mode again if you need to type a second literal tab and so on).
You can also add your own Verbatim Input mode shortcut to .nanorc, e.g. Ctrl+T:
#Edit
bind ^T verbatim main |
Normally I want nano to replace tabs with spaces, so I use set tabstospaces in my .nanorc file. Occasionally I'd like to use nano to make a quick edit to makefiles where I need real tab characters.
Is there any way to dynamically toggle tabstospaces? Most of the other options have keys to toggle them, but I can't find something for this. I've also tried using ^I (which by default is bound to the tab function) to insert a tab, but that honors the tabstospaces setting.
My current workaround is to take set tabstospaces out of my .nanorc file and to add shell aliases instead:
alias nanotabs="$(which nano)"
alias nano="$(which nano) --tabstospaces" | Is it possible to easily switch between tabs and spaces in nano? |
The feature wasn't added until version 2.2:
http://www.nano-editor.org/dist/v2.2/TODO
For version 2.2:
Allow nano to work like a pager (read from stdin) [DONE]
and CentOS6 uses nano-2.0.9-7 (http://mirror.centos.org/centos/6/os/x86_64/Packages/)
If you decided you want the latest version, you can download from the upstream site (http://www.nano-editor.org/download.php) and then follow the Fedora guide to build your own RPM. (http://fedoraproject.org/wiki/How_to_create_an_RPM_package)
|
Why does ls | nano - open the editor in Ubuntu but close the editor and save a file to -.save in CentOS?
How can I get nano in CentOS to remain open when reading stdin?
| Piped input to nano |
Check with tools like ps and htop whether this other nano instance is still running. If it's not, there's most likely a hidden dotfile in the same folder which leads nano to believe that the other instance is still running (at least vim works this way, I don't use nano; try ls -lA and look for a file that begins with .server.js or something like that).
|
I am on Ubuntu 15.10 x64. When I am trying to edit server.js file, it is opening a blank nano editor and displaying
"File server.js is being edited (by root with nano 2.4.2, PID xxxx); continue?"
with options - Yes, No, Cancel.
I copied a backup file on this file but still I am getting the same message.
Could you please suggest how to resolve this.
| File server.js is being edited (by root with nano 2.4.2, PID xxxx); continue? |
If you press Ctrl+U immediately after Ctrl+J, the justification is undone. Nano in fact tells you (the ^U shortcut description at the bottom changes from UnCut Text to UnJustify). No, I won't blame you for not noticing that. You can't unjustify if you've typed anything after Ctrl+J. Yes, that's pretty underwhelming (far from a general undo).
|
I enjoy using nano as a respite from my usual GTK-based text editor. I like the simplicity of the interface, and using CTRL-K is the fastest way I know of to edit down long textfiles.
However, I have one major gripe: whenever I justify text using CTRL-J, the editor prints the smug little message Can now UnJustify! -- yet I have not been able to find a way to unjustify text. Pressing M-U (which a Google search could reveal, M-U not being mentioned at all in the program's help files) simply seems to cause a glitch. The keyboard becomes unresponsive. Am I missing something?
| How to 'UnJustify!' text in GNU nano |
Save the following to ~/.nanorc; then Ctrl+] cuts the word to the left of the cursor and Ctrl+\ cuts the word to the right.
This works for me in nano version 2.5:
bind ^] cutwordleft main
bind ^\\ cutwordright main
This works for me in nano version 2.9.3:
bind ^] cutwordleft main
bind ^\ cutwordright main |
How to delete the complete word where cursor is positioned in nano text editor? Or if cursor is on white space, I assume it should delete the next word?
Nano help shows these two functions but they are not bound to any shortcuts:
Cut backward from cursor to word start
Cut forward from cursor to next word start
Those don't appear to be what I'm looking for, but if nothing else is available, I'd like to know how to use them (especially with a shortcut key).
| How to delete current word in nano text editor? |
So apparently there was an easy solution to this. I just needed to update first:
# apt-get update
# apt-get install nano |
I have a docker container running debian linux (it's using the postgres image) that I'd like to install nano on so I can edit files. However, when I try
# apt-get install nano
the output I get is
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nano
I am following this documentation. What step am I missing here?
| Trouble installing nano |
I suspect the intended use-case for rnano (or nano -R) is to provide an editor usable in privileged scenarios, or with untrusted keyboard input. For example, if you want to give someone else an editor using your account — they wouldn’t be able to access other files. Likewise, it would be useful to limit the danger of sudo access to an editor; but in such cases sudoedit should be used instead.
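For illustration only (the file names here are made up, and whether sudoedit consults your environment depends on the sudoers policy): the first two commands are equivalent, and the last shows the sudoedit route that is usually preferable for privileged edits:
rnano /srv/notes.txt                   # restricted nano
nano -R /srv/notes.txt                 # the same thing, spelled out
SUDO_EDITOR=nano sudoedit /etc/hosts   # edits a copy as you, then installs it with root rights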
Restricted mode was added in version 1.3.3, released in June 2004. The initial patch from February 2004 doesn’t give much rationale:
Recently I needed a small text editor that could be secured in such a way that it would only edit a specific file or files, and not allow the user access to the underlying filesystem or a command shell. I chose to use nano, since it was easy to use. However, I had to patch it a bit, since while it has a chroot-like mode, this wasn't sufficiently secure.
David Lawrence Ramsey extended the patch quite a bit before merging it in April 2004.
Other editors have restricted modes, with varying definitions of “restricted”; for example, Vim 5 added a restricted mode in 2002 (invoked using -Z or rvim, same as the original rnano patch) which prevents any access to an external shell.
|
I like nano and use it a lot. A related application is rnano.
From the rnano manual:
DESCRIPTION
rnano runs the nano editor in restricted mode. This allows editing
only the specified file or files, and doesn't allow the user access to
the filesystem nor to a command shell. In restricted mode, nano will:
• not allow suspending;
• not allow saving the current buffer under a different name;
• not allow inserting another file or opening a new buffer;
• not allow appending or prepending to any file;
• not make backup files nor do spell checking.
When should I be using rnano?
| When should "rnano" be used in place of "nano"? |
The system-wide nanorc file is at /etc/nanorc
You can also add a .nanorc file to /etc/skel so all new users have a local nanorc file added to their home folder.
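A sketch of both steps from the shell; the include path is just an example of something you might want system-wide, and writing to /etc needs root:
echo 'include "/usr/local/share/nano/lilypond.nanorc"' | sudo tee -a /etc/nanorc
sudo cp ~/.nanorc /etc/skel/.nanorc    # newly created accounts start with these settings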
|
I've recently started using nano quite a bit for code editing.
I've written custom syntax files, and I can include them in my local ~/.nanorc.
However, I do work across multiple accounts, so I manually have to apply the include to each user's .nanorc.
Is there a system-wide nanorc file I can edit, so the changes take effect for all users?
| Is there a global nanorc? |
It should be pointed out that Mac OS X uses \n a.k.a linefeed (0x0A) now, just like all other *nix systems. Only Mac OS versions 9 and older used \r (CR).
Reference: Wikipedia on newlines.
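If you would rather convert such a file outside nano, a minimal sketch (old-mac.txt is a placeholder name):
file old-mac.txt                        # typically reports "... with CR line terminators"
tr '\r' '\n' < old-mac.txt > unix.txt   # rewrite CR as LF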
|
Obviously there are at least two newline types: DOS and Unix. But does OS X have its own plaintext 'format'?
I opened a text file in nano and was surprised to see: [ Read 26793 lines (Converted from Mac format) ]
What is Mac format, how is it different from a file written with a Unix tool like nano, and why does it need to be converted in order to be read with nano on a Mac?
| Does OS X have its own line format? |
As of 2022, Python 2 is no longer supported. Here is what works for me on ranger 1.9.3 on macOS via Homebrew.
map ef shell [[ -n $TMUX ]] && tmux split-window -h vim %f
or
map ef eval exec('try: from shlex import quote\nexcept ImportError: from pipes import quote\nif "TMUX" in os.environ: fm.run("tmux splitw -h vim " + quote(fm.thisfile.basename))')
It is based on the official ranger wiki with minor tweaks:
For some reason, I don't have the rifle command, so I use vim instead.
Added a check for the $TMUX environment variable, so a new tmux pane is only opened if ranger is already running under a tmux session, as requested in the comment thread.
Note that the first way depends on bash (the [[ part needs tweaking for other shells), and the second way depends on the Python shlex or pipes module.
Historical Info Below
To open the currently selected file in ranger in a new pane (to the right) in an ad-hoc manner, you can first go to ranger's command line (by pressing :) and then type shell tmux splitw -h vim %f followed by the <Enter> key.
Note: the methods below do not work with filenames containing space characters!
To achieve this with some key binding, you can set it in a configuration file of ranger. For ranger 1.6+, key bindings are specified in rc.conf. So in ~/.config/ranger/rc.conf, use something like this:
map ef eval if 'TMUX' in os.environ.keys(): fm.execute_console("shell tmux splitw -h 'vim " + fm.thisfile.basename + "'")
While with ranger 1.4 you need a file ~/.config/ranger/keys.py with the following contents:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Customized key bindings.
import os
from ranger.api.keys import *
map = keymanager.get_context('browser')
@map("ef")
def edit_file_in_new_tmux_pane(arg):
command = "shell tmux splitw -h 'vim " + arg.fm.env.cf.basename + "'"
if 'TMUX' in os.environ.keys(): arg.fm.execute_console(command)With the above setting when you press ef in the ranger's browser, it will open a new tmux pane with vim editing the selected file.
The code is just a demo; it may need more safeguarding, such as checking the file type.
Credit goes to ranger's help manual and $(pythonpkginstalldir)/ranger/defaults/rc.conf ($(pythonpkginstalldir)/ranger/defaults/keys.py for ranger 1.4). They are really helpful.
|
Here we have some amazing tools: tmux, ranger, vim... Wouldn't it be amazing to configure ranger to open files (when they are text-editable) in a new tmux pane? Is that easy, and how is it done?
| Tmux ranger integration: opening text files in new panes |
nano is small. In this case, it limits the choices to the 8 predefined ANSI colors (plus bright/bold) so that it can use the predefined symbols from curses.h (such as COLOR_BLUE) as a guide to naming.
Many terminals support 256 predefined colors; nano can't take advantage of them, but Vim can.
Terminals which allow directly specifying the R/G/B content of a color are an exception rather than a rule—unlike GUIs.
Some terminals (including Xterm, which I maintain) support the escape sequence \e]4;N;#RRGGBB\a to change palette color N to the specified RGB value, and \e[38;2;R;G;Bm to set the foreground color to the closest approximation in the palette of the specified RGB value (use 48 instead of 38 for the background color). However, changing a palette color is not useful for nano, because it is taking advantage of the existing palette, in contrast with (the much larger) Vim, which can do this with an add-on.
On writing the above in December 2015, the most recent release of nano was version 2.4.2 (July 2015). At that point, nano was 23336 lines (7657 statements) of C, which was a small fraction of the 131621 lines of text files (counting the ".po" message files). At the moment (October 2021, six years later), the program is about the same size (fewer lines, more statements), but the other text files have roughly doubled the size of its source tree (253036 lines). It's not exactly "small" any longer (but nowhere near the size of vim). A couple of weeks before releasing nano 5.0 in July 2020, the developer added eight names for entries in xterm's 256-color palette, in src/rcfile.c:
const char hues[COLORCOUNT][8] = { "red", "green", "blue",
                                   "yellow", "cyan", "magenta",
                                   "white", "black", "normal",
                                   "pink", "purple", "mauve",
                                   "lagoon", "mint", "lime",
                                   "peach", "orange", "latte",
                                   "grey", "gray" };
short indices[COLORCOUNT] = { COLOR_RED, COLOR_GREEN, COLOR_BLUE,
                              COLOR_YELLOW, COLOR_CYAN, COLOR_MAGENTA,
                              COLOR_WHITE, COLOR_BLACK, THE_DEFAULT,
                              204, 163, 134, 38, 48, 148, 215, 208, 137,
                              COLOR_BLACK + 8, COLOR_BLACK + 8 };
That doesn't appear to be extensible (but at least it uses ncurses). However, it does not address OP's question because it does not provide a hex or RGB method for configuring nano. In developing ncurses, I created an example which reads the X11 rgb.txt file, as part of making the program display X pixmap files in color. But it also reads and displays using a data file for xterm's 256-color palette. For screenshots, see the discussion of the picsmap program.
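To see what those palette slots look like on your own terminal, a 256-color-capable terminal can print them directly; this little loop is only a demonstration and assumes such a terminal:
for i in 204 163 134 38 48 148 215 208 137; do
  printf '\e[48;5;%dm %3d \e[0m' "$i" "$i"
done; echo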
|
I enabled syntax highlight in nano (PHP), but not happy with the default, I would like for example to have the comments displayed in very light grey.
However, the documentation I found seems to suggest I can only write colors like "yellow", "red" etc.
Is there a way to specify a color by its hex/RGB code?
Is there a limitation in the number of colors bash/nano and so on can display? Obviously I am not very experienced with the Linux world.
Same question for VIM, I might switch to VIM if that is not possible.
| Can I/how to specify colors in hex or RGB in nano syntax highlight config? |
I'm pretty sure you cannot do that in nano. The closest you could get would be line wrapping, precisely "soft wrapping": Esc+$.
This will wrap lines so you could see them all on the screen.
Source: https://www.nano-editor.org/dist/v2.9/nano.html (search for --softwrap)
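To try it without touching any configuration, you can turn soft wrapping on from the command line, or make it the default; a minimal sketch (longlines.txt is a placeholder):
nano --softwrap longlines.txt
echo 'set softwrap' >> ~/.nanorc    # make soft wrapping the default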
You could get this kind of behaviour with vim, though, the editor is more configurable. See: https://ddrscott.github.io/blog/2016/sidescroll/
|
I am using nano on files with long lines.
How could I scroll the nano window horizontally?
Ex:
┌────────────┐
|Lorem ipsum |dolor sit amet, consectetur adipiscing elit,
|sed do eiusm|od tempor incididunt ut labore et dolore magna aliqua.
|Dolor sed vi|verra ipsum nunc aliquet bibendum enim.
|In massa tem|por nec feugiat. Nunc aliquet bibendum enim facilisis gravida.
└────────────┘I would like scrolling right side:
┌────────────┐
Lorem ipsum |dolor sit am|et, consectetur adipiscing elit,
sed do eiusm|od tempor in|cididunt ut labore et dolore magna aliqua.
Dolor sed vi|verra ipsum |nunc aliquet bibendum enim.
In massa tem|por nec feug|iat. Nunc aliquet bibendum enim facilisis gravida.
└────────────┘ | Horizontal scrolling in nano editor? |
As other answers have already explained, Ctrl+C doesn't kill Nano because the input of nano is still coming from the terminal, and the terminal is still nano's controlling terminal, so Nano is putting the terminal in raw mode where control characters such as Ctrl+C are transmitted to the program and not intercepted by the terminal to generate signals.
When intercepted by the terminal, Ctrl+C generates a SIGINT signal. If you know the process ID of nano (you can find out with ps u -C nano (Linux ps syntax) or pgrep nano or other process listing utility), you can send this signal with kill -INT 12345 where 12345 is the PID. However, SIGINT conventionally means “return to main loop”, and Nano doesn't exit when it receives SIGINT. Instead, send SIGTERM, which means “terminate gracefully”; this is the default signal, so you can just run kill 12345. Another possibility is kill -HUP 12345; SIGHUP means “you no longer have a terminal, quit gracefully unless you can live without”. If all else fails, send SIGKILL (kill -KILL 12345, or famously kill -9 12345), which kills the program whether it wants to die or not.
Many programs, including Nano, recognize Ctrl+Z to suspend. This is the same sequence that sends the SIGTSTP signal. If the program recognizes this control key, you get back a shell prompt, and since the program becomes a background job, you can easily kill it with kill %% (which sends a signal to the job that has last been put into the background).
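Putting that together, a typical rescue sequence from either the stuck terminal or another one might look like this (escalating only as far as needed):
pgrep -a nano              # from another terminal: list nano processes and their PIDs
kill "$(pgrep -n nano)"    # SIGTERM the most recently started one
kill %%                    # or, after Ctrl+Z has given you a prompt back: SIGTERM the job
kill -9 %%                 # last resort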
With Nano, there is an alternate way: send it its exit key sequence, i.e. Ctrl+X followed if necessary by N for “don't save”. But as a general matter, remember this:
Try Ctrl+Z followed by kill %%, and if this doesn't kill the program, kill -9 %%.
If Ctrl+Z didn't work, switch to another terminal, find out the process ID (you can use ps -t pts/42 to list the processes running on the terminal /dev/pts/42) and kill it. |
In my attempt to get unique entries (read lines) out of a simple text file, I accidentally executed nano SomeTextFile | uniq.
This "instruction" renders the shell (bash) completely (?) unresponsive/non-usable -- tested within from Yakuake and Konsole. I had to retrieve the process id (PID) (by executing ps aux | grep nano) and manually sudo kill -9 the PID in question.
Anyhow, couldn't (or shouldn't?) the above return some error message? Why doesn't Ctrl+C kill this pipeline? Is there an easier or cleaner way to stop it than kill -9?
| Accidental `nano SomeFile | uniq` renders the shell unresponsive |
You can enable this for all filetypes which don't already have syntax highlighting defined by adding the following lines to .nanorc:
syntax "default"
color ,green "[[:space:]]+$"syntax "default" sets the subsequent definitions for default syntax highlighting (i.e., where a filetype hasn't already been matched by some other highlighting definition). color ,green "[[:space:]]+$" sets the background colour to green for the regex [[:space:]]+$ - all whitespace at the end of the line. (The colour definition is <foreground>,<background> - but whitespace can't show a foreground colour.)
|
I use nano as my standard editor for a file type it has no built-in syntax highlighting for: LilyPond. It is nothing I really need, though I keep missing quite a lot of whitespace characters at the end of lines. Sure, I could batch-remove them as mentioned in Strip trailing whitespace from files.
But it should not be too hard to somehow enable this feature, so I could write slightly cleaner code from scratch.
Anyone knows how to do that?
Just as nano does for shell-scripts. | Nano - highlight trailing whitespaces |
Command-line options:
nano -J 80 file
nano --guidestripe 80 fileOr add this to ~/.nanorc:
set guidestripe 80
That information is to be found in the manual under section 3. Note that this feature is absent from versions of nano older than 4.0.
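Since the option only exists in nano 4.0 and later, it is worth checking your version and trying it once before committing it to your config; a small sketch, where file stands for whatever you are editing:
nano --version | head -n 1
nano -J 80 file    # one-off test; add "set guidestripe 80" to ~/.nanorc to keep it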
|
How can I highlight a given column using nano?
I'm using a fairly large terminal but I would like a mark to know if my code exceeds the limit of let's say 80 characters.
| Line length marker in nano |
From the tmux FAQ:
******************************************************************************
* PLEASE NOTE: most display problems are due to incorrect TERM! Before *
* reporting problems make SURE that TERM settings are correct inside and *
* outside tmux. *
* *
* Inside tmux TERM must be "screen" or similar (such as "screen-256color"). *
* Don't bother reporting problems where it isn't! *
* *
* Outside, it must match your terminal: particularly, use "rxvt" for rxvt *
* and derivatives. *
******************************************************************************
http://tmux.git.sourceforge.net/git/gitweb.cgi?p=tmux/tmux;a=blob;f=FAQ
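A quick way to verify this on your own setup; the show-options call needs a running tmux server, and the suggested conf line is only that, a suggestion:
echo "outside: $TERM"                    # run outside tmux; should match your terminal, e.g. xterm-256color
tmux show-options -g default-terminal    # ideally reports screen or screen-256color
# in ~/.tmux.conf: set -g default-terminal "screen-256color"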
|
The problem:
I open a terminal (in Linux Mint, so mate-terminal)
zsh is the shell
Then I run tmux
Edit a file with nano
Scroll up and down that file with the cursor
Issue: When scrolling down in nano, only the bottom half of the terminal window gets refreshed
Issue: When scrolling up in nano, only the top half of the terminal window gets refreshed
The complete nano view of the file does not get refreshed in my terminal window when scrolling. Any tips?
Edit: my .tmux.conf
It seems that this line specifically is the culprit (as commenting it out fixes the problem):
set -g default-terminal "xterm-256color"I'm pretty sure I added that line because I have issues even running nano during an SSH session.
Here is the full file:
set-option -g default-shell /bin/zsh
# Make sure tmux knows we're using 256 colours, for
# correct colourised output
set -g default-terminal "xterm-256color"
# The following were marked as "unknown", so
# I don't know what I'm doing wrong.
#set -g mode-mouse on
#setw -g mouse-select-window on
#setw -g mouse-select-pane on
# Attempting to stop "alert" sound upon startup
# but none of these are working...
set-option bell-on-alert off
set-option bell-action none
set-option visual-bell off | Fixing scrolling in nano running in tmux in mate-terminal |
With those syntax-highlighting rules files, nano assumes that filenames ending in .1 - .9 are man pages.
It's been quite a while since I edited a man page, but I'm pretty sure that in groff -man, .I is for italic and .B is for bold.
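You can see those macros in action without writing a full man page; this throwaway pipeline assumes groff is installed:
printf '.TH DEMO 1\n.SH NAME\n.B bold words\n.I italic words\n' | groff -man -Tutf8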
|
Typically when I'm editing a small file over SSH I'll just open up nano. I look at my apache2 access.log a good bit. Since I don't have fail2ban or anything enabled on this box, I typically look at access.log.1 as well. I've noticed that in my access.log.(#) one particular line always has odd highlighting:
GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 516 "-" "-"
More things I've noticed:
This only happens if there is a .B or a .I in the line, and every letter after that on the line is red
This only happens in files that end in a number; it occurs in access.log.1, but not access.log. Same for any test.log.1
This only works for filename.1 through filename.9
From the line syntax "man" "\.[1-9]x?$" I get that files .1 through .9 are highlighted.. but why?Apparently the files in /usr/share/nano handle syntax highlighting, After a bit of digging, I found out that one file in particular is responsible for this: man.nanorc. Here are the contents of it:
## Here is an example for manpages.
##
syntax "man" "\.[1-9]x?$"
color green "\.(S|T)H.*$"
color brightgreen "\.(S|T)H" "\.TP"
color brightred "\.(BR?|I[PR]?).*$"
color brightblue "\.(BR?|I[PR]?|PP)"
color brightwhite "\\f[BIPR]"
color yellow "\.(br|DS|RS|RE|PD)"For files such as wp-config.php on a wordpress site, nano does syntax highlighting correctly. What is so special about .I and .B that makes the first character blue and the rest red, and what does this have to do with the .1?
| Why does nano sometimes show colors over SSH? |
You can try the following:
Edit your .nanorc file
Add the line: set tempfile
Now, after you finish editing your file, just press Ctrl + X; nano will then quit and automatically save your file.
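For instance, you could append it from the shell. Note that in newer releases (nano 5.0 and later, if I recall correctly) the setting was renamed to saveonexit, so check your version first:
nano --version | head -n 1
echo 'set tempfile' >> ~/.nanorc    # on newer nano use: set saveonexit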
|
I use nano as my favorite text editor.
I was able to save my documents by pressing F3 + Enter.
But is there a way to save the document directly by pressing some key, if I'm sure I would like to save the document to the same name as before?
| Is it possible to save text in nano with one keypress