Put \j in your prompt. From the bash manual:
\j
The number of jobs currently managed by the shell
Just remember that prompts do go stale and jobs can finish at any time, so if you have left the terminal idle, you'll want to redisplay the prompt. At the cost of requiring an extra process just to print your prompt, you can make the \j only appear if any jobs exist.
PROMPT_COMMAND='hasjobs=$(jobs -p)'
PS1='${hasjobs:+\j }\$ ' |
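With that in place, a hypothetical session looks like this (the PID is made up); the job count appears only while jobs exist:
$ sleep 100 &
[1] 4242
1 $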
Is it possible to customise the bash prompt to show if there are any background jobs? I find it easy to forget that there are background jobs.
Say the prompt was...
$
Is there a way to make it show the number of background jobs? For example, if there were two background jobs sent to the background using CTRL+Z, the prompt would be...
2 $ | Is it possible to customise the prompt to show if there are any background jobs? |
If the terminal you launched the command from is still open, you can get it back by running fg.
If it is not, identify the process ID by running ps aux | grep yum or just pgrep yum and then use kill PID. Or, if you know you only have one yum instance, run pkill yum.
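A hypothetical session tying those together (the PID is made up):
$ fg            # if the launching terminal is still open; then press Ctrl+C to interrupt
$ pgrep yum     # otherwise, find the process ID from another terminal
12345
$ kill 12345    # or, if it is the only yum instance: pkill yum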
|
I would like to know how to stop a running process that was started with & appended.
For example, I would like to install software foo. Now, assume foo has many dependencies, so it takes an hour to finish. So, I do: yum install foo &. But I would like to stop that on-going process, either by bringing it to the foreground (the actual premise of my question) so I can interrupt it, or through other methods if necessary.
Ctrl+C does not seem to stop this.
| How to return a background task to be in the foreground? |
Eric Blake answered on the bash-bugs mailing list:
jobs is an interesting builtin - the set of jobs in a parent shell is
DIFFERENT than the set of jobs in a subshell. Bash normally creates a
subshell in order to do a pipeline, and since there are no jobs in that
subshell, the hidden execution of jobs has nothing to report.
Bash has code to special-case jobs | when it can obviously tell that
you are running the jobs builtin as the sole command of the left side of
a pipe, to instead report about the jobs of the parent shell, but that
special-case code cannot kick in if you hide the execution of jobs,
whether by hiding it inside a function as you did, or by other means
such as:
eval jobs | grep vim |
I can grep the output of jobs, and I can grep the output of a function. But why can't I grep the output of jobs when it's in a function?
$ # yes, i can grep jobs
$ jobs
[1]+ Running vim
[2]+ Stopped matlab
$ jobs | grep vim
[1]+ Running vim
$ # yes, of course i can grep a function
$ type mockjobs
mockjobs is a function
mockjobs ()
{
echo '[1]+ Running vim banjo'
}
$ mockjobs | grep vim
[1]+ Running vim banjo
$ # now put those two together and surely I can grep???
$ type realjobs
realjobs is a function
realjobs ()
{
jobs
}
$ realjobs | grep vim
$ # Nope, WTF?
$ bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
$ # funny though, redirection works just fine:
$ tmpfile=$(mktemp); realjobs > $tmpfile; grep vim $tmpfile; rm $tmpfile
[1]+ Running vim
I'm not seeing a bug in the bash list, but maybe I missed it? There's reference to an issue in Bash 2.02 when jobs is part of a pipeline, but nothing recent and in a function that I can find.
What am I missing here?
| Cannot grep jobs list when jobs called in a function |
Example:
$ sleep 300 &
[1] 16012
$ echo %1
%1
$ jobs -x echo %1
16012
Here %1 is the jobspec. The number was taken from [1] shown just after the job was created. Without jobs -x, %1 is just a string, so echo %1 prints the string. jobs -x replaces the jobspec with the corresponding process group ID before running echo.
If you want to do something to the process group, jobs -x is a way to substitute the right ID.
My tests indicate that if the job doesn't exist then %1 will stay literal and the command will be executed anyway. I have found no way to make jobs -x fail by itself in such case, so I think the command (or the whole script) should be ready to handle it somehow (e.g. to fail without making any damage).
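As a more concrete (hypothetical) illustration, you can hand the substituted ID straight to another tool, for example ps; the output shown here is illustrative:
$ sleep 300 &
[1] 16012
$ jobs -x ps -o pid,pgid,stat,cmd -p %1
  PID  PGID STAT CMD
16012 16012 S    sleep 300
Here ps never sees %1; it receives the process group ID that jobs -x substituted for it.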
|
When I look at the jobs -x explanation, it says:
If the -x option is supplied, jobs replaces any jobspec found in command or arguments with the corresponding process group ID, and executes command, passing it arguments, returning its exit status.
Without an example, this explanation is extremely inadequate. If possible, can you give an example of the "x" option usage in real life? When do we need these options?
| Usage example of "x" option of "jobs" command in bash |
In general, it isn't possible for a shell to “adopt” a job. For a shell, a job means a few things:
Associate a job identifier with a process ID.
Display its status (running, suspended, dead).
Notify the user when the status changes.
Send a SIGHUP signal when the terminal goes away.
Control whether the job “owns” the terminal (whether it's the foreground process group).
Most of these don't require any special association between the shell and the job, but there are a few that do:
In order to notify the user when the status changes, the shell relies on receiving a SIGCLD signal. This signal is sent by the kernel to the parent process of the job's initial process.
In order to control access to the terminal, the shell needs to be in the same session as the job.
It isn't possible for a process to adopt another process, nor to attach a process to an existing session. So in order to support adoption, the shell would have to manage a partial status. The usual shells haven't implemented this.
In the special case where the job is one that was initially started by that shell and then disowned, these problems do not arise. But none of the usual shells have implemented an exception to the exception that lets them add a job if they had initially started it.
In your scenario, it would usually be most convenient to start the job in a screen or tmux session, and to use the PID if you want to suspend or resume it.
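A minimal sketch of that last suggestion (the PID is hypothetical, e.g. found with pgrep):
kill -STOP 12345    # suspend the process
kill -CONT 12345    # resume it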
|
If I disown a job in bash and change my mind, can I undo it somehow ?
If so, taking it further, is it possible to bring an arbitrary process under job control (an acceptable restriction would be that I own the process) ?
And, finally, would this make the below workflow possible:
put job in background (^z followed by bg)
get its pid (jobs -p %+ to get its <pid>)
open a screen or tmux session, attach to it and use it for the following steps
use reptyr to grab the job (reptyr <pid>)
add <pid> to shell's job control (is this possible at all?)
stop the job (^Z)
I can do everything except the last two steps. Stopping the job with ^Z doesn't work because the process isn't in the shell's job list.
My research indicates that it isn't possible to own (reverse of disown) a pid but hopefully I am wrong...
| Is it possible to add a process to the job list in bash (e.g. to reverse "disown")? |
Shell jobs don't belong directly to the user; there is no global, per-user list of jobs. A job can be a process that belongs to the user, and you can find all processes belonging to the user. But each job, as a job, belongs to some shell process: the shell keeps a list and tracks its jobs. If the shell process terminates, the job process may survive, but it's only "historically" a job, because now there is no list of jobs that contains this process.
When you disconnect, the shell process terminates. When you connect again, a new shell process is created for you. The new process knows nothing about jobs of any other shell process (still running or terminated). There is no mechanism that allows the new shell to adopt jobs of another shell.
Shells inside tmux or screen can survive your disconnection. When you connect again, you regain access to exactly the same shells. Each will remember its own jobs, as if nothing happened, because from their point of view nothing has happened.
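A minimal tmux sketch of that workflow (the session name is arbitrary):
tmux new -s work        # start a shell that survives disconnection
# start your job inside it, then detach with Ctrl+B d
tmux attach -t work     # after reconnecting over ssh, the shell and its jobs are still there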
|
I have a job that I put in the background via Ctrl+Z and bg, and after reconnecting over ssh I cannot find that job with the jobs command, but I can find it with ps | grep. From searching, I gather that tmux may be a better solution; however, I am still wondering why exiting ssh loses jobs from the jobs command. I've put the job in the background, so it should still exist after reconnecting, right?
How can I manage jobs after I disconnect from my tty/ssh session?
| Why are jobs lost after reconnecting ssh? |
Your background job continues executing until someone tells it to stop by sending it a signal. There are several ways it might die:When the terminal goes away for any reason, it sends a HUP signal (“hangup”, as in modem hangup) to the shell running inside it (more precisely, to the controlling process) and to the process in the foreground process group. A program running in the background is thus not affected, but…
When the shell receives that HUP signal, it propagates it to the background jobs. So if the background process is not ignoring the signal, it dies at this point.
If the program tries to read or write from the terminal after it's gone away,
the read or write will fail with an input/output error (EIO). The program may then decide to exit.
You (or your system administrator), of course, may decide to kill the program at any time.
If your concern is to keep the program running, then:
If the program may interact with the terminal, use Screen or Tmux to run the program in a virtual terminal that you can disconnect from and reconnect to at will.
If the program just needs to keep running and is not interactive, start it with the nohup command (nohup myprogram --option somearg), which ensures that the shell won't send it a SIGHUP, redirects standard input to /dev/null and redirects standard output and standard error to a file called nohup.out.
If you've already started the program and don't want it to die when you close your terminal, run the disown built-in, if your shell has one. If it doesn't, you can avoid the shell's propagation of SIGHUP by killing the shell with extreme prejudice (kill -KILL $$ from that shell, which bypasses any exit trigger that the indicated process has).
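For example (a sketch; myprogram is a placeholder name):
$ myprogram &
[1] 4242
$ disown %1      # bash forgets the job, so it won't be sent SIGHUP when the shell exits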
If you've already started the program and would like to reattach it to another terminal, there are ways, but they're not 100% reliable. See How can I disown a running process and associate it to a new screen shell? and linked questions. |
From gnome-terminal I know the ability to suspend a job with C-z, and then send it to the background. When I close the terminal the process does not end. Where is the job being managed from, or is it lost?
| Where do background jobs go? |
This kills the background process before the script exits:
trap '[ "$pid" ] && kill "$pid"' EXIT

function repeat {
while :; do
echo repeating; sleep 1
done
}
repeat &
pid=$!
echo running once
How it works
trap '[ "$pid" ] && kill "$pid"' EXIT
This creates a trap. Whenever the script is about to exit, the commands in single-quotes will be run. That command checks to see if the shell variable pid has been assigned a non-empty value. If it has, then the process associated with pid is killed.
pid=$!
This saves the process id of the preceding background command (repeat &) in the shell variable pid.
Improvement
As Patrick points out in the comments, there is a chance that the script could be killed after the background process starts but before the pid variable is set. We can handle that case with this code:
my_exit() {
[ "$racing" ] && pid=$!
[ "$pid" ] && kill "$pid"
}
trap my_exit EXIT

function repeat {
while :; do
echo repeating; sleep 1
done
}

racing=Y
repeat &
pid=$!
racing=

echo running once |
If I have a Bash script like:
function repeat {
while :; do
echo repeating; sleep 1
done
}
repeat &
echo running once
running once is printed once but repeat's fork lives forever, printing endlessly.
How should I prevent repeat from continuing to run after the script which created it has exited?
I thought maybe explicitly instantiating a new bash -c interpreter would force it to exit as its parent has disappeared, but I guess orphaned processes are adopted by init or PID 1.
Testing this using another file:
# repeat.bash
while :; do echo repeating; sleep 1; done

# fork.bash
bash -c "./repeat.bash & echo an exiting command"
Running ./fork.bash still causes repeat.bash to continue to run in the background forever.
The simple and lazy solution is to add the line to fork.bash:
pkill repeat.bash
But you had better not have another important process with that name, or it will also be obliterated.
I wonder if there is a better or accepted way to handle background jobs in forked shells that should exit when the script (or process) that created them has exited?
If there is no better way than blindly pkilling all processes with the same name, how should a repeating job that runs alongside something like a webserver be handled to exit? I want to avoid a cron job because the script is in a git repository, and the code should be self-contained without changing system files in /etc/. | Prevent a shell fork from living longer than its initiator? |
Until I started answering this question, I hadn’t realised that using the & control operator to run a job in the background starts a subshell. Subshells are created when commands are wrapped in parentheses or form part of a pipeline (each command in a pipeline is executed in its own subshell).
The Lists of Commands section of the Bash manual (thanks jimmij) states:
If a command is terminated by the control operator ‘&’, the shell executes
the command asynchronously in a subshell. This is known as executing the
command in the background. The shell does not wait for the command to
finish, and the return status is 0 (true).
As I understand it, when you run sleep 10 & the shell forks to create a new child process (a copy of itself) and then immediately execs to replace this child process with code from the external command (sleep). This is similar to what happens when a command is run as normal (in the foreground). See the Fork–exec Wikipedia article for a short overview of this mechanism.
I couldn't understand why Bash would run backgrounded commands in a subshell, but it makes sense if you also want shell builtins such as exit or echo to be able to run in the background (not just external commands).
When it’s a shell builtin that’s being run in the background, the fork happens (resulting in a subshell) without an exec call to replace itself with an external command. Running the following commands shows that when the echo command is wrapped in curly braces and run in the background (with the &), a subshell is indeed created:
$ { echo $BASH_SUBSHELL $BASHPID; }
0 21516
$ { echo $BASH_SUBSHELL $BASHPID; } &
[1] 22064
$ 1 22064
In the above example, I wrapped the echo command in curly braces to avoid BASH_SUBSHELL being expanded by the current shell; curly braces are used to group commands together without using a subshell. The second version of the command (ending with the & control operator) clearly demonstrates that terminating the command with the ampersand has resulted in a subshell (with a new PID) being created to execute the echo builtin. (I’m probably simplifying the shell’s behaviour here. See mikeserv’s comment.)
I would never have thought of running exit & and had I not read your question, I would have expected the current shell to quit. Knowing now that such commands are run in a subshell, your explanation that it’s the subshell which exits makes sense.
“Why is subshell created by background control operator (&) not displayed under pstree”
As mentioned above, when you run sleep 10 &, Bash forks itself to create the subshell but since sleep is an external command, it calls the exec() system call which immediately replaces the Bash code and data in the child process with a running copy of the sleep program. By the time you run pstree, the exec call will already have completed and the child process will now have the name “sleep”.
While away from my computer, I tried to think of a way of keeping the subshell running long enough for the subshell to be displayed by pstree. I figured we could run the command through the time builtin:
$ time sleep 11 &
[2] 4502
$ pstree -p 26793
bash(26793)─┬─bash(4502)───sleep(4503)
└─pstree(4504)
Here, the Bash shell (26793) forks to create a subshell (4502) in order to execute the command in the background. This subshell runs its own time builtin command which, in turn, forks (to create a new process with PID 4503) and execs to run the external sleep command.
Using named pipes, jimmij came up with a clever way to keep the subshell created to run exit alive long enough for it to be displayed by pstree:
$ mkfifo file
$ exit <file &
[2] 6413
$ pstree -p 26793
bash(26793)─┬─bash(6413)
└─pstree(6414)
$ echo > file
$ jobs
[2]- Done exit < file
Redirecting stdin from a named pipe is clever as it causes the subshell to block until it receives input from the named pipe. Later, redirecting the output of echo (without any arguments) writes a newline character to the named pipe which unblocks the subshell process which, in turn, runs the exit builtin command.
Similarly, for the sleep command:
$ mkfifo named_pipe
$ sleep 11 < named_pipe &
[1] 6600
$ pstree -p 26793
bash(26793)─┬─bash(6600)
└─pstree(6603)
Here we see that the subshell created to run the command in the background has a PID of 6600. Next, we unblock the process by writing a newline character to the pipe:
$ echo > named_pipe
The subshell then execs to run the sleep command.
$ pstree -p 26793
bash(26793)─┬─pstree(6607)
└─sleep(6600)
After the exec() call, we can see that the child process (6600) is now running the sleep program.
|
I understand that when I run exit it terminates my current shell, because the exit command runs in the same shell. I also understand that when I run exit & the original shell will not terminate, because & ensures that the command is run in a sub-shell, so exit terminates that sub-shell and returns to the original shell. But what I do not understand is why commands with and without & look exactly the same under pstree, in this case sleep 10 and sleep 10 &. 4669 is the PID of the bash under which first sleep 10 and then sleep 10 & were issued, and the following output was obtained from another shell instance during this time:
# version without &
$ pstree 4669
bash(4669)───sleep(6345)
# version with &
$ pstree 4669
bash(4669)───sleep(6364)
Shouldn't the version with & contain one more spawned sub-shell (e.g. in this case with PID 5555), like this one?
bash(4669)───bash(5555)───sleep(6364)
PS: The following code was omitted from the beginning of the pstree output for better readability:
systemd(1)───slim(1009)───ck-launch-sessi(1370)───openbox(1551)───/usr/bin/termin(4510)───bash(4518)───screen(4667)───screen(4668)─── | Why is subshell created by background control operator (&) not displayed under pstree |
No, the process is stopped, not killed. So ps will still show it.
If you run ps ax, you will see its status is T. In this state, the process will do nothing until it receives a SIGCONT, then it will continue to run (if you type fg in your terminal, you'll see the process starting again from the point it stopped, so in your case the next icmp_seq will be 5).
EDIT: I forgot the disown part. Since you disowned the process, it no longer appears in jobs. For this reason, you can't fg it. However, it is still present in the ps output with the T status. So, as you said, you are still able to CONTINUE it with kill -SIGCONT <PID>. Nevertheless, even if you send a SIGCONT, you can't un-disown it, which means you won't be able to make it run in the foreground of your terminal again.
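A quick sketch using the PID from the ps output above:
$ kill -CONT 10319    # the pings resume; output still goes to the terminal the process is attached to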
|
I want to know why, after disowning the stopped process, it still appears in the process table.
PING www.google.com (74.125.130.106) 56(84) bytes of data.
64 bytes from 74.125.130.106: icmp_seq=1 ttl=44 time=182 ms
64 bytes from 74.125.130.106: icmp_seq=2 ttl=44 time=209 ms
64 bytes from 74.125.130.106: icmp_seq=3 ttl=44 time=213 ms
64 bytes from 74.125.130.106: icmp_seq=4 ttl=44 time=122 ms
^Z
[1]+ Stopped ping www.google.com
anshul@anshul-Inspiron-N5010:~/Documents/workspace/shell$ jobs -l
[1]+ 10319 Stopped ping www.google.com
anshul@anshul-Inspiron-N5010:~/Documents/workspace/shell$ disown
bash: warning: deleting stopped job 1 with process group 10319
anshul@anshul-Inspiron-N5010:~/Documents/workspace/shell$ ps -ef | grep 10319
anshul 10319 9717 0 23:35 pts/25 00:00:00 ping www.google.com
Why is process 10319 still showing? It should be deleted, right?
| Disowned "Stopped" job process still appears in process table |
The fg and other job-control commands are tied to interactive sessions only, and have no meaning in batch mode (scripts).
Technically, you could enable them with set -m, but it would still make little (if any) sense: job control is about interactive use, where you can send jobs to the background with ^Z, and since you don't have that interaction in a shell script, it is pointless.
The answer is: in practice, No, you can't.
|
I am trying to create a fun Terminal Screensaver which consists of the cmatrix package (one that turns the terminal into one similar to the movie The Matrix) and xprintidle to determine the idle time of the Terminal.
I took some help from this Thread answer at SuperUser and am using a shell-script similar to it, as follows:
screensaver.sh
#!/bin/sh
set -x # Used for debugging
IDLE_TIME=$((60*1000)) # a minute in milliseconds

screen_saver(){
# My screensaver function
cmatrix -abs
}

sleep_time=$IDLE_TIME
triggered=false

while sleep $(((sleep_time+999)/1000)); do
idle=$(xprintidle)
if [ $idle -ge $IDLE_TIME ]; then
if ! $triggered; then
screen_saver
triggered=true
sleep_time=$IDLE_TIME
fi
else
triggered=false
sleep_time=$((IDLE_TIME -idle+100))
fi
done
The script runs flawlessly when I run it in the foreground using:
./screensaver.sh
and I can see the matrix terminal triggered.
However, if I run it in the background with &, the function screen_saver() is triggered in the background and I cannot view it. The only possible way to see the matrix terminal is using fg, which brings it to the foreground.
Question
Is it possible to use the fg command in the function screen_saver() like:
screen_saver(){
cmatrix -abs && fg
}
or a similar option to bring it to the foreground within the shell-script?
I wish to add this script into my .bashrc so that it actually becomes a customizable feature. Is this possible?
| Is it possible to use commands like `fg` in a shell-script? |
The problem with the script is that there is nothing in it which is going to call one of the wait system calls. Generally until something calls wait the kernel is going to keep an entry for the process as this is where the return code of the child process is stored. If a parent process ends before a child process the child process is reparented, usually to PID 1. Once the system is booted, PID 1 often is programmed to enter a loop just calling wait to collect these processes exit value.
Rewriting the test script to call the shell builtin function wait we get
pids=()
find $HOME/Downloads -name "dummy" &
pids+=( $! )
find $HOME/Downloads -name "dummy" &
pids+=( $! )
find $HOME/Downloads -name "dummy" &
pids+=( $! )

echo "Initial active processes: ${#pids[@]}"
for ((i=${#pids[@]}; i>1; i--)) ; do
wait -n # Wait for one process to exit
echo "A process exited with RC=$?"
# Note that -n is a bash extension, not in POSIX
# if we have bash 5.1 then we can use "wait -np EX" to find which
# job has finished, the value is put in $EX. Then we can remove the
# value from the pids array.

echo "Still outstanding $(jobs -p)"
sleep 1
done |
I am trying to check how many active processes are running in a bash script. The idea is to keep x processes running, and when one finishes, the next process is started.
For testing purposes I set up this script:
find $HOME/Downloads -name "dummy" &
find $HOME/Downloads -name "dummy" &
find $HOME/Downloads -name "dummy" &

while true
do
pids=()
while read pid; do
echo "PID: $pid"
pids+=("$pid")
done < <(jobs -p)
jobs -p

echo "Active processes: ${#pids[@]}"
if [ ${#pids[@]} -lt 2 ]; then
break
fi

echo "Process(es) still running... ${pids[@]}"
sleep 1
done
But this does not work because jobs -p continues to return the job ids even when the processes have finished.
The following example shows the problem in detail:
#!/bin/bash

find $HOME/Downloads -name "dummy" &
find $HOME/Downloads -name "dummy" &
find $HOME/Downloads -name "dummy" &

while true
do
jobs -p # continues to print all 3 jobs
sleep 1
done
How can I get the active jobs in the while loop?
Regards,
| Waiting for any process to finish in bash script |
You know, of course, that $(…) causes the command(s) within the parentheses
to run in a subshell. And you know, of course, that jobs is a shell builtin.
Well, it looks like jobs clears a job from the shell’s memory
once its death has been reported.
But, when you run $(jobs), the jobs command runs in a subshell,
so it doesn’t get a chance to tell the parent shell
(the one that’s running the script) that the death of the background job
(ping, in your example) has been reported.
So, each time the shell spawns a subshell to run the $(jobs) thingie,
that subshell still has a complete list of jobs
(i.e., the ping job is there, even though it’s dead after the 5th iteration),
and so jobs still (again) believes that
it needs to report on the status of the ping job
(even if it’s been dead for the past four seconds).
This explains why running an unadulterated jobs command
within the loop causes it to exit as expected:
once you run jobs in the parent shell, the parent shell knows that
the job’s termination has been reported to the user.
Why is it different in the interactive shell?
Because, whenever a foreground child of an interactive shell terminates,
the shell reports on any background jobs that have terminated1
while the foreground process was running.
So, the ping terminates while the sleep 1 is running,
and when the sleep terminates,
the shell reports on the background job’s death.
Et voilà.
1 It might be more accurate to say “any background jobs
that have changed state while the foreground process was running.”
I believe that it might also report on jobs that have been suspended
(kill -SUSP, the programmatic equivalent to Ctrl+Z)
or become unsuspended (kill -CONT, which is what the fg command does).
|
Normally, when a job is launched in the background, jobs will report that it is finished the first time it is run after the job's completion, and nothing for subsequent executions:
$ ping -c 4 localhost &>/dev/null &
[1] 9666
$ jobs
[1]+ Running ping -c 4 localhost &> /dev/null &
$ jobs
[1]+ Done ping -c 4 localhost &> /dev/null
$ jobs ## returns nothing
$
However, when run in a subshell within a script it seems to always return a value. This script will never exit:
#!/usr/bin/env bash
ping -c 3 localhost &>/dev/null &
while [[ -n $(jobs) ]]; do
sleep 1;
done
If I use tee in the [[ ]] construct to see the output of jobs, I see that it is always printing the Done ... line. Not only once as I expected but, apparently, for ever.
What is even stranger is that running jobs within the loop causes it to exit as expected:
#!/usr/bin/env bash
ping -c 3 localhost &>/dev/null &
while [[ -n $(jobs) ]]; do
jobs
sleep 1;
done
Finally, as pointed out by @muru, the first script works as expected and exits if run from the commandline:
$ ping -c 5 localhost &>/dev/null &
[1] 13703
$ while [[ -n $(jobs) ]]; do echo -n . ; sleep 1; done
...[1]+ Done ping -c 5 localhost &> /dev/null
$
This came up when I was answering a question on Super User so please don't post answers recommending better ways of doing what that loop does. I can think of a few myself. What I am curious about is:
Why does jobs act differently within the [[ ]] construct? Why will it always return the Done... line while it doesn't when run manually?
Why does running jobs within the loop change the behavior of the script? | Why does 'jobs' always return a line for finished processes when run in a subshell within a script? |
jobs is not a real command, but a command that is builtin to the shell that you're using:
martin@dogmeat:~$ type jobs
jobs is a shell builtin
When you try to run it without a shell, you'll get an error message, because there is no binary executable called jobs.
It also doesn't have a manpage because it's just a builtin. Look in man builtins as Marco said, in man bash, or in the manpage of the respective shell that you're using if you're not using bash.
EDIT: to explain what running a program without a shell means: when a process in Linux wants to launch another process (fork and exec), it can either wrap this process in a shell or launch it directly without a shell. For example, in perl you can use the system function to launch a new process. This works fine with real program files like echo (I've loaded the warnings module here too so we can see error messages):
martin@martin ~ % ll /bin/echo
-rwxr-xr-x 1 root root 31K Jan 17 2013 /bin/echo*
martin@martin ~ % perl -Mwarnings -e 'system "echo", "test"'
test
But this does not work with a shell builtin like jobs, because there is no binary file jobs:
martin@martin ~ % perl -Mwarnings -e 'system "jobs"'
Can't exec "jobs": No such file or directory at -e line 1.
Of course, when you're already working inside an interactive shell, you probably won't stumble over this issue. But this is relevant in some other situations, for example when you're using the Gnome Alt+F2 run dialog. It doesn't wrap your command in a shell, and therefore real binaries work fine, while trying to run jobs will just show an error message.
From your original error message jobs : not found I had assumed that you're somehow not in a shell, because inside a shell jobs should of course work fine.
|
When does the jobs command issue the message jobs : not found?
Also, why does the command man jobs refuse to show any entry for the command jobs?
P.S. : I am able to successfully execute the jobs command on the terminal
| When does one get the error message "jobs : not found"? |
If your intent is to get the working directory of a process, this is one way:
~ » jobs -l
[1] + 14308 running sleep 1h
~ » readlink /proc/14308/cwd
/home/matti-nillu
The following function inside .bashrc will do exactly what you want:
cdjob ()
{
pid=$(jobs -p $1);
d=$(readlink /proc/$pid/cwd);
cd "$d"
}
Example:
~$ sleep 1h &
[1] 15102
~$ jobs
[1]+ Running sleep 1h &
~$ cd /
/$ cdjob %1
~$ |
Minimal effort to reproduce what I am looking for, is as follows
/$ sleep 1h &
[1] 6564
/$ cd
~$ jobs
[1]+ Running sleep 1h & (wd: /)
When I use jobs to manage my background processes, the working directory (wd) is also displayed when it is different from the current directory.
Is there an easy way to change into this displayed working directory? I am using bash 3.2.
I am not looking for wrapper functions using awk or similar tools; I am specifically looking for a shell built-in solution.
| Quick access to work directory of background job |
Most builds are I/O-limited, not CPU-limited, so while nproc is a decent starting point (see also How to determine the maximum number to pass to make -j option?), most builds can use more than that. This is especially true if you’re using small VMs to build, which you’ll often find on build farms; there you’d end up with -j 1 or -j 2, and using -j 2 or -j 3 will usually result in shorter build times without the risk associated with a formula such as $(nproc) * 2 (which could cause problems even on an 8-thread system, let alone the larger thread counts you find on servers).
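For instance, a sketch of the two cases discussed above:
make -j 3              # e.g. on a 2-vCPU build VM, slightly more jobs than threads
make -j "$(nproc)"     # a safer default on machines with many threads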
|
I was trying to install OpenCV in Ubuntu based on few guides online. One of the guides is this one. It has the following line:
make -j $(($(nproc) + 1))The nproc returns the number of processors/ threads available on the system. So, what is the advantage of going one higher than what's available?
| What is the logic of using $($(nproc) + 1) in make command? |
jobs
shows the jobs managed by the current shell. Your script runs inside its own shell, so jobs there will only show jobs managed by the script's shell...
To see this in action, run
#!/bin/bash
sleep 60 &
jobs -l
To see information about "jobs" started before a shell script, inside the script, you need to treat them like regular processes, and use ps etc. You can limit yourself to processes started by the parent shell (the shell from which the script was started, if any) by using the $PPID variable inside the script, and looking for processes sharing the same parent PID.
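A minimal sketch of that approach (GNU ps assumed; the output also includes the script itself):
#!/bin/bash
# list processes whose parent is the shell that launched this script
ps -o pid,stat,cmd --ppid "$PPID"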
|
If I run jobs -l at the command prompt it shows me the status of running jobs, but if I run the file below, ./tmp.sh
#! /bin/bash
jobs -l
It shows empty output.
Why is that and how can I obtain information about a status of a particular job inside of a shell script?
| Why does the jobs command not work in shell script? |
jobs reports about jobs, which can have more than one process.
When interactive, jobs are placed in process groups, so they can be suspended/resumed/interrupted as a whole.
jobs -p (in both zsh and bash) returns the process group id of each job, not the process ids of all the processes in the job.
When the shell doesn't run interactively (where there's no job control), the shell reports the pid of the process that would have been the process group leader if there had been job control (which may not be the same as the one returned in $! by the way).
$ sleep 1 | sleep 20 &
[1] 29643 29644
$ jobs -p
[1] + 29643 done sleep 1 |
running sleep 20
zsh reports that for the job with pgid 29643, zsh started 2 processes, one of which is already done, the other still running.
If you want to get the pgids only, like in bash, you can pipe that to:
awk '/^\[/{print $3}'
If you want to get the pids in the job (only the ones started by the shell), you can look at the $jobstates special associative array:
$ typeset jobstates
jobstates=( 1 'running:+:29643=done:29644=running' )To extract the pids of the processes still running in job 1:
$ echo ${${(M)${(s/:/)jobstates[1]}:#*=running}%=*}
29644Now if you want to kill those jobs, you'd want to kill the process groups (in interactive shells), i.e. the whole jobs, not one process at random in the job.
You can kill the process group of id 29643 with kill -- -29643, kill 29643 would only kill the process of id 29643 (which is already dead in our example above).
Here, it would make more sense to pass job numbers instead of pid or pgid (which wouldn't work in non-interactive shells) to kill. That's what the job numbers are for:
kill %${(k)^jobstates}
Unfortunately, I don't think bash has a reliable equivalent.
bash-4.4$ sleep $(: '
> [213] ...') 2134 &
[1] 29856
bash-4.4$ jobs
[1]+ Running sleep $(: '
[213] ...') 2134 &
It's difficult to extract the job numbers reliably from that. (I also find that even while kill %; do continue; done doesn't work because of what looks like some race condition bug.)
Note that when not running interactively, there's no job control, so when you do kill %1 to kill the job number 1, that job doesn't have a process group. So as a poor man's approximation, the shell (both bash and zsh) will kill each process it knows about individually instead. In our example above, instead of:
kill(-29643, SIGTERM);
it will do:
kill(29643, SIGTERM);
kill(29644, SIGTERM);
It will not kill the other processes (if any) that those 2 processes have spawned themselves, though it's still better than killing only 29643 like in your kill $(jobs -p) approach.
|
Is there a way to make jobs -p in zsh behave as in bash, ie. give
only a number, such that one can do kill $(jobs -p)?
| Making zsh jobs -p behave as in bash |
An alternative is to setup the device with stty, then read it with cat:
stty <my options>
nohup sh -c "cat /dev/ttyACM0 > data.log" & |
I would like picocom to log serial data on a remote computer, without having to keep my ssh session to the remote computer alive.
I have tried:
picocom <my options>
This dies when I logout.
picocom <my options> &
No output on the terminal, and exiting picocom with C-a C-x leaves the job as stopped, it doesn't kill it (I need kill -9; a simple kill on the job does not work. I then have to manually clean the tty lock in /var/lock/).
picocom <my options> > tmp/data.log &
then in another ssh session:
tail -f tmp/data.log
No data comes out in the file data.log.
bash -c "picocom --baud 115200 /dev/ttyACM1 > /home/pi/tmp/data.log" &
No output to the file either. The job becomes "Stopped" right away.
nohup sh -c "picocom --baud 115200 /dev/ttyACM1 > /home/pi/tmp/data.log" &
I get the start output of picocom in the tailed file, but then the job is exited.
Good to know as well: picocom does not react to C-z.
My questions are:
is it at all possible to run picocom in the background?
what alternatives are there to log serial terminal without an open session? | Running picocom in the background without open session |
jobs works only within the instance of the shell that created the jobs. jobs n uses job numbers, not PIDs. Once the script runs in its own shell (another, new process), jobs issued in the old shell that launched the script can no longer reference job #1 in the new shell.
Why? Because the current shell could have its own job #1. UNIX/Linux maintains what is known as a process group or session. The group leader of the session is the process that owns the tty and interacts with it via the keyboard. Look up the description of the setsid() function in your manual. If the process was launched and still runs as a child under the old parent shell - the leader, then the jobs command will work. Otherwise, no.
|
I have a script that runs some programs in the background, but after running it, they are not listed by the command 'jobs'. Why is this?
(./m_prog -t m_prog1 m_config) &
(./m_prog -t m_prog2 m_config) &
(./m_prog -t m_prog3 m_config) &But if I execute each one of them from the terminal, they do appear in 'jobs'
How can I get the same effect from commands run in a script?
| why jobs doesn't list commands from script |
If command4 is currently running, it is possible to do this pretty straightforwardly:
^Z
$ fg && right_command5 && command6This is essentially what you were already doing to start command4 in the first place. wrong_command5 and the rest will be replaced and never execute.
I think that behaviour is going to be unexpected to you, so read on for an explanation of where things don't work the way you thought they would.
However: note that your original command sequence does not do what you think it does. When you do:
$ command1 && command2 && command3
Running command1 ...
Running command2 ...
^Z
[1]+ Stopped
$ fg && command4then command2 will be resumed, but the rest of the chain will not be. Stopping command2 with Ctrl-Z means that it exits abnormally with SIGTSTP (-20). Since command2 exits non-succesfully, && short-circuits out of the chain and command3 will never run. command4 will start immediately after the end of the resumed command2.
You can watch this behaviour in action by replacing the && with ||:
$ command2 || echo Running command3, exit was $?
^Z
[1]+ Stopped command2
Running command3, exit was 148
$
You have to list out the entire subsequent chain of commands when you resume if you want the behaviour you're aiming for. Alternatively, if you want to be able to stop and resume whole command chains you need to run them in a subshell:
$ (command1 && command2 && command3)
^Z
$ fgIn that case, though, there's no way to intercede and replace a command at all.
|
$ command1 && command2 && command3
Running command1 ...
Running command2 ...
^Z
[1]+ Stopped
$ fg && command4 && wrong_command5 && command6
Running command3
Running command4
How do I cancel the wrong_command5, scheduling right_command5 instead after the termination of currently running command4?
| How do I cancel/replace one element of running `cmd1 && cmd2 && cmd3 && ...` chain? |
“Process suspended with Ctrl+Z” is actually a subset of “suspended process that's a child of this shell”, and it's easier to track: that means there's a suspended background job.
In zsh, you can check the jobstates array.
if ((${(M)#jobstates:#suspended:*} == 0)); then
echo There are no suspended jobs
else
echo There are ${(M)#jobstates:#suspended:*} suspended jobs
fi
In bash or zsh, jobs -s lists only suspended jobs.
echo "There are $(jobs -s | wc -l) suspended jobs" |
I'd like to add an indicator to my PS1, showing whether there is a process suspended with ctrl+z. In order to do so, I'll need a function that is able to check for this situation. I'm not even sure where to start to think about this problem. Google has failed me. Any ideas?
| Shell function to check if there is a suspended process that's a child of this shell? |
When a background job is a pipeline of the form cmd1 | cmd2, it's still a single background job. There's no way to know when cmd1 starts.
Each & creates one background job. As soon as cmd & returns, the shell is aware of that background job: cmd & jobs lists cmd. cmd & pid=$! sets pid to the process ID that runs cmd.
The pipeline cmd1 | cmd2 creates two more subprocesses: one to run cmd1 and one to run cmd2. Both processes are children of the subprocess that runs the background job. Here's how the process tree looks like for bash -c '{ sleep 123 | sleep 456; } & jobs -p; sleep 789':
PID PPID CMD
268 265 | \_ bash -c { sleep 123 | sleep 456; } & sleep 789
269 268 | \_ bash -c { sleep 123 | sleep 456; } & sleep 789
270 269 | | \_ sleep 123
271 269 | | \_ sleep 456
272 268 | \_ sleep 789
268 is the original bash process. 269 is the background job that jobs -p prints. 270 and 271 are the left- and right-hand sides of the pipe, both children of the main process of the background job (269).
The version of bash I tested with (5.0.17 on Linux) optimizes cmd1 | cmd2 & without braces. In that case, the left-hand side of the pipe runs in the same process as the background job:
PID PPID CMD
392 389 | \_ bash -c sleep 123 | sleep 456 & jobs -p; sleep 789
393 392 | \_ sleep 123
394 392 | \_ sleep 456
395 392 | \_ sleep 789
You can't rely on this behavior to be stable across versions of bash, or possibly even across platforms, distributions, libc versions, etc.
jobs -p %cmd1 looks for a job whose code starts with cmd1. What it finds is cmd1 | cmd2. jobs -p %?cmd2 finds the same job¹. There's no way to access the process IDs running cmd1 and cmd2 through built-in features of bash.
If you need to know for sure that cmd1 has started, use a process substitution.
cmd1 >(cmd2)
You don't get to know when cmd2 starts and ends.
If you need to know when cmd1 and cmd2 start and end, you need to make them both jobs, and have them communicate through a named pipe.
tmp=$(mktemp -d) # Remove this in cleanup code
mkfifo "$tmp/pipe"
cmd1 >"$tmp/pipe" & pid1=$!
cmd2 <"$tmp/pipe" & pid2=$!
…
The jobs command is not very useful in scripts. Use $! to remember PIDs of background jobs.
¹ Or at least it should. My version complains about an ambiguous job spec, which has to be a bug since it's saying that despite there being only a single job.
|
I need to retrieve the PID of a process piped into another process that together are spawned as a background job in bash. Previously I simply relied on pgrep, but as it turns out there can be a delay of >2s before pgrep is able to find the process:
#!/bin/bash
cmd1 | cmd2 &
pid=$(pgrep cmd1) # empty in about 1/10
I found that some common recommendations for this problem are using process substitution rather than a simple pipe (cmd1 >(cmd2) & pid=$!) or using the jobs builtin.
Process substitution runs an entire subshell (for the entire runtime), so I would rather use jobs for now, but I want to avoid making the same mistake twice...
Can I 100% depend on jobs to be aware of both processes if I perform the lookup immediately after spawning them?
#!/bin/bash
cmd1 | cmd2 &
pid=$(jobs -p %cmd1) # 10/10?
This is probably on account of running jobs in the background, or maybe a quirk of set -x, but the following example usually lists the executed commands in any which order. The jobs output appears to be correct so far, but I want to entirely rule out the possibility that jobs could be executed before the jobs have been started (or at least that jobs will fail to list the two processes)!?
#!/bin/bash
set -x
tail -f /dev/null | cat &
jobs -l
kill %tail
Example:
+ jobs -l
[1]+ 2802325 Running tail -f /dev/null
2802326 | cat &
+ tail -f /dev/null
+ kill %tail
Likewise, in the process substitution case, can I rely on pid=$! to always work? It is designed to "expand to the process ID of the most recently executed background (asynchronous) command" after all?
| Reliable way to get PID of piped background process |
A process can be sent that SIGTTOU signal (which causes that message), when it makes a TCSETSW or TCSETS ioctl() for instance (like when using the tcsetattr() libc function) to set the tty line discipline settings while not in the foreground process group of a terminal (like when invoked in background from an interactive shell), regardless of whether tostop is enabled or not (which only affects writes to the terminal).
$ stty echo &
[1] 290008
[1] + suspended (tty output)  stty echo
See info libc SIGTTOU on a GNU system for details:
Macro: int SIGTTOU
This is similar to SIGTTIN, but is generated when a process in a
background job attempts to write to the terminal or set its modes.
Again, the default action is to stop the process. SIGTTOU is
only generated for an attempt to write to the terminal if the
TOSTOP output mode is set.
(emphasis mine)
I believe it's not the only ioctl() that may cause that. From a cursory look at the Linux kernel source code, looks like TCXONC (tcflow()), TCFLSH (tcflush()) should too.
|
I like my background processes to freely write to the tty. stty -tostop is already the default in my zsh (I don't know why, perhaps because of OhMyzsh?):
❯ stty -a |rg tostop
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt
But I still occasionally get my background processes suspended (this is not a consistent behavior, and I don't know how to reproduce it):
[1] + 3479064 suspended (tty output) | zsh: Why do I get suspended background processes even when I have `stty -tostop`? |
You can use kill -STOP pid to pause a job and kill -CONT pid to resume it. You get the proper pid from the ps command you already know.
|
If I ssh to a box and start a task that will take some time to complete, I usually press control+z to pause the process, and then immediately type bg 1 to put it in the background.
I can then type jobs and see it running.
If I disconnect (type exit, press control+d, etc) and then log back in, I can no longer type jobs to see it running - it won't show anything.
I know I can type something like
ps -u `whoami`
to see what items are running, but I'm not sure if I can pause them any longer. I know I can kill them, but is there a way to pause them or can I somehow get them to show back up in the jobs list?
Linux-fu tips regarding jobs and process management are also welcome and will be upvoted.
| How can I manage jobs after I disconnect from my tty/ssh session? |
You will have to rewrite your function to be able to do that.
When you start a background job with &, the shell does indeed keep track of that, and you can indeed figure out more information by using the jobs builtin. However, that information is specific to that instance of the shell; if you run your function with & itself, then a separate shell is spawned which is not the shell with the background jobs, and therefore you can't access the information about the original shell's jobs from that separate shell.
However, there's a simple way to fix that:
rewrite your function so it runs in terms of process IDs (PIDs) rather than job numbers. That is, have it check whether a process still exists (e.g., by parsing ps output, or by checking whether /proc/pid exists)
run your new function with %2 rather than 2 as argument. That is, give it the percent sign followed by the job id you want to monitor; the percent sign is how Bourne-style shells refer to a job, and the shell's job-control builtins can translate it into the corresponding PID.
With that, it should just work.
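A minimal sketch of such a rewrite (the email command is a placeholder taken from the question):
jobcheck () {
    pid=$1
    while kill -0 "$pid" 2>/dev/null; do   # kill -0 only tests whether the process still exists
        sleep 5
    done
    echo "process $pid finished"           # replace with your sendEmail command
}
jobcheck "$(jobs -p %2)" &                 # pass the PID of job 2, then run the checker in the background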
|
I have written the following function in my bashrc. It checks if a job is running every 5 seconds and sends an email when the job is finished.
function jobcheck () {
time=`date`
jobstatus=`jobs $1`
while [ 1 ]; do
if (( ! `echo $jobstatus | grep "Running" | wc -l` )); then
"sendEmail command to send email"
break;
fi
sleep 5;
done
}
I want to call this function as below (for job number 2, for example)
jobcheck 2
and proceed to run other commands on the command line. But the while loop keeps on running and I do not get a command prompt.
If I run it as
jobcheck 2 &Then it gives error that bash: jobs 2 not found
How do I run this function in background?
| bash function that sends email after job is finished |
Yes, bash uses waitpid() with WNOHANG in a loop. You can see this in waitchld() in jobs.c:
static int
waitchld (wpid, block)
pid_t wpid;
int block;
{
...
do
{
/* We don't want to be notified about jobs stopping if job control
is not active. XXX - was interactive_shell instead of job_control */
waitpid_flags = (job_control && subshell_environment == 0)
? (WUNTRACED|wcontinued)
: 0;
if (sigchld || block == 0)
waitpid_flags |= WNOHANG;
...
if (block == 1 && queue_sigchld == 0 && (waitpid_flags & WNOHANG) == 0)
{
internal_warning (_("waitchld: turning on WNOHANG to avoid indefinite block"));
waitpid_flags |= WNOHANG;
}

pid = WAITPID (-1, &status, waitpid_flags);
...
/* We have successfully recorded the useful information about this process
that has just changed state. If we notify asynchronously, and the job
that this process belongs to is no longer running, then notify the user
of that fact now. */
if (asynchronous_notification && interactive)
notify_of_job_status ();

return (children_exited);
}The notify_of_job_status() function simply writes to the bash process' standard error stream.
I can't unfortunately say much about whether setting tostop with stty should influence the shell that you do this in.
|
While studying the internals of bash's job control mechanism under Linux I have came across a little problem of understanding. Let's assume the following scenario:
script is executed in background
user@linux:~# ./script.sh &
same script is executed in foreground at the same time
user@linux:~# ./script.sh
Now the first execution of the script is performed as a background job during the second execution of the script in the foreground. Bash now performs a blocking wait call with the PID of the foreground process until it terminates and then it gets the corresponding information. After the termination of the forground process bash controls the status of all background jobs and informs about every changes before returning to prompt. This is usually the default behavior called "+b".
But there is another mode, "-b". In this mode bash informs immediately about every background job status change. In my understanding this is signalled by the background process sending SIGCHLD. But how can bash react to this signal from a terminated background process and print a message to the terminal while it is executing a blocking wait call? In my opinion, signals are only handled before returning to user mode.
Does bash call wait with the option WNOHANG within a loop until the current foreground terminates?
Furthermore, when operating in mode "-b" bash can write to the terminal although it doesn't belong to the foreground process group of the terminal. Even when I set the option "tostop" with stty, bash can write to the terminal without being part of the foreground process group. Does bash get any special permissions because it is part of the controlling process group of the terminal.
I hope, I could make clear where my problems of understanding are.
| How does bash's job control handle stopped or terminated background jobs? |
You need to tell the now backgrounded application to stop writing on the tty device. There is no generic way to do that.
You can do:
stty tostopto make so that background jobs be suspended (with a SIGTTOU signal) when they try to write to the tty.
You can attach a debugger to the process and make it re-open the file descriptors it has open on the tty device to /dev/null. Like (here for stdout only):
gdb --batch -ex 'call close(1)' -ex 'call open("/dev/null",2)' -p "$pid"(assuming the application is dynamically linked or has debug symbols and note that some systems will have security restrictions preventing you from doing it).
For dynamically linked applications, you could run them with a LD_PRELOAD blob that installs a handler on SIGTTOU (and do the stty tostop) that reopens stdout and stderr on /dev/null if they were going to a terminal. That would handle the case of non-setuid/setgid applications writing on the terminal via their stdout/stderr and don't handle SIGTTOU themselves.
Run:
gcc -Wall -shared -fPIC -nostartfiles -o ~/lib/handle_tostop.so handle_tostop.c
Where handle_tostop.c contains:
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

static void reopen_on_null(int req_fd)
{
int fd;
if (close(req_fd) == 0) {
fd = open("/dev/null", O_RDWR);
if (fd != req_fd && fd >= 0) {
dup2(fd, req_fd);
close(fd);
}
}
}static void handle_sigttou(int sig)
{
if (isatty(1)) reopen_on_null(1);
if (isatty(2)) reopen_on_null(2);
}
void _init()
{
signal(SIGTTOU, handle_sigttou);
}
And then:
LD_PRELOAD=~/lib/handle_tostop.so
export LD_PRELOAD
stty tostop
Instead of /dev/null, you may want to redirect the output to some log file (which you might then want to open with O_APPEND, and possibly include the pid of the process in the file name so you know whose process the output is from) so it's not discarded if it's of any use to you.
| When putting a job in the background with the bg command, the job's output still appears on my terminal.
How to put the job in background (like the bg command) but without any output?
PS: the job is linked to a command with an output (there is no >/dev/null 2>1 in the original command)
| How to put a job in background without output? [duplicate] |
I assume you run go.sh & not just go.sh and then jobs, as this is trivial (no job would be expected to be reported once it's over).
Your script starts two jobs in the background and then it's done, so you don't see the script itself. You also don't see the jobs started by the script, as they belong to another shell. If you want to see the jobs, source the script instead of executing it.
. go.sh; jobs
Now you should see the two jobs listed.
If you prefer to execute the script and still see it as a job, tell it to wait until its children finish. Add wait before its end. You can also call jobs from within the script.
|
I have a script called go.sh
python3 bob.py &> lol.log &
python3 alice.py &> lol2.log &When I run ./go.sh and then run jobs. Neither one of the jobs that I've executed in go.sh appear in jobs. In fact, nothing appears in jobs.
Yet, when I run each command in go.sh by one, it does appear in jobs.
How can I fix this?
| Jobs not appearing in `jobs` |
If you just need to fix a typo in the shell language itself, look for your job in directory /var/spool/cron/atjobs:
# type -p date | at 1430
warning: commands will be executed using /bin/sh
job 2 at Fri Aug 23 14:30:00 2019
# atq
2 Fri Aug 23 14:30:00 2019 a root
# ls /var/spool/cron/atjobs/
a00002018e67ea*
# cat /var/spool/cron/atjobs/a00002018e67ea
#!/bin/sh
# atrun uid=0 gid=0
# mail root 0
umask 22
LESSCLOSE=/usr/bin/lesspipe\ %s\ %s; export LESSCLOSE
LANG=en_US.UTF-8; export LANG
LESS=-X; export LESS
EDITOR=/usr/bin/vi; export EDITOR
USER=root; export USER
PAGER=/usr/bin/less; export PAGER
PWD=/root; export PWD
HOME=/root; export HOME
XDG_DATA_DIRS=/usr/local/share:/usr/share:/var/lib/snapd/desktop; export XDG_DATA_DIRS
MAIL=/var/mail/root; export MAIL
SHLVL=1; export SHLVL
LOGNAME=root; export LOGNAME
XDG_RUNTIME_DIR=/run/user/0; export XDG_RUNTIME_DIR
PATH=/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/root/bin; export PATH
LESSOPEN=\|\ /usr/bin/lesspipe\ %s; export LESSOPEN
cd /root || {
echo 'Execution directory inaccessible' >&2
exit 1
}
/bin/date
Regarding modifying the date/time the job is to run, that is encoded into the filename for the job. If we suppose:
# ls -ltr /var/spool/cron/atjobs/
total 12
-rwx------ 1 root daemon 1054 Aug 23 13:58 a00002018e67ea*
-rwx------ 1 root daemon 1054 Aug 23 14:03 a00003018e67eb*
-rwx------ 1 root daemon 1054 Aug 23 14:03 a00004018e67e9*
# atq
2 Fri Aug 23 14:30:00 2019 a root
3 Fri Aug 23 14:31:00 2019 a root
4 Fri Aug 23 14:29:00 2019 a rootThen the filename a00003018e67eb within the /var/spool/cron/atjobs directory is constructed thus:a is the "queue identifier" (the a in the atq listing)
00003 is a (hex) representation of the job number, 3
018e67eb is the hex representation of the time the job is to run

The hex value 018e67eb is 26109931 in decimal. That appears to be minutes past the epoch since 26109931 * 60 = 1566595860 and 1566595860 seconds past the epoch is Friday, August 23, 2019 2:31:00 PM here in my time zone.
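If you want to double-check that encoding, you can recompute the expected filename from a date (a sketch that assumes GNU date and the same time zone as above; job number 3 and the 14:31 time come from the example):

printf 'a%05x%08x\n' 3 "$(( $(date -d '2019-08-23 14:31' +%s) / 60 ))"    # should print a00003018e67eb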
|
Trying here to modify a job from at command.
Any idea on how to do it?
Already managed to list it, delete, and execute, but can't modify it.
| Editing a job in At command |
There is no job stack, each job is handled independently.
You can do fg; otherbigjob. That will put reallybigjob back into the foreground and then, once the fg command terminates, run otherbigjob. This isn't the same thing as queuing otherbigjob for execution after the first job: if you press Ctrl+Z then otherbigjob starts immediately. If you press Ctrl+C then reallybigjob is killed. You can't leave otherbigjob in the background and queue another job after it.
If the jobs are CPU-intensive, then the batch utility can let you schedule the next job when the CPU isn't busy.
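For example, something along these lines (a sketch; batch reads the commands from its standard input and starts them once the system load permits):

echo './otherbigjob' | batch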
|
Job control is probably my favorite thing about Linux. I find myself, often, starting a computationally demanding process that basically renders a computer unusable for up to days at a time, and being thankful that there is always CTRL-Z and fg, in case I need to use that particular computer during that particular time period.
Sometimes, however, I want to insert a job onto the job stack. I've never figured out how to do that.
I'm using bash.
It would probably look something like:
$ ./reallybigjob
^Z
[1]+ Stopped reallybigjob
$ (stuff I don't know) && ./otherbigjob
$ fg | Basic job control: stop a job, add a job onto the stack, and `fg` |
You could also trap for an interrupt to ensure the process is killed if ctrl+c is pressed
#!/bin/sh

trap ctrl_c INT

ctrl_c () {
kill "$bpid"
exit
}

# start process one in background
python3 -m http.server 8086 &
bpid=$!

# start second process
firefox "http://localhost:8086/index.html"
wait |
I have a process that is blocking in nature. It is executed first. In order to execute the second process, I moved the first process to the background and executed the second process. Using the wait statement, I am waiting in the terminal. However, it seems that upon exiting the shell (pressing CTRL+C), the first process was not exited smoothly. Below is the shell script:
execute.sh
#!/bin/sh

# start process one in background
python3 -m http.server 8086 &

# start second process
firefox "http://localhost:8086/index.html"
wait

I found a similar question here but it seems not to work properly. Basically, when I call ./execute.sh a second time, the server says "Address already in use". It means the server could not exit peacefully. On the other hand, if I execute the server manually in a terminal, it exits smoothly.
| Call other process after executing a blocking process in shell |
Processes backgrounded via bg or & will typically die under 2 scenarios:

1. The shell receives a SIGHUP
2. They try to write to a terminal which no longer exists.

Item #1 is the primary culprit when closing your terminal. However, whether it happens or not depends on how you close your terminal. You can close it by:

1. Something like clicking the "X" in your window manager
2. You can type exit, logout, or CTRL+D.

Item #1 is the one that will result in a SIGHUP being sent. #2 does not.
So long story short, if you background a process with bg, and then log out with exit, logout, or CTRL+D, the process will not be killed.
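A quick way to check this yourself (a sketch that mirrors the steps from the question):

tar zcvf example.tar.gz ./examplepath > /dev/null 2>&1    # start it in the foreground
# press Ctrl+Z to suspend it
bg %1     # resume it in the background
exit      # log out cleanly
# then, from a new SSH session:
pgrep -fa 'tar zcvf'    # the process should still be listed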
|
Would someone explain if & why disown is necessary if you wish to keep a job running even after you disconnect.
I ask because every site I have visited says to use disown and bg but bg on its own is working for me and I'm not sure why.
Is it because I haven't fully understood what disown is for or are there some settings somewhere that are affecting the default behavior of the bg command?
Here's what I did:

Connected to my CentOS 6 box via SSH
Created a simple process:
tar zcvf example.tar.gz ./examplepath > /dev/null 2>&1

Suspended the job
Resumed it in background via bg 1
Disconnected from SSH server

I then checked via FTP to see if the .tar.gz file was still growing in size and it was.
| Is it necessary to disown a process for it to continue running after disconnecting? |
To return to job 4, run:
fg %4The command fg tells the shell to move job %4 to the foreground. For more information, run help fg at the command prompt.
When you want to move the job you are working on into the background, suspend it with Ctrl+Z and then run bg. For more information, run help bg at the command prompt.
For more detail than you'd likely want to know, see the section in man bash entitled JOB CONTROL.
|
If I have the following jobs running in a shell
-> % jobs -l
[1] 83664 suspended nvim
[2] 84330 suspended python
[3] 84344 suspended python
[4] 84376 suspended nvim
[5] - 84701 suspended python
[6] + 84715 suspended pythonHow can i return to the nth job, suppose I want to return to job 4, or job 1, how can I do that without having to kill all those which are before it?
| Return to a particular job in the jobs list |
In zsh, the $jobdirs associative array maps job numbers to the working directory they were started from, so you can do cd $jobdirs[n] to cd into job n's directory (or cd ${jobdirs[n]?} to get an error instead of taking you home if the job doesn't exist).
~$ jobs -d
[1] - running sleep 1000
(pwd : /tmp)
[2] + suspended sleep 123123
(pwd : /usr/local)
~$ cd $jobdirs[2]
/usr/local$See also the $jobstates and $jobtexts associative array for the state and code of each job.
In bash, your only option is going to be parsing the output of jobs which unfortunately can only be done with heuristics. For instance, see:
bash-5.1$ mkdir ') (wd: blah blah)
[5] (wd: blah blih)
'
bash-5.1$ cd ') (wd: blah blah)
[5] (wd: blah blih)
'
bash-5.1$ sh -c 'sleep inf' '(wd: /etc)
[2] (wd: bloh bloh)
'
^Z
[1]+ Stopped sh -c 'sleep inf' '(wd: /etc)
[2] (wd: bloh bloh)
'
bash-5.1$ jobs 1
[1]+ Stopped sh -c 'sleep inf' '(wd: /etc)
[2] (wd: bloh bloh)
'
bash-5.1$ cd /
bash-5.1$ jobs 1
[1]+ Stopped sh -c 'sleep inf' '(wd: /etc)
[2] (wd: bloh bloh)
' (wd: ~/1/) (wd: blah blah)
[5] (wd: blah blih)
)You can see that the (wd:...) is only printed if the job's dir is not the current working directory, and if the job's command line or working directory happens to contain (wd: ...) there's no way to tell which of the (wd:s is the start of the actual working directory.
Note also that jobs abbreviates your home directory to ~ in that output.
Also beware the wd: is localised. For instance, in Ukrainian:
bash-5.1$ LC_ALL=uk_UA.utf8 jobs 1
[1]+ Зупинено sh -c 'sleep inf' '(wd: /etc)
[2] (wd: bloh bloh)
' (РД: ~/1/) (wd: blah blah)
[5] (wd: blah blih)
)So you would want to make sure jobs is called in the locale for which you expect the corresponding working directory abbreviation.
All you could do is use heuristics and hope for the best:
cdj() {
local dir
dir=$(LC_ALL=C jobs -- "$1") || return
case $dir in
(*'(wd:'*')')
dir=${dir%')'}
dir=${dir##*'(wd: '}
case $dir in
('~'*) dir=$HOME${dir#'~'}
esac
printf >&2 '%s\n' "Job $1's dir is likely \"$dir\""
cd -- "$dir";;
(*)
printf >&2 '%s\n' "Job $1's dir is likely the current directory already"
esac
}Which fails in our contrived example above but should work in most normal cases.
|
The output of jobs looks something like this
[1] Stopped TERM=xterm-256color vim --servername vim ~/.gitconfig
[2]- Stopped TERM=xterm-256color vim --servername vim ~/.vimrc (wd: ~)
[3]+ Stopped TERM=xterm-256color vim --servername vim i3blocks.conf (wd: ~/.config/i3/configs)where the (wd: path) part, if present, shows the path of dir from where the corresponding job was started.
Several times I want to move to that dir for one of the jobs.
Is there a utility for doing so?
| Is there a way to quickly change dir to one from where one of the jobs is running? |
With GNU xargs:
xargs -rn 1 -P 5 -a file wibble

That runs up to 5 wibble commands in parallel, each taking one word from the file as its argument.
For GNU xargs words are delimited by sequences of space, tab or newline character and with single quotes, double quotes and backslash recognised as quoting operators for those delimiters and for each other.
For words to be each line of the file, add a -d '\n'.
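For instance, with the file from the question that would be (a sketch):

xargs -d '\n' -rn 1 -P 5 -a data.txt wibble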
For words to be treated like in your approach where on bash, they are by default (unless $IFS was modified) delimited on space, tab and newline and also subject to filename generation, you'd do something like:
xargs -rn 1 -P 5 -0a <(printf '%s\0' $(<file)) wibble

That is, have the shell perform that split+glob and have printf pass the resulting words to xargs.
| I have an analysis program and a text file with data in it which for the sake of exposition I will call wibble and data.txt respectively.
I tried a simple for loop to process all of my data:

for i in $(cat data.txt); do
wibble $i
done
But it takes a very long time to finish the analysis one by one.
So I tried making it spin off separate jobs for each datum:

for i in $(cat data.txt); do
( wibble $i ) &
done
But this many analysis processes running causes a memory crash!
So I want to spin up analysis processes in groups of, say, five. I want to take the first five data items, spin up an analysis process on those; then take the next five and do the same; and so on.
How can I do that without using the program parallel, as explained in the previous posts (below)? I don't have sudo permission on my institutional workstation to install this app, so I am trying to accomplish this goal using simpler code.
https://unix.stackexchange.com/questions/299346/running-commands-at-once
https://unix.stackexchange.com/questions/361505/how-to-control-for-loop | How do I process my data file in bunches (without using the app parallel)? [duplicate] |
In both cases, the process group does start out as a job of the shell. When you call disown %1, the shell removes that entry from its list of jobs: that's the whole point of disown. With ( sleep 200 & ), the sleep process is a job of the subshell created by the parentheses; you can see that by running ( sleep 200 & jobs ). This one stops being the job of a shell when that shell exits; as with everything else, the subshell's jobs are its own, the parent never sees them.
The processes remain in the same session and still have the same controlling terminal. This has nothing to do with job control.
As with any process, they can be killed either by exiting or by receiving a signal. Since they aren't members of a foreground process group (and can't be, since the shell won't put them back into the foreground), they won't receive a kernel-generated SIGHUP if the terminal disappears.
A process group is a job of a shell if it started out as a job of that shell and it hasn't stopped being one. The details of what commands become separate jobs depend on the shell and are beyond the scope of this answer. In a nutshell, if a shell has job control enabled (which is the default if the shell is interactive), then each compound command is its own job. In particular:

A pipeline is a single job.
A command or process substitution is not a separate job, it runs in the original process group.
Anything running in the background (either started with & or backgrounded with Ctrl+Z) is a separate job.

A process group stops being a job if its leader exits, if the shell exits, or if the shell lets it go (with disown).
|
Here are two cases when a process group in an interactive shell isn't a job of the shell:
$ sleep 100 &
[1] 16081
$ jobs
[1]+ Running sleep 100 &
$ disown %1
$ jobs
$and
$ ( sleep 200 & )
$ jobs
$ How does each case achieve making a process group not a job? In interactive bash, what is the necessary and sufficient condition for a process group to be a job of the shell?
The shell is a session leader running on a pseudoterminal slave. When the shell terminates, it will not affect the two sleep processes above, since they are not in the job list of the shell and so don't receive SIGHUP. Do the two sleep processes then still have the pseudoterminal slave as their session's controlling terminal?
What can terminate the two sleep processes, besides they exit normally or are killed by kill sending a signal? I would like to know how different in effect the two cases are from a real daemon.
| In bash, a process group is a job if and only if ...? |
You're overcomplicating it by trying to make one job that conditionally does two things. You want one job to stop the service on Thursday, and another to start it on Friday, as in the following cron table.
0 0 * * 4 service myspiffyservice stop > /dev/null 2>&1 # stop myspiffyservice on Thursday
0 0 * * 5 service myspiffyservice start > /dev/null 2>&1 # start myspiffyservice on Friday

If you are talking about executing a job rather than starting or stopping a service, this can also be handled by one cron job that only runs on non-Thursdays:
0 0 * * 0-3,5-6 /path/to/myspiffyjob > /dev/null 2>/dev/null # Run spiffy job on non-Thursdays

The above schedule translates to 'At 00:00 on every day-of-week from Sunday through Wednesday and every day-of-week from Friday through Saturday'.
|
I am pretty new to unix systems and their workings. Is there any way to schedule a cron job in unix which runs on every day 12:00 AM and checks if the day is Thursday it stops a service and if the day is Friday it starts the service again?
| scheduling cron jobs to stop a service on Thursday and start it on Friday |
You can use screen
Suppose you have logged in using SSH; then simply run the following command to create a screen session called 'mysession':
screen -S mysession
If your connection gets disconnected, you can simply reattach to your session using:
screen -x mysession
Check this link for more information about screen
|
Here is the scenario:
Let's say that I log into my server via ssh and start an emacs or vi (or whatever other program) session. Then my ssh connection disconnects.
Is there a way for me to reconnect to those programs via a new ssh session. In other words, when I log back into my server through a new ssh session?. In other words, how can I "pick up" where I left off.
I am assuming that programs are not automatically stopped when the first ssh account drops out...are they?
I read somewhere that I can use screen or tmux, I am wondering if there is a simple way, if not please let me know.
Thanks
| How to resume a program when logged in with different session |
The kill builtin only recognizes the %N format for jobs running in the current shell. However, shell scripts run in their own separate subshell and in that subshell, there are no jobs to kill. This might be clearer with an example:
$ for i in {1..5}; do sleep 100 & done
[1] 2259152
[2] 2259153
[3] 2259154
[4] 2259155
[5] 2259156
$ for i in {1..5}; do kill %$i; done
[1] Terminated sleep 100
[2] Terminated sleep 100
[3] Terminated sleep 100
[4]- Terminated sleep 100
[5]+ Terminated sleep 100As you can see, that works as expected if you run both sets of commands in the same shell session. Similarly, it also works if you launch and kill the commands from the same shell script:
#! /usr/bin/env bash
for i in {1..5}; do
sleep 100 &
done## Show the running jobs
runningSleepJobs=$(pgrep -c sleep)
echo "There are $runningSleepJobs sleep jobs running!"for i in {1..5}; do
kill %$i;
done
## Show that they've been stopped
runningSleepJobs=$(pgrep -c sleep)
echo "Now there are $runningSleepJobs sleep jobs running!"If I now run this script, I can see it both starts and then kills the jobs as expected:
$ foo.sh
There are 5 sleep jobs running!
Now there are 0 sleep jobs running!There is a way around this, however. Instead of executing your script, you can source it so that it runs in the current shell:
$ cat ~/bin/foo.sh
#! /usr/bin/env bash
for i in $( seq 1 $1 )
do
kill %$i
done$ for i in {1..5}; do sleep 100 & done
[1] 2295221
[2] 2295222
[3] 2295223
[4] 2295224
[5] 2295225$ jobs
[1] Running sleep 100 &
[2] Running sleep 100 &
[3] Running sleep 100 &
[4]- Running sleep 100 &
[5]+ Running sleep 100 &$ . ~/scripts/foo.sh 5
[1] Terminated sleep 100
[2] Terminated sleep 100
[3] Terminated sleep 100
[4]- Terminated sleep 100
[5]+ Terminated sleep 100 |
for i in $( seq 1 $1 )
do
kill %$i
doneI try to kill the stopped jobs with this script, but interestingly it can't be able to even though I have jobs open.
$ jobs
[10] Stopped vim detect_thread.py
[11] Stopped python3 detect.py
[12]- Stopped python3 detect.py
[13]+ Stopped python3 detect.py
$ kill 13
bash: kill: (13) - No such process
$ ./delete.sh 13
./delete.sh: line 8: kill: %1: no such job
./delete.sh: line 8: kill: %2: no such job
./delete.sh: line 8: kill: %3: no such job
./delete.sh: line 8: kill: %4: no such job
./delete.sh: line 8: kill: %5: no such job
./delete.sh: line 8: kill: %6: no such job
./delete.sh: line 8: kill: %7: no such job
./delete.sh: line 8: kill: %8: no such job
./delete.sh: line 8: kill: %9: no such job
./delete.sh: line 8: kill: %10: no such job
./delete.sh: line 8: kill: %11: no such job
./delete.sh: line 8: kill: %12: no such job
./delete.sh: line 8: kill: %13: no such job | Can't kill stopped jobs with bash script |
The jobs command is a shell builtin, so it is only intended for the current shell instantiation. In particular, the Bash manual describes the -p option of jobs as:

-p List only the process ID of the job's process group leader.

(Emphasis mine.)
If you run jobs -l, you'll see that the sleep which you backgrounded in the other shell is not listed either.
The command to list all processes is ps. Its argument syntax can be a little baroque, so you're best off reviewing man ps for yourself. For myself, I usually use one of these:
ps aux
ps axjfOne just gives a list of all processes, the other arranges them in a dependency tree format. You might also try pgrep as a way of finding particular processes matched by name.
As a side note, if you're operating in a context where security is an issue, be aware that there are sneaky methods for hiding processes even from the root user, eg see https://github.com/gianlucaborello/libprocesshider.
|
Suppose, as a dummy example, I execute a bash command in one terminal tab:
$: sleep 1000 &Then, in another tab, I run the command that should "kill all background jobs":
$: jobs -p | xargs -a killExcept nothing happens. Or, better yet: how does one list all jobs started by my user via <some-job> & from any context?
| How does one kill jobs submitted via bg in one subshell from another shell instance? |
Because there are two lines of output in the latter case: one is shown on the terminal and one is not, since jobs is run in a subshell and that extra line (with its trailing newline) still gets counted. I have checked zsh and bash; this behavior exists in zsh, and bash behaves similarly with different formatting.
Take a look at the following ASCII dump from zsh:
/tmp/test% jobs
[1] + running tail -f /var/log/syslog/tmp/test% jobs | wc -l
1

/tmp/test% jobs | od -c
0000000 [ 1 ] + r u n n i n g
0000020 t a i l - f / v a r / l
0000040 o g / s y s l o g \n
0000052

/tmp/test% cd ..

/tmp% jobs | wc -l
2

/tmp% jobs | od -c
0000000 [ 1 ] + r u n n i n g
0000020 t a i l - f / v a r / l
0000040 o g / s y s l o g \n ( p w d n
0000060 o w : / t m p ) \n
0000072

It seems the shell is keeping track of the directory each job was started in; look at the ( p w d   n o w :   / t m p ) bytes at the end of the second dump.
Here is the relevant source code of zsh, jobs.c:
/* print "(pwd now: foo)" messages: with (lng & 4) we are printing
* the directory where the job is running, otherwise the current directory
*/ if ((lng & 4) || (interact && job == thisjob &&
jn->pwd && strcmp(jn->pwd, pwd))) {
doneprint = 1;
fprintf(fout, "(pwd %s: ", (lng & 4) ? "" : "now");
fprintdir(((lng & 4) && jn->pwd) ? jn->pwd : pwd, fout);
fprintf(fout, ")\n");
fflush(fout);
}While bash has:
if (strcmp (temp, jobs[job_index]->wd) != 0)
fprintf (stream,
_(" (wd: %s)"), polite_directory_format (jobs[job_index]->wd));To get what you want you can run jobs in a subshell:
( jobs ) | wc -l |
I can't make sense of these return values.
When I run jobs -s | wc -l, I get different results depending on what directory I'm in.
If I'm in the directory that I returned to after suspending the job, I get the correct answer.
If I'm in any other directory, I get the correct answer + 1.
See screenshot:I also ran jobs -s | > test.txt and got a single line document, then ran wc -l < test.txt and got the right output.
Any idea what's causing this? As you can see, it's messing up my background jobs indicator in my shell prompt (right side, blue).
Any suggestions on how to fix this function would be greatly appreciated:
#tells us how many jobs are in the background
function jobs_status() {
count=$(jobs -s | wc -l)
if [[ $count -ne "0" ]]; then
echo "$bg_jobs$split$fg_text $count $fg_jobs"
fi
} | jobs / wc: getting weird return values |
As you noted, invoking firefox a second time will simply ask the running instance to open another window. The -no-remote switch can be used to inhibit this behavior.
Something similar happens with nautilus: it is used to display the desktop window (with its icons), so it's already running when you start it.
|
After a recent workspace reorganization, I am left with a question about the way certain processes interact with the output of jobs.
I am running all of my program in the background on one 'Main' terminal, that way I have control and information from them all neatly in one place. what I have noticed is that when I create instances of some programs in the background, they continue running yet I get notification almost instantly in the console that they have ended. Programs I have noticed this on are:Firefox (only on the 2nd, or higher, instance)
gnome-terminal (may only be on second, since I already have one open when I try this)
nautilus (on first instance)While I can understand the firefox issue, since combining the processes under one parent could make performance/memory sense, I don't understand why a program like nautilus appears to be unable to exist on the jobs list for any amount of time, even though the window remains open and the program remains entirely functional.
| Some processes not remaining in jobs list |
Note that it's not processes you put in the background, but jobs, made of a shell command which could be a compound command starting several processes in parallel (like in sleep 10 | sleep 20 &) or one after the other (like in for i in {1..10}; do sleep $i; done &).
And each of these processes could in turn start more processes (which would still be part of the job, but unknown to zsh as they are not direct descendants), or could change its argument list as reported by ps (like in sh -c 'exec env sleep 10', which runs a process that executes sh, then env, then sleep, all in the same process), or could leave the job (by becoming a new process group leader).
It sounds like that for each job, you want to see the arg list of the processes in that job.
Maybe something like:
for job state ("${(@kv)jobstates}") {
pgid=${${state%%=*}##*:}
echo Job $job:
pgrep -ag $pgid
}Which on your example gives something like:
Job 2:
26590 sleep 1
Job 3:
26591 sleep 2
Job 4:
26592 sleep 3
Job 5:
26593 sleep 4
Job 6:
26594 sleep 5
Job 7:
26595 sleep 6
Job 8:
26596 sleep 7
Job 9:
26597 sleep 8
Job 10:
26598 sleep 9
Job 11:
26599 sleep 10 |
As you can see in the following output the variable $i in the output of ps aux is expanded to sleep 1, sleep 2, etc.
Is there a way to make zsh do the same? In the jobs output all the commands get the same name, which is sleep $i.
$ for i in {1..10}; do sleep $i& done; ps aux | grep sleep; jobs
[6] 1630
[7] 1631
[8] 1632
[9] 1633
[10] 1634
[11] 1635
[12] 1636
[13] 1637
[14] 1638
[15] 1639
root 1630 0.0 0.0 5224 684 pts/3 SN 10:06 0:00 sleep 1
root 1631 0.0 0.0 5224 684 pts/3 SN 10:06 0:00 sleep 2
root 1632 0.0 0.0 5224 744 pts/3 SN 10:06 0:00 sleep 3
root 1633 0.0 0.0 5224 744 pts/3 SN 10:06 0:00 sleep 4
root 1634 0.0 0.0 5224 748 pts/3 SN 10:06 0:00 sleep 5
root 1635 0.0 0.0 5224 752 pts/3 SN 10:06 0:00 sleep 6
root 1636 0.0 0.0 5224 680 pts/3 SN 10:06 0:00 sleep 7
root 1637 0.0 0.0 5224 748 pts/3 SN 10:06 0:00 sleep 8
root 1638 0.0 0.0 5224 748 pts/3 SN 10:06 0:00 sleep 9
root 1639 0.0 0.0 5224 748 pts/3 SN 10:06 0:00 sleep 10
root 1641 0.0 0.0 6144 880 pts/3 S+ 10:06 0:00 grep --color=auto sleep
[6] running sleep $i
[7] running sleep $i
[8] running sleep $i
[9] running sleep $i
[10] running sleep $i
[11] running sleep $i
[12] running sleep $i
[13] running sleep $i
[14] - running sleep $i
[15] + running sleep $iCheers!
| How to get variables expanded in output of jobs command |
When you kill the job with kill %6, you kill the tail and kill grep too.
tail -f /var/log/mintupdate.log|grep ez&
[6] 3368377

If you kill 3368377, you kill just the grep process.
3368376 pts/6 S 0:00 tail -f /var/log/mintupdate.log
3368377 pts/6 S 0:00 grep --color=auto ezOf course it caused to kill the tail -f too....
|
I'm working with some tail -f path/to/my/log/file | grep pattern& and I need to kill the process as quick as possible.
With classic kill {tail PID}, tail still displays its buffer and it takes around 12 second (on my setup) to get tail completely silent.
However, it's much faster when I kill it with kill %{job id} (slightly more than a second).
How is it different to call kill {tail PID} and kill %{job id}?
Some samples :
01/09/2021 15:45:29:670:kill {tail PID}
...
01/09/2021 15:45:39:232: {some log}
01/09/2021 15:45:39:232: {some log}
01/09/2021 15:45:39:232: {last log line}
takes around 10 seconds to fully shutdownwith kill %{job id} :
01/09/2021 10:56:57:793 -> (COM12<):kill %{tail job ID}
...
01/09/2021 10:56:58:966 -> (COM12>):[root@my_board ~]#
takes 1 sec to fully shutdown | "kill %{job id}" vs "kill {job pid}" |
It wouldn't make sense, because each job writes to its own console; you couldn't watch 2 consoles at once.
Think of jobs as an array where you can get 1 item at a time, add 1 at a time, or remove 1 at a time.
https://www.redhat.com/sysadmin/jobs-bg-fg
|
I’ve got 2 background jobs running, I got their job ids using jobs is it possible to bring them both to the foreground using the fg <jobId> cmd? I can’t seem to add 2 parameters.
| Bring multiple background jobs to foreground |
FWIW, I can reproduce your case with:
rhel8$ /bin/jobs(){ jobs -l; }
rhel8$ sleep 1 | sleep 3600 &
[1] 2611
rhel8$ sleep 2
rhel8$ jobs
[1]+ Running sleep 1 | sleep 3600 &
rhel8$ /bin/jobs
[1]+ 2610 Running sleep 1
2611 Running | sleep 3600 &
rhel8$ pgrep 2610
<nothing!>
rhel8$ ls /proc/2610
ls: cannot access '/proc/2610': No such file or directory
rhel8$ /bin/jobs
[1]+ 2610 Running sleep 1
2611 Running | sleep 3600 &
rhel8$ cat /bin/jobs
#!/bin/sh
builtin jobs "$@"Or with (even lamer than the previous):
rhel8$ unset -f /bin/jobs
rhel8$ export JOBS=$(jobs -l)
rhel8$ builtin(){ echo "$JOBS"; }
rhel8$ export -f builtin
rhel8$ /bin/jobs
[1]+ 2610 Running sleep 1
2611 Running | sleep 3600 &
rhel8$ type /bin/jobs
/bin/jobs is /bin/jobsNote: As already demonstrated, jobs -l in bash is displaying stale information, with pipeline processes which have already exited still shown as Running. IMHO this is a bug -- other shells like zsh, ksh or yash correctly show them as Done.
|
I'm running a long-running pipeline from bash, in the background:
find / -size +500M -name '*.txt' -mtime +90 |
xargs -n1 gzip -v9 &The 2nd stage of the pipeline takes a long time to complete (hours) since there are several big+old files.
In contrast, the 1st part of the pipeline completes immediately, and since the pipe isn't full, and it has completed, find exits successfully.
The parent bash process seems to wait properly for child processes.
I can tell this because there's no find (pid 20851) running according to either:
ps alx | grep 20851
pgrep -l findThere's no zombie process, nor there's any process with process-id 20851 to be found anywhere on the system.
The bash builtin jobs correctly shows the job as a single line, without any process ids:
[1]+ Running find / -size +500M -name '*.txt' -mtime +90 | xargs -n1 gzip -v9 &OTOH: I stumbled by accident on a separate job control command (/bin/jobs) which shows:
[1]+ 20851 Running find / -size +500M -name '*.txt' -mtime +90
20852 Running | xargs -n1 gzip -v9 &and which is (wrongly) showing the already exited 20851 find process as "Running".
This is on CentOS (edit: More accurately: Amazon Linux 2 AMI) Linux.
Turns out that /bin/jobs is a two line /bin/sh script:
#!/bin/sh
builtin jobs "$@"This is surprising to me. How can a separate process, started from another program (sh), know the details of a process which is managed by another (bash) after that process has already completed and exited and is NOT a zombie?
Further:
how can it know details (including pid) about the already exited process, when other methods on the system (ps, pgrep) can't?
Edits:
(1) As Uncle Billy noted in the comments, on this system /bin/sh and/bin/bash are the same (/bin/sh is a symlink to /bin/bash) but /bin/jobs is a script with a shebang line so it runs in a separate process.
(2) Also, thanks to Uncle Billy: an easier way to reproduce. /bin/jobs was a red herring. I mistakenly assumed it is the one producing the output. The surprising output apparently came from the bash builtin jobs when called with -l:
$ sleep 1 | sleep 3600 &
[1] 13616
$ jobs -l
[1]+ 13615 Running sleep 1
13616 Running | sleep 3600 &
$ ls /proc/13615
ls: cannot access /proc/13615: No such file or directorySo process 13615 doesn't exist, but is shown as "Running" by bash builtin job control, which appears like a bug in jobs -l.
The presence on /bin/jobs which confused me to think it must be the culprit (it wasn't), seems confusing and questionable. I believe it should be removed from the system as it is useless (a sh script running in a separate process, which can't show jobs of the caller anyway).
| 'jobs' shows a no longer existing process as running |
That happens because the Python interpreter in the background "competes" with the command-line shell for the prompt as soon as it finishes executing the test.py program.
Naturally Python loses that "battle" because it is not the one allowed to go interactive at that moment. However it gets stopped a bit too late, enough to leave its own prompt in a state that will not resume cleanly on an fg command.
One way to settle this is by adding the following lines at the end of your Python program:
import os, signal # access os-level functions and UNIX-signals
os.kill(os.getpid(), signal.SIGSTOP) # send myself the UNIX SIGSTOP signal(Of course the import can also be placed on top of your program as per typical practice.)
That os.kill() line will put the Python interpreter in stopped state, just like it gets when it tries to go interactive at the wrong moment. Only, this time it does it itself, before even attempting to prompt, so that it is not left in an inconsistent state.
You know when that os.kill() is reached because the command-line shell notifies you that Python got Stopped. An fg at that moment will resume Python, making it proceed from the os.kill() line, thus starting its own interactive session.
Don't use bg to resume it after it got Stopped by that os.kill(), because doing so will only make the kernel stop Python again for attempting to go interactive while in background.
|
python3 -i test.py opens an interactive python shell after running test.py. However, if I try to run it in the background with python3 -i test.py &
the job stops automatically with a ^C and shows
[4]+ Stopped python3 -i test.py, and I can't access python's interactive shell (with the variables from test.py still in the environment) afterwards. fging the process (i.e. fg %4) leads to an interactive shell where my input can't be seen but is still run after pressing <Enter>. How do I run the interactive shell "normally" after running test.py in the background?
(For reference, test.py contains
from time import sleep
a = 'hello world'
for i in range(10):
sleep(1)
print(a)and my shell looks like this:
$ python3 -i test.py &
[4] 6708
$ hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
hello world
fg %4
python3 -i test.py
>>> 'hello world'
>>>and I typed a after being prompted by the first >>>, but it isn't shown.
)
-- Edit @muru --
Sending it to the bg after running it normally in the fg gives:
$
$
$ python3 -i test.py
hello world
hello world
hello world
hello world
^Z
[4]+ Stopped python3 -i test.py
$ bg %4
[4]+ python3 -i test.py &
$ hello world
hello world
hello world
hello world
hello world
hello world
echo 'hello world'
hello world[4]+ Stopped python3 -i test.py
$
$where shell was expecting input and I typed echo 'hello world' after the 10 "Hello World"'s.
| Opening interactive python shell after running python script in background |
If this PC has an Intel CPU, then the Thread(s) per core most certainly indicates hyper-threading.
https://en.wikipedia.org/wiki/Hyper-threading

For each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them when possible. The main function of hyper-threading is to increase the number of independent instructions in the pipeline; it takes advantage of superscalar architecture, in which multiple instructions operate on separate data in parallel.

Should I run 4 of them at a time or 8 to reduce the total time cost?

It depends. Some tasks run faster under hyper-threading, and some don't. You'll have to test that yourself.

Assume now I have 800 of them that need to be run.

I'd use GNU Parallel to handle this problem.
https://www.gnu.org/software/parallel/

GNU parallel is a shell tool for executing jobs in parallel using one or more computers.

If you've got a list of files in . which need to be processed, this will work:
find . | parallel -j4 yourprogram

If your earlier tests show that it runs faster with hyper-threading, then change the "4" to an "8".
EDIT: forgot to mention that sometimes programs run faster when you disable HT in the BIOS.
|
I do some scientific calculations on a PC (in fact several PCs) and I want to know how many jobs I should submit at one time. lscpu shows:
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1The ambiguity is 'Thread'. I searched the net and learned something about it. But I still felt confused (it is said that how many jobs should I submit is depend). I do not care about the details of machines. For example, now I have an executable file. If I run it directly, it spends about 10 min. Assume now I have 800 of them need to be run. Should I run 4 of them a time or 8 to reduce the total time cost?
| How many CPU do I have and how many jobs should I submit? |
You can just aggregate the two commands with &&:
cd ..
mv directory.1/file.2 another.directory && rm -r directory.1 & |
I downloaded a directory of files, directory.1
containing the files
file.1
file.2
file.3

I move file.2 by
mv file.2 ../another.directory &My problem is I want to get rid of directory.1 and the contents but I have to wait for my job to finish. Is there a way around this? Or a quick way to do a..
rm -r directory.1

triggered when my mv job is finished?
edit: to be clear... the file I'm moving is ~1GB to another device. So can take a few minutes.
| How can you move a file as a background job and delete all other files and the directory before waiting for the job to finish? |
There are no relations between a PID and a job ID on shells I have used (bash, dash and zsh).
However, a shell job is a child process of the shell, whereas PID 1 (init) is the ancestor of all processes, including the shell. Therefore a process with job id 1 will always have a PID greater than the job ID.
The assignment of a job ID depends on the shell. On bash, usually the job ID assigned is one greater than the greatest job ID of a running background job:
$ sleep 1 & sleep 10 & sleep 1 &
[1] 11367
[2] 11370
[3] 11373
$
[1] Done sleep 1
[3]+ Done sleep 1
$ sleep 1 &
[3] 11378 | I don't know what exactly is JID (job ID) and how it is assigned. What is it's relation to PID and how does one number affect the size of the other in any way?
| What is JID (job ID) and is it always smaller than PID? [closed] |
journalctl --unit=my.service -n 100 --no-pager |
I'm looking for a way, to simply print the last X lines from a systemctl service in Debian. I would like to install this code into a script, which uses the printed and latest log entries. I've found this post but I wasn't able to modify it for my purposes.
Currently I'm using this code, which is just giving me a small snippet of the log files:
journalctl --unit=my.service --since "1 hour ago" -p errTo give an example of what the result should look like, simply type in the command above for any service and scroll until the end of the log. Then copy the last 300 lines starting from the bottom.
My idea is to use egrep, e.g. egrep -m 700 ., but I have had no luck so far.
| How to see the latest x lines from systemctl service log |
that "reader" is just less.

No mention of using the reader is in the man page for journalctl.

err, said man page:

The output is paged through less by default,

but:

but G does not go to the end of the file. In fact, if I press G, the stream freezes and I have to forcibly terminate it.

G works beautifully, the log is just very long, so it's searching for a long time until it reaches the end.
from the man page: -e, --pager-end
Immediately jump to the end of the journal inside the implied pager
tool. This implies -n1000 to guarantee that the pager will not buffer
logs of unbounded size. This may be overridden with an explicit -n
with some other numeric value, while -nall will disable this cap.
 Note that this option is only supported for the less(1) pager.

So,
journalctl -e

is what you want to run!
|
If I type sudo journalctl I get the system journal in some kind of a reader. Pressing j and k works like in Vi, but G does not go to the end of the file. In fact, if I press G, the stream freezes and I have to forcibly terminate it.
No mention of using the reader is in the man page for journalctl.
| How do you go to the end of the file in journalctl? |
They are two totally different things.
On most systems that I'm aware of that have dmesg, it is sometimes a command and sometimes a log file in /var/log, and may be both. The log contains messages produced by the kernel. This will usually include the various device probe messages during the boot sequence as well as any further messages output by the kernel during the running of the system.
Depending on what "journal" refers to, I suppose it may be different things. The journal that first springs to my mind is the journal of a journaled filesystem. This journal contains the various transactions made to a particular partition (part of a disk) and allows the system to replay disk operations consistently in the case of a system crash. This journal is not generally accessible to users.
If "journal" refers to journalctl, then the two are similar, but not the same. journalctl has a --dmesg option that makes it mimic dmesg.
Compare the manuals for journalctl and dmesg on your system.
| I am completely new to Linux.
I know that dmesg and journalctl record commands invoked by my operating-system, but why do 2 recorders exist, what types of messages should I expect to see within each of them, and what are the differences in their life cycles?
| What is the difference between dmesg and journalctl [closed] |
journalctl -f -u mystuff.service

It's in the manual:

-f, --follow
Show only the most recent journal entries, and continuously print new entries as they are appended to the
 journal.

and

-u, --unit=UNIT|PATTERN
Show messages for the specified systemd unit UNIT (such as a service unit), or for any of the units
matched by PATTERN. If a pattern is specified, a list of unit names found in the journal is compared with
the specified pattern and all that match are used. For each unit name, a match is added for messages from
the unit ("_SYSTEMD_UNIT=UNIT"), along with additional matches for messages from systemd and messages
about coredumps for the specified unit.
This parameter can be specified multiple times. |
I want to watch output from a systemd service on CentOS as if I have started this service from console. Yes, I can see output with journalctl, but it doesn't scroll to the bottom automatically. So how can I watch live output from service?
| How to watch output from systemd service? |
EDIT: Please refer to this answer instead. The below answer is kept only for posterity.

This seems to have been implemented in 2018, see this PR. With version 236 and above it looks like you can use --output-fields=, described in --help. Check your version with systemctl --version, my CentOS 7 currently (in 2019) runs version 219 so this will probably take some time to make it out to most environments.
edit: FYI EL8 (as of 2021-04-12) runs systemd 239, so this is available.
|
I'm trying to get the last few lines from journalctl so I can feed them into my conky. However journalctl by default provides too much crap that wastes space: With journalctl -u PROCESS -n 5 --no-pager -l I get entries like:
DATE TIME HOSTNAME PROCESS: MESSAGE
I want to print only TIME MESSAGE. How can I do that?

The manpage says there's an -o argument, but there's no predefined format that fits my need. I tried adding --output-fields=__REALTIME_TIMESTAMP,MESSAGE but that just shows the default output (and not timestamp/message). That argument claims only some formats are affected, so I tried --output-fields=__REALTIME_TIMESTAMP,MESSAGE -o verbose but that only gives me the normal verbose output. Besides, apparently there are 4 fields that are always printed, which is already too many for me. I want just 2: a compact timestamp and the message.
I could use some bash magic or a python script to clean it up, but that seems a bit excessive. Surely there's a way to ask journalctl to give me just a timestamp and message?
| Print only timestamp and message in journalctl |
Systems with s6, runit, perp, nosh, daemontools-encore, et al. doing the service management work this way. Each main service has an individual associated set of log files that can be monitored individually, and a decentralized logging mechanism.
systemd however does not work this way. There is no individual "associated log file" for any given service. There is no such file to be monitored.
All log output is funneled into a single central dæmon, systemd-journald, and that dæmon writes it as a single stream with all services' log outputs combined to the single central journal in /{run,var}/log/journal/.
The -u option to journalctl is a post-processing filter that filters what is printed from the single central journal, all journal entries being tagged with (amongst other things) the name of the associated service. Everything fans in, and it then has to be filtered to separate it back out to (approximately) how it was originally.
The systemd way is to use journalctl -f with appropriate filters added, or write your own program directly using the systemd-specific API for its journal.
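For example (a sketch; substitute your own unit name):

journalctl -f -u myservice.service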
Further reading

https://unix.stackexchange.com/a/294206/5132
Using sudo journalctl -u {service} I can see the log of specific service.How to find the associated log file?
What is the best way to monitor a log file programmatically? (I mean a program the react based on something appears in the log file) | Where to find the log file of specific service |
I'd start with journald's logs. Try one of these:
$ journalctl /usr/bin/gnome-shell
$ journalctl /usr/bin/gnome-sessionIf the logs are not there then try this:
$ journalctl -xeGoogling I did find this thread titled: gnome-session logging which did have several examples that worked when I tried them on an Ubuntu 16.04 system I have.
SYSLOG_IDENTIFIER=gnome-session-binary
$ journalctl SYSLOG_IDENTIFIER=gnome-session-binary -n 5
-- Logs begin at Sun 2018-07-15 06:33:41 EDT, end at Sat 2018-07-21 00:23:03 EDT. --
Jul 16 18:35:21 manny gnome-session-binary[17047]: GLib-GObject-CRITICAL: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
Jul 16 18:35:21 manny gnome-session-binary[17047]: GLib-GObject-CRITICAL: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
Jul 16 18:35:21 manny gnome-session-binary[17047]: GLib-GObject-CRITICAL: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
Jul 17 02:11:43 manny gnome-session-binary[18526]: Entering running state
Jul 17 13:08:48 manny gnome-session-binary[23495]: Entering running stateSYSLOG_IDENTIFIER=gnome-session
$ journalctl SYSLOG_IDENTIFIER=gnome-session -n 5
-- Logs begin at Sun 2018-07-15 06:33:41 EDT, end at Sat 2018-07-21 00:23:03 EDT. --
Jul 20 06:30:10 manny gnome-session[18526]: (gnome-software:18773): Gs-WARNING **: failed to call gs_plugin_refresh on apt: apt transaction returned result exit-failed
Jul 20 06:30:10 manny gnome-session[18526]: (gnome-software:18773): Gs-WARNING **: failed to refresh the cache: no plugin could handle refresh
Jul 20 07:36:18 manny gnome-session[18526]: Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: mkdir failed on directory /var/run/samba/msg.l
Jul 20 07:36:18 manny gnome-session[18526]: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory
Jul 20 07:36:18 manny gnome-session[18526]: Please ask your system administrator to enable user sharing..xsession-errors
$ tail -n 5 ~/.xsession-errors
upstart: indicator-session main process (25074) killed by TERM signal
upstart: indicator-application main process (25139) killed by TERM signal
upstart: unity7 main process (25334) terminated with status 1
upstart: unity-panel-service-lockscreen main process (32162) killed by HUP signal
upstart: Disconnected from notified D-Bus bus |
Where can I find gnome-shell error (or debug) log?
I'm interested in mutter error/warning messages since I have a GLX problem I need to debug and I believe the key could be in gnome-shell's messages.
| gnome-shell error log |
5.10.15 doesn't solve this problem; I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!)
As a workaround, the i915 GuC needs to be enabled, as mentioned in the Arch Linux Wiki (https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading), and the module loaded before the others.
To summarize:

1. Add the guc parameter to the kernel parameters by editing /etc/default/grub:

GRUB_CMDLINE_LINUX="i915.enable_guc=2"

2. Add the guc option to the i915 module by adding an /etc/modprobe.d/i915.conf file with:

options i915 enable_guc=2

3. Add i915 to /etc/mkinitcpio.conf:

MODULES=(i915)

4. Rebuild the kernel initramfs (needs a reboot after a successful build):

# mkinitcpio -P

5. Remove xf86-video-intel (the driver is already in the kernel):

# pacman -Rscn xf86-video-intel
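After rebooting, one quick way to check that the firmware was actually picked up (a sketch):

sudo dmesg | grep -i guc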
For the last three days I have been experiencing random freezes. If I am watching YouTube when this happens, the audio keeps playing but the screen is frozen and the keyboard and cursor do not do anything.
I tried looking in sudo journalctl and this is what I found:
led 04 10:44:02 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=113031 end=113032) time 340 us, min 1073, max 1079, scanline start 1062, end 1085
led 04 11:09:15 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=203838 end=203839) time 273 us, min 1073, max 1079, scanline start 1072, end 1090
led 04 11:15:47 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=227329 end=227330) time 278 us, min 1073, max 1079, scanline start 1066, end 1085

uname -a returns:
Linux arch-thinkpad 5.10.4-arch2-1 #1 SMP PREEMPT Fri, 01 Jan 2021 05:29:53 +0000 x86\_64 GNU/LinuxI use: i3wm, picom, pulseaudio. I have lenovo x390 yoga with intel processor.
How can I diagnose and solve this problem?EDIT: Upgrading linux kernel to 5.10.16 solved my problem. Still I will accept answer of @Sylvain POULAIN for its complex view on the problemand offering alternative solution.
| Arch linux randomly freezes after updating to kernel 5.10 |
Adding --lines=all does the trick - rather than overriding --boot they work together to follow lines since boot.
journalctl --boot --lines=all --follow |
journalctl --boot prints log lines since boot and journalctl --follow prints the last 10 lines of the log and then follows it. But journalctl --boot --follow doesn't work like I expect it to. Rather than printing all the journal lines since boot and then following the journal it just ignores --boot flag. Swapping the flags around makes no difference. How do I print all the log lines since boot and then follow the log?
Version info:
$ journalctl --version
systemd 239
+PAM +AUDIT -SELINUX +IMA +APPARMOR +SMACK -SYSVINIT +UTMP -LIBCRYPTSETUP +GCRYPT -GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid | How to follow journalctl since boot? |
The problem is actually with buffering from the Flask application and not with how systemd or journald are ingesting those logs.
This can be counter-intuitive, since as you mentioned, running python3 run.py directly on the command-line works and shows logs properly and also timestamps look correct on the logs.
The former happens because Unix/Linux will typically set up stdout to be unbuffered when connected to a terminal (since it's expecting interaction with an user), but buffered when connected to a file (in case of StandardOutput=file:...) or to a pipe (in case you're logging to the journal, which is the default.)
The latter is because the Python/Flask logger is adding timestamps, so even though it's buffering that output, when it finally issues it into the logs, all the timestamps are there.
Some applications will know this is typically an issue and will set up buffering on stdout appropriately when using it for logs, but this doesn't seem to be the case with this particular Python/Flask setup you are using.
On Python, it's fairly easy to globally change stdout to unbuffered mode, which you can do by either of the following (see the unit-file sketch just after this list):

Passing a -u flag to python3 in your command.
Setting PYTHONUNBUFFERED=1 in your environment (which you can do in the systemd service unit with an additional Environment=PYTHONUNBUFFERED=1 line.)You confirmed this worked for your specific case, so that's great!
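In the unit file from the question, that means one of these two changes (a sketch):

ExecStart=/home/myuser/myapp/.venv/bin/python3 -u /home/myuser/myapp/run.py
# or, equivalently, keep ExecStart as it is and add this line to the [Service] section:
Environment=PYTHONUNBUFFERED=1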
For non-Python applications suffering from similar issues, there are command-line tools such as unbuffer and stdbuf which can often solve this same problem.
Solutions are usually specific to the kind of application, which is somewhat unfortunate, but often googling or looking for other answers in Stack Exchange (once you know buffering is the issue) will usually lead you to an useful suggestion.
|
I'm having a systemd service defined as follows, that works fine:
[Unit]
Description=my service
After=network.target

[Service]
User=myuser
Group=mygroup
WorkingDirectory=/home/myuser/myapp
Environment="PATH=/home/myuser/myapp/.venv/bin"
ExecStart=/home/myuser/myapp/.venv/bin/python3 /home/myuser/myapp/run.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

This is a Python web application based on Flask framework. Normally in the stdout of the application I can see incoming requests "live", I mean when I run the app like python run.py.
Now after starting the service I'd like to follow logs of the app and I do:
sudo journalctl -f -u my_app.service

and incoming logs are awfully slow - sometimes it takes minutes or more for them to appear in logs. Afterwards they all have proper timestamp though, so it's not like they're disappearing, they do, but after a long time.
What I've tried:

to redirect systemd service output to files:
StandardOutput=file:/var/log/my_app/output.log
StandardError=file:/var/log/my_app/error.log
with no luck - they save fine but with the same slow speed
to try to dump journalctl logs to offline faster setting SyncIntervalSec from default 5m to 5s - didn't help eitherIs there any way to pass those logs faster from my application to journald? I don't have troubles with other services like system authentication ones - I see records immediately.
My journald.conf file has default parameters (except one above), my systemd is version 237 and I'm running Ubuntu 18.04.
| Getting systemd service logs faster from my service |
I've found the answer. You need to use sd_journal_send() from systemd/sd-journal.h.
You can also use the SYSLOG_IDENTIFIER and SYSLOG_PID tags to customise what is used.
More info on the available tags can be found here.
Example:
// needs <systemd/sd-journal.h>, <string>, <unistd.h>; link with -lsystemd
std::string sysLogIdentifier("SYSLOG_IDENTIFIER=");
sysLogIdentifier += program_invocation_short_name;   // GNU extension declared in <errno.h>

std::string sysLogPid("SYSLOG_PID=");
sysLogPid += std::to_string(getpid());   // append the PID as text (a plain += getpid() would append a single character)

sd_journal_send("MESSAGE=Found the answer",
sysLogIdentifier.c_str(),
sysLogPid.c_str(),
NULL);Output:
Feb 10 17:11:48 hostname processB [1418]: Found the answer |
I have a process which is started by systemd - lets call it A. This process spawns numerous child processes - lets just take one and call it B.
This is a C++ application. When you print to std::cout the output is captured by systemd and can be viewed with the journalctl command.
Whenever a message is printed to std::cout from process A it appears in the journalctl output with the name of process A preceding the log message - makes sense.
Nov 09 16:27:17 hostname processA [1417]: message from process AWhenever a message is printed from process B, however, the message that is printed is still preceded by the name of process A.
Nov 09 16:27:18 hostname processA [1417]: message from process BI presume this is expected behaviour as it displaying the name of the process that was actually started by systemd - disregarding the fact it was raised by a child of that process. It does appear as if systemd is aware of there being multiple processes when you use the systemctl status processA command though:
Active: active (running) since Wed 2016-11-09 16:27:20 GMT; 30min ago
Main PID: 1417 (processA)
CGroup: /system.slice/processA.service
├─1417 /opt/test/bin/processA
├─1450 /opt/test/bin/processBMy question is: Is there a way for the output in journalctl to display the child process name when it's output is captured?
| systemd - journalctl output always shows the parent process name in log entries |
The -o option allows the timestamp formatting to be chosen among a few options, including ISO format (e.g. 2022-08-04T17:45:08+0200) with
journalctl -o short-isoor
journalctl -o short-iso-preciseif you need milliseconds.
|
How can I configure the journalctl output to use a specified date format?
I know I can parse out the line and convert to a date format, but I'm hoping to see the log lines with the ISO format.
| journalctl set default datetime format |
You can use the invocation id, which is a unique identifier for a specific run of a service unit.
It was introduced in systemd v232, so you need at least that version of systemd for this to work.
To get the invocation id of the current run of the service:
$ unit=prometheus
$ systemctl show -p InvocationID --value "$unit"
0e486642eb5b4caeaa5ed1c56010d5cfAnd then to search journal entries with that invocation id attached to them:
$ journalctl INVOCATION_ID=0e486642eb5b4caeaa5ed1c56010d5cf + _SYSTEMD_INVOCATION_ID=0e486642eb5b4caeaa5ed1c56010d5cfI found that you need both INVOCATION_ID and _SYSTEMD_INVOCATION_ID to get all the logs. The latter is added by systemd for logs output by the unit itself (e.g. the stdout of the process running in that service), while the former is attached to events taken by systemd (e.g. "Starting" and "Started" messages for that unit.)
Note that you don't need to filter by the unit name as well. Since the invocation id is unique, filtering by the id itself is sufficient to only include logs for the service you're interested in.
|
Is there a canonical way to get all the logs from journalctl since a service was last restarted? What I want to do is restart a service and immediately see all the logs since I initiated the restart.
I came up with:
$ unit=prometheus
$ sudo systemctl restart $unit
$ since=$(systemctl show $unit | grep StateChangeTimestamp= | awk -F= '{print $2}')
$ sudo systemctl status -n0 $unit && sudo journalctl -f -u $unit -S "$since"This will probably work, but I was wondering if there is a more concrete way to say: restart and give me all logs from that point onwards.
| Show journal logs from the time a service was restarted |
As answered over on ServerFault, this is because less is invoked with the K flag, which causes it to die upon receiving a ^C character, rather than returning to its command prompt.
To remedy this, export the variable SYSTEMD_LESS="FRSXM" into your environment. This is the standard set of flags that systemd passes to less, minus the problematic K that makes it impossible to break out of follow mode.
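For example, to make that permanent for your user (a sketch, assuming bash):

echo 'export SYSTEMD_LESS=FRSXM' >> ~/.bashrc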
|
An interesting annoyance that just plagued a coworker:
If you less a file that's being appended to, you can hit shift-f to start following the output stream in real time. Then, to stop following the output, you hit ctrl-c, after which you can navigate and search the file as usual.
This does not work when using journalctl. Say you want to follow your nginx log - you'd run journalctl -u nginx, and then the usual shift-f to start following the output. However, when you press ctrl-c, less immediately terminates, rather than exiting the "follow" mode and returning to "navigation" mode as it does when following a file.
Needless to say, this is incredibly annoying. Why is this, and how do I restore the normal functionality?
| Breaking out of follow mode with less and journalctl |
But it's very annoying that I cannot refresh the logs. Is there a way to do that with less? Using Shift + F doesn't work.
You might be looking for journalctl's --follow/-f. That cancels the use of a pager by journalctl, so you need to add the pipe to less yourself.
journalctl -fu xyz | less
Then less's Shift+F works to see new entries that are added while you're viewing in less.
If you want to scroll up in less while less is waiting for new input (because you hit Shift+F or simply scrolled to the bottom), then make sure to use Ctrl+X rather than Ctrl+C to interrupt its waiting for new entries. If you use Ctrl+C, that's going to kill journalctl so you wouldn't be able to get new entries afterwards.
Here's an example video. I hope this clarifies.
For Ctrl+X, you need to update to less version 633, released last May.
|
I'm using less to view my journalctl logs because it's more convenient. It doesn't clutter the console window with logs after you exit less and you can scroll using your mouse wheel.
journalctl --unit xyz | less +G
But it's very annoying that I cannot refresh the logs. Is there a way to do that with less? Using Shift + F doesn't work.
| Is it possible to dynamically refresh journalctl with less? |
tl;dr
Create the following file:
# /etc/tmpfiles.d/somegroup_journal.conf
#Type Path Mode User Group Age Argument
a+ /run/log/journal - - - - d:group:somegroup:r-x
a+ /run/log/journal - - - - group:somegroup:r-x
a+ /run/log/journal/%m - - - - d:group:somegroup:r-x
a+ /run/log/journal/%m - - - - group:somegroup:r-x
a+ /run/log/journal/%m/*.journal* - - - - d:group:somegroup:r--
a+ /run/log/journal/%m/*.journal* - - - - group:somegroup:r--How to figure it out:
man systemd-journald.service(8) has the following:
Additional users and groups may be granted access to journal files via file system access control lists (ACL). Distributions and administrators may choose to grant read access to all members of the "wheel" and "adm" system groups with a command such as the following:
# setfacl -Rnm g:wheel:rx,d:g:wheel:rx,g:adm:rx,d:g:adm:rx /var/log/journal/
While this sounds perfect, the example touches /var/log/journal/, but journalctl prioritizes /run/log/journal/ as demonstrated by the following source:
if (laccess("/run/log/journal", F_OK) >= 0)
dir = "/run/log/journal";
else
dir = "/var/log/journal";/* If we are in any of the groups listed in the journal ACLs,
* then all is good, too. Let's enumerate all groups from the
* default ACL of the directory, which generally should allow
* access to most journal files too. */
r = acl_search_groups(dir, &g);
/run is mounted as tmpfs, so the following ACL rule would probably not persist:
# setfacl -Rnm g:somegroup:rx,d:g:somegroup:rx /run/log/journal/
To make this persist, configure whatever is used to generate /run/log/journal. Browsing a few more sources, we find tmpfiles.d/systemd.conf.m4:
z /run/log/journal 2755 root systemd-journal - -
Z /run/log/journal/%m ~2750 systemd-journal - -
m4_ifdef(`HAVE_ACL',`
a+ /run/log/journal/%m - - - - d:group:adm:r-x
a+ /run/log/journal/%m - - - - group:adm:r-x
a+ /run/log/journal/%m/*.journal* - - - - d:group:adm:r--
')'m4_dnl
This suggests that the ACL rules need to be added in tmpfiles.d. The compiled version of the above file is found locally at /usr/lib/tmpfiles.d/systemd.conf. Combining that example with man tmpfiles.d(5) gives some details to help create a working solution.
Create the following file:
# /etc/tmpfiles.d/somegroup_journal.conf
#Type Path Mode User Group Age Argument
a+ /run/log/journal - - - - d:group:somegroup:r-x
a+ /run/log/journal - - - - group:somegroup:r-x
a+ /run/log/journal/%m - - - - d:group:somegroup:r-x
a+ /run/log/journal/%m - - - - group:somegroup:r-x
a+ /run/log/journal/%m/*.journal* - - - - d:group:somegroup:r--
a+ /run/log/journal/%m/*.journal* - - - - group:somegroup:r--
A quick test plus reboot confirms that this works!
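If you want to check the result without a reboot first, you can apply the new rules by hand and inspect the ACLs they produce (assuming the acl tools are installed; run as root):
systemd-tmpfiles --create /etc/tmpfiles.d/somegroup_journal.conf
getfacl /run/log/journal
Then run journalctl as a user who is a member of somegroup to confirm the permissions hint is gone.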
|
How do I grant read-only permission to somegroup to read the system journal? (I'm on Debian10 buster).
$ journalctl
Hint: You are currently not seeing messages from other users and the system.
Users in the 'systemd-journal' group can see all messages. Pass -q to
turn off this notice.
No journal files were opened due to insufficient permissions.
I know I can add a user to the systemd-journal group, but how do I give a group read-permission?
| How to let a specific group have read-access to systemd's journal? |
I have been fighting the same problem for quite a while now. I have a Yocto-based distro where I have a read-only rootfs and the /var/log folder is mounted in a different partition using volatile-binds. Once I got journald to log to the new partition I noticed the same thing, journalctl --list-boots would only show the current boot and not the older ones. After a lot of trial and error I ended up figuring out why.
As you point out, /var/log/journal shows 3 folders:
root@mir-edb-intel-gen1:~# ls -lrt /var/log/journal/
total 24
drwxr-sr-x+ 2 root systemd-journal 4096 Mar 5 09:24 7f7f3d516b554d718d688a0b0fa9648e
drwxr-sr-x+ 2 root systemd-journal 4096 Mar 5 09:25 93eb74229e9244a4b7f60c90acacc12f
drwxr-sr-x+ 2 root systemd-journal 4096 Mar 5 10:57 4a9652b4b33a4a099bdc6be8c6fb2b1a
And journalctl --list-boots shows only 1 entry:
0 6cb4f9f236954f69937be217f36c1ce2 Fri 2021-03-05 10:57:48 UTC—Fri 2021-03-05 11:01:22 UTC
However, if you run journalctl -D /var/log/journal --list-boots all the 3 boots appear as expected:
root@mir-edb-intel-gen1:~# journalctl -D /var/log/journal --list-boots
-2 9ab6d00dea9d41da9c76bf2a3f64895c Fri 2021-03-05 09:24:20 UTC—Fri 2021-03-05 09:25:03 UTC
-1 00509584245d44d599d88db8dccd4177 Fri 2021-03-05 09:25:37 UTC—Fri 2021-03-05 10:14:42 UTC
 0 6cb4f9f236954f69937be217f36c1ce2 Fri 2021-03-05 10:57:48 UTC—Fri 2021-03-05 11:01:22 UTC
Although I'm not that knowledgeable in systemd, I believe that the problem is due to the fact that we are flushing the journal from /run/log/journal to /var/log/journal every time we boot instead of writing more to the existing log in /var/log/journal. Then, when you call journalctl it only looks at the newest folder, therefore being unable to find the previous boots unless you specifically tell it to look at the whole /var/log/journal.
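One possible workaround while that is the case (my own suggestion, not something taken from the Yocto documentation) is to make journalctl always search the whole directory, for example with a shell alias:
alias journalctl='journalctl -D /var/log/journal'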
Edit:
@Jaap Joris Vens seems to have a good explanation for the behaviour in this answer
Edit2:
In our case the /etc/machine-id file is an empty file as explained in the poky/meta/classes/rootfs-postcommands.bbclass:
#
# A hook function to support read-only-rootfs IMAGE_FEATURES
#
read_only_rootfs_hook () { [...] if ${@bb.utils.contains("DISTRO_FEATURES", "systemd", "true", "false", d)}; then
# Create machine-id
# 20:12 < mezcalero> koen: you have three options: a) run systemd-machine-id-setup at install time, b) have / read-only and an empty file there (for stateless) and c) boot with / writable
touch ${IMAGE_ROOTFS}${sysconfdir}/machine-id
fi
} |
I am using Yocto to produce a custom image for a small embedded Linux system with SystemD Version 241. The root file system is Read-Only. I am using bind mounts and overlayfs to make the /var/log/journal directory exist on a separate Read/Write partition. I have a problem where systemd-journald gets "Amnesia" and does not remember previous boot logs, even though they are on the persistent Read/Write filesystem. This means journal cannot access or clean older log files from previous boots, even though the log files are present and valid on the filesystem.
Yocto volatile binds
# Setup overlayfs binds for various RW files
VOLATILE_BINDS_append = " \
/persistent-storage/var/log /var/log\n\
"The path /var/log exists:
root@me:/var/log# cd /var/log/
root@me:/var/log# ls -lrt
total 9
drwxr-xr-x 2 root root 1024 Jun 3 01:50 nginx
-rw-r--r-- 1 root root 5260 Jun 9 17:56 messages
drwxr-sr-x 5 root systemd-journal 1024 Jun 9 18:00 journal
root@me:/var/log# ls -lrt journal/
total 3
drwxr-sr-x 2 root systemd-journal 1024 Jun 9 17:56 5f6085cd81114e8688cf23b3bb91933e
drwxr-sr-x 2 root systemd-journal 1024 Jun 9 17:57 de59603d1ea24e7582ed7d7ed3ac8fb0
drwxr-sr-x 2 root systemd-journal 1024 Jun 9 18:00 0c34cc794e6c4241a75774bbb8324102I have a journald config file fragment in /lib/systemd/journald.conf.d/10-persistent-journal.conf that looks like this:
# By default the maximum use limit (SystemMaxUse) is 10% of the filesystem, and the minimum
# free space (SystemKeepFree) value is 15% - though they are both capped at 4G.
# The journals should be rotated automatically when they reach the SystemMaxFileSize value,
# and the number of journals is controlled by SystemMaxFiles. If you prefer time based
# rotation you can set a MaxFileSec to set the maximum time entries are stored in a single journal.
[Journal]
Storage=persistent
SystemMaxFileSize=128M
SystemMaxFiles=10
SystemMaxUse=256M
SystemKeepFree=256M
SyncIntervalSec=30
The problem is that even though I reboot several times, and journald successfully finds and writes logs to /var/log/journal, it can never find previous logs and has no knowledge about previous boot logs. This means I cannot vacuum previous logs and my partition runs out of space even though journald should maintain 50% of the partition free.
root@me:/# journalctl --list-boots
0 82fef865e29e481aae27bd247c10e591 Tue 2020-06-09 18:00:12 UTC—Tue 2020-06-09
18:15:23 UTC
Even though:
root@me:/# ls -lrt /var/log/journal/
total 3
drwxr-sr-x 2 root systemd-journal 1024 Jun 9 17:56 5f6085cd81114e8688cf23b3bb91933e
drwxr-sr-x 2 root systemd-journal 1024 Jun 9 17:57 de59603d1ea24e7582ed7d7ed3ac8fb0
drwxr-sr-x 2 root systemd-journal 1024 Jun 9 18:00 0c34cc794e6c4241a75774bbb8324102Also, the following commands work:
root@me:/# journalctl -b 0
<information>
root@me:/# journalctl -b 1
<information>
root@me:/# journalctl -b 2
Data from the specified boot (+2) is not available: No such boot ID in journal
I read this post: Can be the journal path on a filesystem other than /?. And I tried the following mount file, however I see exactly the same behavior:
[Unit]
Description=Persistent Journal Storage Bind
[Mount]
What=/anotherfs/journal
Where=/var/log/journal
Type=none
Options=bind
[Install]
WantedBy=local-fs.target
What am I doing wrong and how can I get journald to work with persistent logs on a bind mount system?
| systemd-journald persistent logs do not work with bind mount /var/log |
I've finally found the error; my laptop is an HP Spectre 13 v001nf.
For future users:
You should add this in your rc.local (if you have systemd, add this as a service)
#!/bin/sh -e
for device in XHC PWRB
do
if grep -q "$device.*enabled" /proc/acpi/wakeup
then
echo $device > /proc/acpi/wakeup
fi
done
exit 0
And voila, suspend and hibernate work.
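For reference, you can list which wakeup sources are currently enabled on your own machine (so you know which device names, such as XHC or PWRB, to put in the loop):
grep enabled /proc/acpi/wakeup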
|
I've a problem, every time I close the lid the laptop wakes up 2 seconds after (with the lid still closed)
systemctl suspend/hibernates comes to the same result
The problem happens on every linux distros (Ubuntu, ElementaryOS, debian, ...)
Here is my journalctl when closing the lid
-- Logs begin at Sun 2018-06-10 12:00:48 CEST. --
juin 10 18:16:26 PierreArch systemd-logind[294]: Lid closed.
juin 10 18:16:26 PierreArch systemd-logind[294]: Suspending...
juin 10 18:16:26 PierreArch systemd[1]: Reached target Sleep.
juin 10 18:16:26 PierreArch systemd[1]: Starting Suspend...
juin 10 18:16:26 PierreArch systemd-sleep[352]: Suspending system...
juin 10 18:16:26 PierreArch kernel: PM: suspend entry (deep)
juin 10 18:16:31 PierreArch kernel: PM: Syncing filesystems ... done.
juin 10 18:16:31 PierreArch kernel: Freezing user space processes ... (elapsed 0.001 seconds) done.
juin 10 18:16:31 PierreArch kernel: OOM killer disabled.
juin 10 18:16:31 PierreArch kernel: Freezing remaining freezable tasks ... (elapsed 0.000 seconds) done.
juin 10 18:16:31 PierreArch kernel: Suspending console(s) (use no_console_suspend to debug)
juin 10 18:16:31 PierreArch kernel: ACPI: EC: interrupt blocked
juin 10 18:16:31 PierreArch kernel: ACPI: Preparing to enter system sleep state S3
juin 10 18:16:31 PierreArch kernel: ACPI: EC: event blocked
juin 10 18:16:31 PierreArch kernel: ACPI: EC: EC stopped
juin 10 18:16:31 PierreArch kernel: PM: Saving platform NVS memory
juin 10 18:16:31 PierreArch kernel: Disabling non-boot CPUs ...
juin 10 18:16:31 PierreArch kernel: smpboot: CPU 1 is now offline
juin 10 18:16:31 PierreArch kernel: smpboot: CPU 2 is now offline
juin 10 18:16:31 PierreArch kernel: smpboot: CPU 3 is now offline
juin 10 18:16:31 PierreArch kernel: ACPI: Low-level resume complete
juin 10 18:16:31 PierreArch kernel: ACPI: EC: EC started
juin 10 18:16:31 PierreArch kernel: PM: Restoring platform NVS memory
juin 10 18:16:31 PierreArch kernel: Enabling non-boot CPUs ...
juin 10 18:16:31 PierreArch kernel: x86: Booting SMP configuration:
juin 10 18:16:31 PierreArch kernel: smpboot: Booting Node 0 Processor 1 APIC 0x2
juin 10 18:16:31 PierreArch kernel: cache: parent cpu1 should not be sleeping
juin 10 18:16:31 PierreArch kernel: CPU1 is up
juin 10 18:16:31 PierreArch kernel: smpboot: Booting Node 0 Processor 2 APIC 0x1
juin 10 18:16:31 PierreArch kernel: cache: parent cpu2 should not be sleeping
juin 10 18:16:31 PierreArch kernel: CPU2 is up
juin 10 18:16:31 PierreArch kernel: smpboot: Booting Node 0 Processor 3 APIC 0x3
juin 10 18:16:31 PierreArch kernel: cache: parent cpu3 should not be sleeping
juin 10 18:16:31 PierreArch kernel: CPU3 is up
juin 10 18:16:31 PierreArch kernel: ACPI: Waking up from system sleep state S3
juin 10 18:16:31 PierreArch kernel: ACPI: button: The lid device is not compliant to SW_LID.
juin 10 18:16:31 PierreArch kernel: ACPI: EC: interrupt unblocked
juin 10 18:16:31 PierreArch kernel: ACPI: EC: event unblocked
juin 10 18:16:31 PierreArch kernel: usb 1-7: reset full-speed USB device number 3 using xhci_hcd
juin 10 18:16:31 PierreArch kernel: ata1: SATA link down (SStatus 4 SControl 300)
juin 10 18:16:31 PierreArch kernel: ata2: SATA link down (SStatus 4 SControl 300)
juin 10 18:16:31 PierreArch kernel: usb 1-5: reset high-speed USB device number 2 using xhci_hcd
juin 10 18:16:31 PierreArch kernel: psmouse serio1: synaptics: queried max coordinates: x [..5690], y [..4772]
juin 10 18:16:31 PierreArch kernel: psmouse serio1: synaptics: queried min coordinates: x [1250..], y [1084..]
juin 10 18:16:31 PierreArch kernel: OOM killer enabled.
juin 10 18:16:31 PierreArch kernel: Restarting tasks ...
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: Bootloader revision 0.0 build 2 week 52 2014
juin 10 18:16:31 PierreArch kernel: done.
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: Device revision is 5
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: Secure boot is enabled
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: OTP lock is enabled
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: API lock is enabled
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: Debug lock is disabled
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: Minimum firmware build 1 week 10 2014
juin 10 18:16:31 PierreArch systemd[1]: Starting Load/Save RF Kill Switch Status...
juin 10 18:16:31 PierreArch systemd-rfkill[401]: Failed to open device rfkill0: No such device
juin 10 18:16:31 PierreArch systemd[1]: bluetooth.target: Unit not needed anymore. Stopping.
juin 10 18:16:31 PierreArch systemd[1]: Started Load/Save RF Kill Switch Status.
juin 10 18:16:31 PierreArch kernel: Bluetooth: hci0: Found device firmware: intel/ibt-11-5.sfi
juin 10 18:16:31 PierreArch systemd[1]: Stopped target Bluetooth.
juin 10 18:16:31 PierreArch kernel: thermal thermal_zone7: failed to read out thermal zone (-61)
juin 10 18:16:31 PierreArch systemd-sleep[352]: System resumed.
juin 10 18:16:31 PierreArch kernel: PM: suspend exit
juin 10 18:16:31 PierreArch systemd[1]: Started Suspend.
juin 10 18:16:31 PierreArch systemd[1]: sleep.target: Unit not needed anymore. Stopping.
juin 10 18:16:31 PierreArch systemd[1]: Stopped target Sleep.
juin 10 18:16:31 PierreArch systemd[1]: Reached target Suspend.
juin 10 18:16:31 PierreArch systemd[1]: suspend.target: Unit not needed anymore. Stopping.
juin 10 18:16:31 PierreArch systemd[1]: Stopped target Suspend.
juin 10 18:16:31 PierreArch systemd-logind[294]: Operation 'sleep' finished.
juin 10 18:16:33 PierreArch kernel: Bluetooth: hci0: Waiting for firmware download to complete
juin 10 18:16:33 PierreArch kernel: Bluetooth: hci0: Firmware loaded in 2000483 usecs
juin 10 18:16:33 PierreArch kernel: Bluetooth: hci0: Waiting for device to boot
juin 10 18:16:33 PierreArch kernel: Bluetooth: hci0: Device booted in 15656 usecs
juin 10 18:16:33 PierreArch kernel: Bluetooth: hci0: Found Intel DDC parameters: intel/ibt-11-5.ddc
juin 10 18:16:33 PierreArch kernel: Bluetooth: hci0: Applying Intel DDC parameters completed
juin 10 18:16:36 PierreArch kernel: hp_wmi: Unknown event_id - 131073 - 0x0
juin 10 18:16:38 PierreArch systemd-logind[294]: Lid opened.
Hope you can help me, thank you very much in advance
| Laptop wake up when lid is closed |
TL;DR: This will work:
$ journalctl _SYSTEMD_UNIT=vpn.service + SYSLOG_IDENTIFIER=vpn.sh
You can use + to connect two sets of conditions and look for journal log lines that match either expression. (This is documented in the man page of journalctl.)
In order to do that, you need to refer to them by their proper field names (the flags -u and -t are shortcuts for those.)
You can look at systemd.journal-fields(5) for the documentation of the field names. (That page will also explain why one of them has a leading underscore and the other one doesn't.)
For _SYSTEMD_UNIT you will need an exact match, including the .service suffix (the -u shortcut is smart and will find the exact unit name when translating it to a query by field.)
Putting it all together, you'll get the command above.
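As for the second part of the question (making the unit's logs use the same identifier): service units do support a SyslogIdentifier= setting, so a sketch like the following in vpn.service should let journalctl -t vpn.sh also cover the service's own stdout/stderr output; systemd's own messages about the unit are not affected:
[Service]
SyslogIdentifier=vpn.sh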
|
I have a vpn service unit for which I can view the logs with...
journalctl -u vpn
I also have a script that interacts with the vpn manually and is logged to journal with...
exec > >(systemd-cat -t vpn.sh) 2>&1
and I can view the logs with...
journalctl -t vpn.sh
I tried viewing both logs with...
journalctl -u vpn -t vpn.sh
but it didn't work.
Is there a way to view both logs at the same time? Or is it possible to set the identifier (-t vpn.sh) in the vpn service unit file to match the identifier of my script (vpn.sh).
| How can I view journalctl logs by unit and identifier with one command? |
you can disable colour in journalctl like so:
SYSTEMD_COLORS=false journalctl
You could then add it to your ~/.bashrc, exporting the env variable:
export SYSTEMD_COLORS=false
That will be global though and may affect other systemd related things.
I guess a more direct solution would be to add an alias to your ~/.bashrc, like so:
alias journalctl='SYSTEMD_COLORS=false journalctl' |
I have a dark background for my console so there's quite a bit of journalctl output that is unreadable.
I see lots of information about how to add color! But how do I completely disable it?
| journalctl output disable colorization |
Others point out that running journald without any persistent logs is an option. This approach is documented without any particular warnings, and is used on large numbers of systems. Fedora started with no persistent journal plus a syslog daemon, and Debian still defaults that way.
So there's no reason to expect a problem.
I would feel free to mask the original service, and arrange for the flush to be run later however you like.
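For example, the masking and the later manual flush are just the standard invocations (run as root):
systemctl mask systemd-journal-flush.service
journalctl --flush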
If at some later point you have a weird system crash during the boot process, you might want to re-enable it (and set a low SyncIntervalSec= in journald.conf), to try and recover any relevant log messages.
|
On my archlinux installation, I realised that flushing the journal logs to disk by the systemd-journal-flush service significantly prolongs the boot process and masking the service improves boot time. Can I permanently mask the service and run journalctl --flush later when the computer is idle to flush the journal logs to disk. Will this cause any undesirable system behaviour?
| Can I mask the systemd-journal-flush service and run journalctl --flush later manually? |
The new binary logs on Linux operating systems do not work in the way that the old binary logs did.
The old binary logs were /var/log/wtmp and /var/log/btmp. At system bootstrap an entry would be written to wtmp with the username reboot, and at shutdown an entry would be written to wtmp with the username shutdown. Finding the times that the system was rebooted was a matter of using the last reboot and last shutdown commands to print out these entries.
The new binary logs are the systemd journal, and they do not have such entries.
Instead, every journal record has a field named the boot ID. You can see this with the -o verbose option to journalctl. A boot ID is generated by the kernel at bootstrap, and systemd-journald applies the current boot ID, taken from the kernel, to every log record as it is adding it to the journal.
To implement the list-boots functionality, journalctl scans the entire journal, reading the timestamps and boot IDs of every record, and noting the earliest and latest timestamps associated with each unique boot ID.
There are no explicit boot log entries to vacuum, as was the case with the old binary logs. Instead you have to get rid of all entries stamped with the boot IDs that you no longer want to see.
Your problem is that because you are a privileged user, the entire journal also includes the per-user journals: those user-1001.journal and user-42.journal files that you can see in your directory listing. And as you can also see, you have per-user journals for some users that have not been touched since 2014 and 2015. Those are where the journal entries are that have these old boot IDs.
As the manual says, "vacuuming" only operates on archived journal files, not on the currently active files. But as you can further see those various per-user journals dated 2014 and 2015 are the still currently active files for those users. Ironically, they never grew large enough to be archived.
You have two options:Use the --system option to restrict journalctl --list-boots to just the system files and have it not read the per-user files.
Use journalctl --rotate to force the per-user journals to be archived (as well as the system journal — cave!), so that they will be "vacuum"-able. If your journalctl does not have this, you have to send a signal to the systemd-journald process, as described on its manual page.Of course, before doing that you might want to investigate why your gdm user account has a per-user journal at all, and what user ID 995 was doing on April Fools' Day 2015. ☺
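For the second option, the minimal sequence might look like this (the one-year retention threshold is only an example):
sudo journalctl --rotate
sudo journalctl --vacuum-time=1years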
You might also want to see what results from journalctl -n 20 _BOOT_ID=53baf678f0d749d6b390afea4a3ef96b and journalctl --reverse -n 20 _BOOT_ID=53baf678f0d749d6b390afea4a3ef96b and their equivalents for the other boot IDs.
|
The system has very old boots (2 and 3 years old), which are not recycled and I haven't been able to vacuum them:
$ journalctl --list-boots --no-pager
-16 53baf678f0d749d6b390afea4a3ef96b Wed 2014-04-02 22:07:26 IDT—Wed 2014-04-02 22:46:08 IDT
-15 60a54132f5c8450d9b33a77819a037d1 Thu 2014-04-03 00:04:50 IDT—Thu 2014-04-03 12:30:21 IDT
-14 24b65a7e589d4479bf5020b98b8120b7 Wed 2015-04-01 03:10:01 IDT—Wed 2015-04-01 08:35:21 IDT
-13 43398d6d74c849bcb359a2d3963f4aaa Wed 2015-04-08 00:26:31 IDT—Wed 2015-04-08 00:26:31 IDT
-12 51b28f394cbb4699b2c4098297f73b2e Mon 2017-07-24 18:28:02 IDT—Mon 2017-07-24 19:08:37 IDT
-11 67467a640fb5413189f9cd518a56f668 Tue 2017-07-25 01:21:00 IDT—Mon 2017-07-24 22:54:40 IDT
-10 1370875e2f2c4b3c80c82901367b0835 Tue 2017-07-25 03:05:18 IDT—Tue 2017-07-25 01:18:51 IDT
-9 462c24a6b4cd487c834e121240bb880c Tue 2017-07-25 13:16:02 IDT—Tue 2017-07-25 18:50:32 IDT
-8 970d61bd3a6f455bb67a7a77c788b930 Wed 2017-07-26 00:01:58 IDT—Wed 2017-07-26 02:37:09 IDT
-7 dc33b354faa64c7c981da25eb9b77bde Wed 2017-07-26 12:45:04 IDT—Wed 2017-07-26 17:01:02 IDT
-6 1bb69b41c09c40aea412714b09678cf2 Wed 2017-07-26 20:01:12 IDT—Wed 2017-07-26 18:43:34 IDT
-5 9a6ed1de771d4056b8be15409dfe06f4 Wed 2017-07-26 23:18:25 IDT—Tue 2017-08-01 00:04:56 IDT
-4 e3eba22761bc470ca9bd1d9004478ad1 Tue 2017-08-01 13:12:55 IDT—Tue 2017-08-01 18:40:22 IDT
-3 02d288fc10714e0592b24ea1cbaf60e4 Tue 2017-08-01 23:00:01 IDT—Wed 2017-08-02 01:29:53 IDT
-2 3230c51e8792424aaec920fa15fa96c0 Wed 2017-08-02 12:54:43 IDT—Wed 2017-08-02 18:09:52 IDT
-1 10621f49412c43cf976ab30555e6eb36 Wed 2017-08-02 22:37:36 IDT—Wed 2017-08-02 20:22:57 IDT
0 d2e38bd2d96b4027ac14e132638561fb Wed 2017-08-02 23:23:07 IDT—Thu 2017-08-03 00:03:01 IDTI tried vacuuming by time:
$ sudo journalctl --vacuum-time=1years
But nothing was deleted.
I tried by files:
$ sudo journalctl --vacuum-files=12Deleted archived journal /var/log/journal/b78deda26d9b4740a6bc52f31d993baf/system@412dc1544fb841d4909752bffd03e810-0000000000000001-00055511dc545ff3.journal (16.0M).
Deleted archived journal /var/log/journal/b78deda26d9b4740a6bc52f31d993baf/user-1000@15238a40d91c40fabe10b0bec7f53a23-00000000000009ef-0005550f5956d667.journal (25.0M).
Deleted archived journal /var/log/journal/b78deda26d9b4740a6bc52f31d993baf/system@412dc1544fb841d4909752bffd03e810-000000000000b685-00055528569c7855.journal (16.0M).
Deleted archived journal /var/log/journal/b78deda26d9b4740a6bc52f31d993baf/user-1000@15238a40d91c40fabe10b0bec7f53a23-000000000000b5cd-0005552854d417fd.journal (25.0M).
Vacuuming done, freed 82.0M of archived journals on disk.
It deleted some boots but not the ones I expected:
$ journalctl --list-boots --no-pager
-9 53baf678f0d749d6b390afea4a3ef96b Wed 2014-04-02 22:07:26 IDT—Wed 2014-04-02 22:46:08 IDT
-8 60a54132f5c8450d9b33a77819a037d1 Thu 2014-04-03 00:04:50 IDT—Thu 2014-04-03 12:30:21 IDT
-7 24b65a7e589d4479bf5020b98b8120b7 Wed 2015-04-01 03:10:01 IDT—Wed 2015-04-01 08:35:21 IDT
-6 43398d6d74c849bcb359a2d3963f4aaa Wed 2015-04-08 00:26:31 IDT—Wed 2015-04-08 00:26:31 IDT
-5 9a6ed1de771d4056b8be15409dfe06f4 Wed 2017-07-26 23:23:00 IDT—Tue 2017-08-01 00:04:56 IDT
-4 e3eba22761bc470ca9bd1d9004478ad1 Tue 2017-08-01 13:12:55 IDT—Tue 2017-08-01 18:40:22 IDT
-3 02d288fc10714e0592b24ea1cbaf60e4 Tue 2017-08-01 23:00:01 IDT—Wed 2017-08-02 01:29:53 IDT
-2 3230c51e8792424aaec920fa15fa96c0 Wed 2017-08-02 12:54:43 IDT—Wed 2017-08-02 18:09:52 IDT
-1 10621f49412c43cf976ab30555e6eb36 Wed 2017-08-02 22:37:36 IDT—Wed 2017-08-02 20:22:57 IDT
0 d2e38bd2d96b4027ac14e132638561fb Wed 2017-08-02 23:23:07 IDT—Thu 2017-08-03 00:09:37 IDTHow can I get rid of those ancient boots from 2014 and 2015? Why are they preserved?Update
As per derobert's suggestion listing the directory returns the following:
$ ls -lt /var/log/journal/b78deda26d9b4740a6bc52f31d993baf/ | tail
-rw-r-----+ 1 root systemd-journal 26214400 Aug 2 22:44 user-1000@15238a40d91c40fabe10b0bec7f53a23-0000000000026307-000555aff155c595.journal
-rw-r-----+ 1 root systemd-journal 16777216 Aug 2 22:44 system@412dc1544fb841d4909752bffd03e810-0000000000026340-000555b006ce5e6d.journal
-rw-r-----+ 1 root systemd-journal 26214400 Aug 1 15:04 user-1000@15238a40d91c40fabe10b0bec7f53a23-000000000001d951-0005558547686519.journal
-rw-r-----+ 1 root systemd-journal 8388608 Aug 1 15:04 system@412dc1544fb841d4909752bffd03e810-000000000001d952-000555854768665a.journal
-rw-r-----+ 1 root systemd-journal 26214400 Jul 30 12:10 user-1000@15238a40d91c40fabe10b0bec7f53a23-0000000000015378-0005553e36e51b82.journal
-rw-r-----+ 1 root systemd-journal 8388608 Jul 30 12:03 system@412dc1544fb841d4909752bffd03e810-0000000000015393-0005553e40691231.journal
-rw-r-----+ 1 root systemd-journal 8388608 Apr 18 2015 user-1002.journal
-rwxr-xr-x+ 1 root systemd-journal 8388608 Apr 8 2015 user-42.journal
-rwxr-xr-x+ 1 root systemd-journal 8388608 Apr 1 2015 user-995.journal
-rwxr-xr-x+ 1 root systemd-journal 8388608 Apr 3 2014 user-1001.journal | journalctl shows very old boots which are not recycled |
The new binary logs on Linux operating systems do not work in the way that the old binary logs did.
The old binary logs were /var/log/wtmp and /var/log/btmp. At system bootstrap an entry would be written to wtmp with the username reboot, and at shutdown an entry would be written to wtmp with the username shutdown. Finding the times that the system was rebooted was a matter of using the last reboot and last shutdown commands to print out these entries.
The new binary logs are the systemd journal, and they do not have such entries.
Instead, every journal record has a field named the boot ID. You can see this with the -o verbose option to journalctl. A boot ID is generated by the kernel at bootstrap, and systemd-journald applies the current boot ID, taken from the kernel, to every log record as it is adding it to the journal.
To implement the list-boots functionality, journalctl scans the entire journal, reading the timestamps and boot IDs of every record, and noting the earliest and latest timestamps associated with each unique boot ID.
Thus if parts of the journal are purged, or conversely stick around overlong, the apparent boot and shutdown times reported by journalctl will differ wildly from the actual boot and shutdown times.
/run/utmp is a table of terminal login records, with special entries for bootup and shutdown. These entries are read by uptime and who -b. They are written by programs such as systemd-update-utmp, an analogue of the FreeBSD utx command, which are run as parts of the startup and shutdown procedures. They are not run first or last, as the relevant services are not (and indeed cannot be) ordered absolutely first or last. There may be journal entries with the relevant boot ID that precede the time that systemd-update-utmp reboot is run, and similar journal entries that postdate the time that systemd-update-utmp shutdown is run.
Further reading
https://unix.stackexchange.com/a/383575/5132
https://unix.stackexchange.com/a/294206/5132 |
Here's a test script I used:
last_reboot=$(last reboot | grep 'still running' | awk '{for (i=5; i<=NF; i++) printf $i FS}' | awk '{for (i=1; i<=NF - 2; i++) printf $i FS}')
if [ "$last_reboot" ]; then
date -d "$last_reboot" '+last reboot: %Y-%m-%d'
fi
days=$(uptime | awk '{print $3}')
hours=$(uptime | awk '{print $5}' | sed -E 's/,$//')
h=$(echo "$hours" | cut -d: -f 1)
m=$(echo "$hours" | cut -d: -f 2)
date -d "- $days days - $h hours - $m minutes" '+uptime: %Y-%m-%d'
who -b | awk '{print "who: " $3}'
journalctl --list-boots | awk '$1 == "0" {print "journalctl: " $4}'
Locally, all four dates match.
I ran it on about 10 servers. last reboot doesn't report anything (probably, because wtmp gets rotated by logrotate). uptime and who -b match. And journalctl doesn't. What exactly does journalctl --list-boots report? Why can it not match what other tools report?
| Why `journalctl --list-boots` doesn't match what `uptime` and `who -b` report? |
What you want to use is the journalctl command. For example, if I want to get updated log entries on the service vmware, I would run this (f = follow, u = unit/service name):
journalctl -f -u vmware.service
Here's how you can get the full system journal. I use this command for my updated system logs (f = follow, x = add message explanations where available, b = since boot):
journalctl -fxb --no-hostname --no-full |
With normal syslog I can go to /var/log and run tail -F *log if I am not sure which log something is logged in.
Is there an equivalent for systemd?
Background
I am trying to debug a server. It crashes without leaving a trace. I am hoping that using the systemd version of tail -f *log that I can see log messages that are logged (but not yet written to disk) when the server crashes.
| 'tail -F *.log' but with systemd |
First of all Journal is a logging system and is part of systemd. Their existence is crucial when you need to know what happened.
As mentioned here, journalctl --file isn't that usable.As the journal files are rotated periodically, this form is not really usable for viewing complete journals.Now, whether you consider the files useless, that's for you to decide. Normally, too old logs are not worth keeping and you could delete them.
To do that, is best to use journalctl itself and its utility vacuum. For instance you can use
sudo journalctl --vacuum-time=3weeks
to delete all journal files that are more than 3 weeks old.
For more info check the man page with man journalctl.--vacuum-size=, --vacuum-time=, --vacuum-files=
Removes the oldest archived journal files until the disk space they
use falls below the specified size (specified with the usual "K", "M",
"G" and "T" suffixes), or all archived journal files contain no data
older than the specified timespan (specified with the usual "s", "m",
"h", "days", "months", "weeks" and "years" suffixes), or no more than
the specified number of separate journal files remain. Note that
running --vacuum-size= has only an indirect effect on the output shown
by --disk-usage, as the latter includes active journal files, while
the vacuuming operation only operates on archived journal files.
Similarly, --vacuum-files= might not actually reduce the number of
journal files to below the specified number, as it will not remove
active journal files.Also, I don't believe its worthwhile to periodically check this. Best thing you can do is set an upper limit by uncommenting and changing the following in /etc/systemd/journald.conf.
For example:
SystemMaxUse=4G
Then restart the service: sudo systemctl restart systemd-journald.
Use man journald.conf for more information.Edit:
As explained by @reinierpostThis question is not about regular old logs, it is about old logfiles
that do not appear to contain any logs at all (but they still occupy 8
MB each).Try running journalctl --verify. If files don't pass then the journal is corrupted and you should restart the service.
sudo systemctl restart systemd-journald
That should fix the problem for logs going forward.
As for why this happened in the first place, I don't know and it's not easy to figure out. And yes, corrupted files are probably junk. You could try this for a clean slate.
|
One of our Ubuntu 18.04 hosts was caught with 12 GB of *.journal files, far more than intended. Attempting to find out if they were worth keeping, I ran
journalctl --file $f
on each file older than today, which always resulted in either Failed to open files or --- No entries ---.
Am I correct to conclude that such files are junk and can be discarded?
If they are, why do they exist? What is a supported way to clean them up? Is it worthwhile to regularly check systems for their existence?
| How to detect and clean up junk journal files? |
There is a mapping between a unique connection name
and process accessible through busctl.
If it remains stable for a few seconds you could try your luck in trying to catch it as it occurs.
journalctl -f | \
  while read line ; do
    # extract the unique connection name (e.g. :1.58666) from the log line, if present
    sender=$(echo "$line" | grep -o 'sender=:[0-9.]*' | cut -d= -f2)
    if [ -n "$sender" ]
    then
      echo "$line"
      # look that connection name up in busctl's list of current bus peers
      busctl --no-pager | grep -F "$sender"
    fi
  done
(Based on this answer)
|
Currently my systemd journal is filling up with messages of the form:
Feb 01 16:40:31 host systemd[1]: Got message type=method_call
sender=:1.58666 destination=org.freedesktop.systemd1 object=/org
/freedesktop/systemd1 interface=org.freedesktop.DBus.Properties member=Get
cookie=2 reply_cookie=0 error=n/aThe only identifier seems to be the sender, which appears to change every few seconds (so I've failed at trying to map the sender to a PID), and this does not appear to happen on other systems on similar hardware or OS. Is there some way of identifying what is sending this messages (so that I can either stop that process/service/whatever or control the amount of messages sent).
| How do I work out which process/service/program is sending systemd dbus messages? |
Turns out I was using old version of systemd - 229, while latest is 236, and using newer journalctl fixes this problem. Unfortunately, 229 is the latest available systemd version for 16.04 LTS, so I do not see a sensible workaround for the issue (apart from building latest systemd in user-space and using freshly built journalctl binary directly).
|
I'm using Ubuntu 16.04 and I am seeing some strange behavior when analyzing journalctl logs.
Here's unfiltered output (I used json output to hopefully include all relevant fields):
$ journalctl -o json-pretty --since "2018-01-11 12:00:00" --until "2018-01-11 12:00:05"
{
"__CURSOR" : "s=bac060082d1c447a972958f176cdcec7;i=1d2a2e;b=a948062f9f0b4091ae299c0523a99111;m=337bc483bf3;t=5627c5f8a3bae;x=7fd2518d6b70b9d",
"__REALTIME_TIMESTAMP" : "1515661201914798",
"__MONOTONIC_TIMESTAMP" : "3537916935155",
"_BOOT_ID" : "a948062f9f0b4091ae299c0523a99111",
"_TRANSPORT" : "stdout",
"PRIORITY" : "6",
"SYSLOG_FACILITY" : "3",
"SYSLOG_IDENTIFIER" : "start.sh",
"_PID" : "6474",
"_UID" : "1000",
"_GID" : "1000",
"_COMM" : "start.sh",
"_EXE" : "/bin/bash",
"_CMDLINE" : "/bin/bash /usr/local/malibu/start.sh",
"_CAP_EFFECTIVE" : "0",
"_SYSTEMD_CGROUP" : "/system.slice/malibu.service",
"_SYSTEMD_UNIT" : "malibu.service",
"_SYSTEMD_SLICE" : "system.slice",
"_MACHINE_ID" : "f03ff2ad269ea529c82323dd57f29b00",
"_HOSTNAME" : "terminal",
"MESSAGE" : "2018-01-11 12:00:01.914 DEBUG: malibu.devices.validator.ValidatorPortService$ - health check succeeded"
}
As you can see, the output contains a single log message.
But if I add filter by unit name (-u malibu.service), this message disappears:
$ journalctl -u malibu.service -o json-pretty --since "2018-01-11 12:00:00" --until "2018-01-11 12:00:05"
The unit name is exactly equal to the _SYSTEMD_UNIT field in the unfiltered output. Why does journalctl not display it in the filtered version?
EDIT: Turns out I was using old version of systemd - 229, while latest is 236, and using newer journalctl fixes this problem. Unfortunately, 229 is the latest available systemd version for 16.04 LTS, so I do not see a sensible workaround for the issue (apart from building latest systemd in user-space and using freshly built journalctl binary directly).
| Why journalctl does not display log message if I use filtering by unit? |
You can check for information in the system logs.
on redhat:
/var/log/messages
on debian based distributions:
/var/log/syslog
You should also look for the logs of your particular service, which could be in something like
/var/log/<yourservice>.log
All those logs are rotated, so you can find older information inside the .log.1 and .log.x.gz files (you have to gunzip those, or use zless or vi).
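For example, on a RHEL-style system you could search both the current and rotated system logs for mentions of your service (exact file names depend on your logrotate settings):
grep -i myservice /var/log/messages
zgrep -i myservice /var/log/messages-*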
Good search
|
I'm using RHEL 7, and would like to know if and when a particular service 'myservice.service' went inactive. Unfortunately, using:
journalctl -u myservice.service
only seems to show output from my actual service, but at some point, the output stopped, and I don't know if that's due to:
the service being shut down, or
systemd still considering the service to be active, but the actual process itself was just no longer generating output due to a problem with the underlying code.
Is there any way to basically get a log of systemctl status events?
| How to know exactly when a Linux service went inactive? |
It's the priority that determines how journalctl displays messages.
Based on a quick test with logger :Messages of priority debug and info are displayed "normally".
Messages of priority notice and warning are displayed in bold white.
Messages of priority err, crit, alert, emerg are displayed in bold red.Edit:
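If you want to reproduce that quick test yourself, logger lets you pick the priority explicitly, for example:
logger -p user.info "a plain message"
logger -p user.warning "a bold white message"
logger -p user.err "a bold red message"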
To answer the comment about how to indicate a level just by writing to stdout, yes you can, just prefix your message with <n> where n is a number between 0 (emerg) and 7 (debug) representing the priority.
For example the following service writes an alert message, which will thus appear in red in journalctl output :
[Unit]
Description=Loth[Service]
ExecStart=/bin/echo "<1>Victoriae mundis et mundis lacrima."[Install]
WantedBy=multi-user.targetSee sd-daemon(3) and http://0pointer.de/blog/projects/journal-submit.html for more details.
|
Some error messages in journalctl show up in red and white. If I'm authoring my own systemd service, how can I format my messages such that they show up in red or white. It's a nice way of having errors stand out.
| How do I make journalctl messages show up in red? |
From man systemd-journald:FILES/run/log/journal/machine-id/*.journal, /run/log/journal/machine-id/*.journal~, /var/log/journal/machine-id/*.journal, /var/log/journal/machine-id/*.journal~
systemd-journald writes entries to files in /run/log/journal/machine-id/ or /var/log/journal/machine-id/ with the ".journal" suffix. If the daemon is stopped uncleanly, or if the files are found to be corrupted, they are renamed using the ".journal~" suffix, and systemd-journald starts writing to a new file.
When systemd-journald ceases writing to a journal file, it will be renamed to "[emailprotected]" (or "[emailprotected]~"). Such
files are "archived" and will not be written to any more.
So basically, as you can see above, systemd-journald writes these files when the files are corrupted or if the daemon is stopped uncleanly.
Regarding:
Can I delete these files
As mentioned by user @intelfx, you should remove these files if you are not interested in the journal logs provided by these files.
Also as mentioned by same user the files are still readable. You can, for example, read a journal file by using:
sudo journalctl --file /path/to/file.journal
sudo journalctl --file /path/to/file.journal~Anyways, if you do not remove them, the systemd-journald will automatically remove these files, as you can see in the same man systemd-journald:systemd-journald will automatically remove the oldest archived journal files to limit disk use. See SystemMaxUse= and related settings in journald.conf(5). |
I have some files in my /var/log/journal/f7e928ba68a9449e85bd828252981fc6/ that have a .journal~ extension.
Can I remove these?
example:
-rw-r-----+ 1 root systemd-journal 16777216 Dec 21 08:46 [emailprotected]~
Can I delete these files?
| What are the .journal files that have a ~ on the end? |
SELinux is preventing your program to run: the AVC denial states type=AVC msg=audit(08/16/2021 20:14:04.216:698) : avc: denied { execute } for pid=2568 comm=(ster_myservice) name=program dev="dm-2" ino=137 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0.
This means that systemd, running under the init_t process context, isn't allowed to start your program, labeled as user_home_t.
To mitigate, move your program over to a standard binary directory, such as /usr/local/bin, and then remember to relabel, using restorecon -Rv /usr/local/bin.
Alternatively, if you need your program to run out of your home directory, compile a custom SELinux policy module:
ausearch -m avc -ts recent --comm ster_myservice | audit2allow -a -M ster-myservice
semodule -i ster-myservice.pp |
I am trying to create a service to run on boot. The service is a program I wrote in C++ and compiled and is located in my users home directory. The program opens some UDP sockets and sits in an infinite loop so it does not exit automatically. I can run the program manually and everything works as expected but when I run systemctl start myservice then check the status I see that it is not running. Error results below + other useful information. FYI the operating system is CentOS Stream.
Output from systemctl status myservice
myservice.service - my serivce
Loaded: loaded (/etc/systemd/system/myservice.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since <redacted unnecessary timestamp>
Process 2101 ExecStart=/home/user/program (code=exited, status=203/EXEC)
Main PID: 2101 (code=exited, status=203/EXEC)Error Message from journalctl
myservice.service: Main process exited, code=exited, status=203/EXEC
myservice.service: Failed with result 'exit-code'
myservice.service: Service RestartSec=2s expired, scheduling restartSystemd Unit File
[Unit]
Description=my service
After=network.target
[Service]
Type=simple
ExecStart=/home/user/program
User=user
WorkingDirectory=/home/user/
Restart=always
RestartSec=2
KillMode=process
[Install]
WantedBy=multi-user.targetI understand that the 203 status usually means the file does not exist or does not have proper permissions so below is output to prove it is neither of those issues (hopefully)
Output from ls -laZ /home/user/program
-rwxrwxrwx. 1 root root unconfined_u:object_r:user_home_t:s0 803168 Aug 14 23:35 /home/user/program
Output from sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 33Output from ausearch -ts recent -m avc -i
type=PROCTITLE msg=audit(08/16/2021 20:14:04.216:698) : proctitle=(ster_myservice)
type=SYSCALL msg=audit(08/16/2021 20:14:04.216:698) : arch=x86_64 syscall=execve success=no exit=EACCES(Permission denied) a0=0x5572ff82e7a0 a1=0x5572ff6ff6d0 a2=0x5572ff7f54b0 a3=0x1 items=0 ppid=1 pid=2568 auid=unset uid=user gid=user euid=user suid=user fsuid=user egid=user sgid=user fsgid=user tty=(none) ses=unset comm=(ster_myservice) exe=/usr/lib/systemd/systemd subj=system_u:system_r:init_t:s0 key=(null)
type=AVC msg=audit(08/16/2021 20:14:04.216:698) : avc: denied { execute } for pid=2568 comm=(ster_myservice) name=program dev="dm-2" ino=137 scontext=system_u:system_r:init_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0 | Systemd Service Failing with exit-code status=203/EXEC |
For some reason journalctl does not use the LESS environment variable but uses SYSTEMD_LESS:
$SYSTEMD_LESS
Override the options passed to less (by default "FRSXMK").
Since less is already the default pager you can configure
export SYSTEMD_LESS=-i
or use the same options as less:
export SYSTEMD_LESS="$LESS" |
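Note that SYSTEMD_LESS replaces the default flags rather than adding to them, so if you want to keep journalctl's default paging behaviour and only add case-insensitive search, append i to the default set:
export SYSTEMD_LESS=FRSXMKi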
When I search in journalctl with / the search is case sensitive unless I switch to case insensitive with -i first.
How can I configure journalctl so that searches are case insensitive by default?
| journalctl search case insensitive |
Seems to be related to latest kernel (5.6.x).
If you tail the journal with verbose level
sudo journalctl -f -o verbose
You can see _TRANSPORT=kernel
Sun 2020-04-12 09:32:38.852081 CEST [s=ca0e47a50a2047e483013075418f4a72;i=1d58f89;b=343f4563d34649baad6f57aacc0320a1;m=e9cad4480;t=5a312f895b1f1;x=16d4d0c3d713857c]
_MACHINE_ID=*******
_HOSTNAME=*******
_TRANSPORT=kernel
PRIORITY=4
SYSLOG_FACILITY=1
MESSAGE=testing the buffer
_BOOT_ID=343f4563d34649baad6f57aacc0320a1
_SOURCE_MONOTONIC_TIMESTAMP=62758810437
There is indeed a bug listed for openSUSE at https://bugzilla.opensuse.org/show_bug.cgi?id=1168664 (although it is kernel related, and I'm on Ubuntu), fixed as a minor leftover at https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=41c55ea6c2a7ca4c663eeec05bdf54f4e2419699
Fortunately there is nothing to worry about; it will disappear when they merge the fix into the packaged kernel...
|
In the past few days I have had a message in the system log/journal over a thousand times within a second. How can I find out where it originated?
# journalctl --system | grep "testing the buffer" | uniq --count
1522 Apr 06 13:49:31 laptop unknown: testing the buffer
So, 1522 times the same message by "unknown". Is this possibly malicious?
# find / -xdev -type f -print0 | xargs -0 grep "testing the buffer" | grep -v /var/log/journal
Exit 1
No system file contains that string!
The system uses systemd-journald for system logs.
| How can find out where a system message originates ("testing the buffer" in dmesg / journalctl / messages)? |
I have a simple Python snippet managed by a systemd service which logs to the rsys[l]ogd daemon […]
No you haven't.
What you have is a service that logs to the systemd journal. The server listening on the well-known /dev/log socket that your Python program is talking to is not rsyslogd. It is systemd-journald. rsyslogd is attached to the other side of systemd-journald, and your Python program is not talking to it.
From this, it should be apparent that the only way to not send stuff via systemd-journald is to use some other route to rsyslogd, not the well known socket that your Python library uses by default. That all depends from how you have configured rsyslogd. It is possible that you have turned on a UDP server with the imudp module, in which case you could tell your Python program to use that by using a different Python library that speaks to such a UDP server. (The Python syslog library is hardwired to use the well-known local socket.)
Or (and better, given that you have to be careful about not opening a UDP service to the world outwith your machine) you could have given rsyslogd a second, not well known, AF_LOCAL socket to listen to by configuring this in the imuxsock module's configuration. Again, you'll have to tell your Python program to use that and use a different Python library.What exactly you do in your Python program is beyond the scope of this answer.
Further readinghttps://unix.stackexchange.com/a/294206/5132 |
I have a simple Python snippet managed by a systemd service which logs to the rsysogd daemon where I've defined a configuration file to put it to a syslog server with a format I've defined. This is working fine so far.
In the code below, I'm passing the argument as the string I want to log on the server. I'm using this code below as a module and using it for logging alone, the actual script uses this for logging purposes.
#!/usr/bin/env python
import syslog
import sys

syslog.openlog(facility=syslog.LOG_LOCAL0)
syslog.syslog(syslog.LOG_INFO, sys.argv[1])Since the application is managed by systemd it is making a copy of the syslog available when seen from the journalctl -xe and the journalctl -u <my-service> which I do not wish to happen because I've other critical information I'm logging in journal logs.
The service definition is
[Unit]
Description=Computes device foobar availability status
[Service]
Type=simple
EnvironmentFile=/etc/sysconfig/db_EndPoint
ExecStart=/usr/bin/python /opt/foobar/foobar.py
WatchdogSec=60
RestartSec=10
Restart=always
LimitNOFILE=4096
[Install]
WantedBy=default.target
and in the /etc/systemd/journald.conf file, I've not enabled any of the options to be available. I looked up this journald.conf documentation to use ForwardToSyslog=no and did a restart of journald service as
systemctl restart systemd-journald
and also restarted my service unit, but I still see the logs going to the syslog server and also to the journal logs. What option am I missing here?
| Prevent syslogs from being logged under journalctl |
You can however convert those fields into unix timestamps.
I.e:
journalctl --list-boots | awk '{ d2ts="date -d \""$3" "$4" " $5"\" +%s"; d2ts | getline $(NF+1); close(d2ts)} 1' |
journalctl has the -o short-unix flag that I can use to change the output date format on stuff like -t systemd-sleep.
But the only way I've found to list boots is --list-boots, and this doesn't seem to obey the -o flag.
Is there a way to make journalctl list boots with unix timestamps? Since systemd is here to stay I fear other methods might break in the future, but I'm open to those suggestions too.
| List boots with unix timestamps via journalctl |
journalctl keeps track of the boot_id attached to logs, and when that changes, indicates that the system rebooted.
The boot_id is generated by the kernel, and can be retrieved from /proc/sys/kernel/random/boot_id.
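You can check both pieces yourself; for example, the following shows the kernel's current boot ID and the _BOOT_ID field that journald stamps on each entry (the journal stores the same UUID without dashes):
cat /proc/sys/kernel/random/boot_id
journalctl -o verbose -n 1 | grep _BOOT_ID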
|
I've noticed, on machines where the journalctl logs are saved on disk, that on a reboot, I get a line between the message before and after the reboot happened like so:
blah
blah
blah
-- Reboot --
blah
blah
blahHow does journalctl know to add that line at that location?
| How is journalctl able to add the line with the "-- Reboot --" log message? |
Specify it as a raw filter, as you already know the fields:
journalctl PRIORITY=2 + PRIORITY=6
While the filter syntax is not very expressive, it does support one level of 'OR' operation using +. (For example, journalctl A B C + D E means "(A && B && C) || (D && E)".)
|
I would like to show only crit and info and nothing in between.
I tried:
journalctl -p 2..2 -p 6..6
But it doesn't work. The second argument seems to override the first.
The following code produces a syntax error:
journalctl -p 2,6
How can I retrieve only two priorities without any values in between?
I'm searching for a more concise solution than this.
journalctl -o json | jq --argjson p '{"2":"CRIT","6":"INFO"}' --raw-output 'select(.PRIORITY == "2" or .PRIORITY == "6") | "\(.__REALTIME_TIMESTAMP | tonumber / 1000 / 1000 | strflocaltime("%Y-%m-%d %H:%M:%S")) \($p[.PRIORITY]) \(.MESSAGE)"' | How to tell journalctl do display exactly two priorities? |
Journal header limits reached or header out-of-date, rotating is not an error, it's an informational message.
It means systemd-journald adheres to the journal limit you've set.
|
What is the cause of the error ”Journal header limits reached or header out-of-date, rotating” and what can be done to fix it?
journalctl (systemd-journald[###]) reports journal errors:
Data hash table of /run/log/journal/####/system.journal has a fill level at 75.1 (3273 of 4359 items, 2510848 file size, 767 bytes per hash table item), suggesting rotation
and
/run/log/journal/####/system.journal: Journal header limits reached or header out-of-date, rotating
Guest
alma Linux 9.1 kernel 5.14.0-162.12.1.el9_1.x86_64 Fresh install, Media and Hash OK
Security Policy [DRAFT] DISA STIG for Red Hat Enterprise Linux 9
Hypervisor VirtualBox 7.0.6 r155176
df -h
No Partition Use % above 30%
Host Windows 10 Pro 10.0.19044 Build 19044
Possibly Related
Chrony Time Sync Error on boot (no error on reboot, internet connection ok: occasional hang on connecting)
/etc/chrony.conf includes
pool 2.almalinux.pool.ntp.org iburst maxpoll 16
server [0..3].rhel.pool.ntp.org iburst
There is ample disk space, time sync seems sufficient, the system is up to date, no new software has been installed.
| How to resolve journalctl error - journal header limits reached or header out of date |
All entries for which you will find a relation using journalctl -u <myapp.service> correspond to the contents of the systemd journal for a given host. HOWEVER (from man journalctl):Output is interleaved from all accessible journal files, whether they are rotated or currently being written, and regardless of whether they belong to the system itself or are accessible user journals.The issue may boil down to the fact that on the "new VM" (a remote PaaS for instance, or even a VM on a local host) your $USER on that VM may not be granted access to anything but his/her own, private journals. For any other journals (provided your migrated log files are located in the right place) your $USER will need to be part of a few special authorized groups or to be root.
That would apply to any imported journal files since those would not be recognized as your $USER's private journals on the new host. Special groups include systemd-journal, adm, and wheel. Ubuntu may include others.
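If the user turns out to be missing from those groups, adding it is a one-liner (log out and back in afterwards for it to take effect), for example:
sudo usermod -aG systemd-journal ubuntu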
To access your migrated files you should start making sure that:you migrated your journals to the single central journal in /{run,var}/log/journal/. I'm not certain which of /run or /var begins the path on your Ubuntu VM; you will have to see that by yourself. In any case see journald.conf(5), by issuing $ man 5 journald.conf in a terminal. You will find that your files may be at:
/run/log/journal/machine-id/*.journal,
/run/log/journal/machine-id/*.journal~,
/var/log/journal/machine-id/*.journal,
/var/log/journal/machine-id/*.journal~From the Ubuntu man page:systemd-journald writes entries to files in either /run/log/journal/machine-id/ or /var/log/journal/machine-id/ with the ".journal" suffix. If the daemon is stopped uncleanly, or if the files are found to be corrupted, they are renamed using the ".journal~" suffix, and systemd-journald starts writing to a new file.[...]
/run/... is used when /var/log/journal is not available, or when Storage=volatile is set in the VM's /etc/systemd/journald.conf configuration file.your VM's $USER is member of the wheel group (for instance).Other recommendations and steps may apply but I do not have enough info to run on to proceed. Please report in a comment, so I may complete this answer if need be.
|
I have an app registered as a service in Ubuntu 16.04
If I type:
journalctl -u myapp.service
I can see the logs for my app.
I am moving my app to a new VM where the same service will be in place. Is it possible to migrate the log files to the new VM so that if I type journalctl -u myapp.service it will show all my old logs and any new logs seamlessly?
I have tried to swap in the contents of the old /var/log/journal directory into the new VM, and restarting the systemd-journald service, but it doesn't seem to work.
More Details:
The logs are stored in /var/log/journal/<machine-id>/
the directory contents look like this:
$ ls /var/log/journal/05b6b1e76c6040cc99b4d34977a98eca/
[emailprotected]~
[emailprotected]~
[emailprotected]~
[emailprotected]~
system@9b08b416ae4c47a78c24b4ed77c39ea2-0000000000000001-0005b3c2d1380bae.journal
system@9b08b416ae4c47a78c24b4ed77c39ea2-0000000000000248-0005bccbf3f7d7c1.journal
system@e5c655526bb54aa886764039cd37f897-0000000000000001-0005c4ce02f66caf.journal
system.journal [emailprotected]~
[emailprotected]~
[emailprotected]~
[emailprotected]~
user-1000@7b4df282ccfe4816a30db088f2621493-00000000000000ab-0005b3c2d1be9c3b.journal
user-1000.journal
There are no files in /run/log/
both old and new VMs share the same service, service-running user, and machine-id (this is because ultimately both VMs stem from (were copied from) the same VM just at different times and with minor software/configuration settings)
the user I access the logs with, 'ubuntu', has the same groups in both VMs:
$ groups
ubuntu adm dialout cdrom floppy sudo audio dip video plugdev netdev lxd
the journal version on both VMs is:
$ journalctl --version
systemd 229
+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN
I am not looking to hold the logs longer than what is configured in the journal settings
What's Working
replacing the contents of the journal folder entirely
adjusting file ownership to root:systemd-journal as they were originally
'root' can now see the full log
'ubuntu' can only see some 'current' logs, which are confusing, as they are not what was copied over | Can I migrate ubuntu journald logs? |
I would use an empty -b,--boot option to journalctl in order to request "the current boot", then -n 0 to request zero lines of output, which leaves just the header:
journalctl -b -n 0
Example output:
-- Logs begin at Wed 2021-02-10 17:46:08 PST, end at Thu 2021-02-11 15:36:01 PST. --Or if the -n 0 fails to output the proper information try the number 1.
~# journalctl -b -n 0
-- No entries --
~# journalctl -b -n 1
-- Logs begin at Mon 2021-02-08 20:24:14 AST, end at Thu 2021-02-11 21:33:56 AST. --
Feb 11 21:33:56 zeus-H370M systemd[1]: anacron.service: Succeeded. |
What should I look for in the systemd journal to find when the latest boot happened?
| What will a boot look like in the systemd journal (journalctl)? |
sudo journalctl -b -u 'systemd-fsck*' The credit for this answer belongs here: https://unix.stackexchange.com/a/436033/29483
A second answer on the linked question notes that this method will not work on all systems, even if the system uses systemd. One reason is if the initramfs, which runs fsck on the root filesystem (and /usr), does not use systemd. In this case the initramfs might arrange to save its fsck logs somewhere else, not in the journal.
You can also use sudo systemctl status 'systemd-fsck*', and it will work perfectly well according to my analysis. Although I cannot think of a reason why you would prefer this. systemctl status only shows the last ten messages for each unit by default.
|
Is there a reasonable procedure for a system administrator to view all the fsck messages?
On my current Fedora 29 system, I can view all the fsck messages from my current boot like this:
sudo journalctl -b /usr/lib/systemd/systemd-fsck
However, it is a hack that assumes fsck writes messages to stdout / stderr. It does not allow for a hypothetical fsck which detects that it is run from systemd, and sends log messages through the syslog or journald socket, e.g. in order to set an appropriate "priority" for each message.
Is there a cleaner method, that works even if some fsck sends its log messages directly to journald?
| How to show journal messages from all `fsck` units |
I don't think the boot ID is actually changing that rapidly. I think you're viewing logs from several different boots at once, and they're getting mixed together.
This can happen if your system doesn't have a battery-powered real-time clock.
journalctl uses the wall clock time to sort log messages that came from different boots. If the wall clock resets on each boot, it will look like the messages from different boots happened at the same time, and journalctl will interleave them confusingly.
See Lennart Poettering's comment on systemd feature request #662 (emphasis mine):
So here's what happens: each time you reboot, you start with a date in 1970, and a newly randomized boot id. A couple of messages are generated this way for each boot.
During display journalctl will now see this data, and try to make sense of it, and interleave it. Interleaving means it needs to sort the lines together, and put older data before newer data. To do that it tries to compare sequence numbers first. Sequence numbers are maintained in memory, and are reset to zero on each boot. They hence can be compared only if the boot ID matches between the two lines to compare. Since the boot ID is different for your reboots (obviously, and rightfully), this logic of ordering cannot be used. Next, journalctl tries to order things by the monotonic timestamp (i.e. the time passed since boot). Such ordering only works under the same conditions: we can only ordering lines of the same boot via the monotonic time. Which means we resort to the last way of comparing two lines: via the wallclock time. But that is always 1970, and hence results in the interleaving you see, where the lines are all mixed up.One way to confirm this is to run journalctl --list-boots and check the date ranges. If they overlap, journalctl is using bogus timestamps.
|
While looking through some archived journal files from another machine, I noticed that some log entries have different __BOOT_ID values with extremely narrow time gaps in between. For an example, log entries that are milliseconds apart would have different __BOOT_ID values. This should not be possible as the machines could not restart within that small of a time gap.
When I run journalctl -o verbose --directory <dir path> | grep -B 30 -A 30 -- "-- Reboot --", I can see the following real example, quoted below, with two different boot ID values for log events that are 10ms apart.
Wed 2019-11-13 21:35:58.469925 ...
_TRANSPORT=kernel
PRIORITY=6
SYSLOG_FACILITY=0
SYSLOG_IDENTIFIER=kernel
_BOOT_ID=fec227a60ef24474aacd023d6c02733f
...
...
...
MESSAGE=spi1.0: ttyMAX1 at I/O 0x20 (irq = 30, base_baud = 3000000) is a MAX3109
-- Reboot --
Wed 2019-11-13 21:35:58.470352 ...
_SOURCE_MONOTONIC_TIMESTAMP=0
_TRANSPORT=kernel
PRIORITY=6
SYSLOG_FACILITY=0
SYSLOG_IDENTIFIER=kernel
MESSAGE=Booting Linux on physical CPU 0x0
_BOOT_ID=21b95aabab034009a19d1b7deac80327
...
...
I've tried to search for what could possibly cause the boot ID to change rapidly like that, without any success. Looking at the systemd source for this version (v243) shows that sd_id128_get_boot(), which seems to be the function used to read the boot ID, just reads the kernel-generated value from the file:
if (sd_id128_is_null(saved_boot_id)) {
r = id128_read("/proc/sys/kernel/random/boot_id", ID128_UUID, &saved_boot_id);
if (r < 0)
return r;
}
*ret = saved_boot_id;
return 0;The end result is a list of reboots that show up in the logs that might not be reboots after all. Interestingly journalctl --list-boots will not show these as reboots at all (currently trying to understand get_boots() implementation).
Appreciate any ideas, inputs if anyone has seen this behavior earlier. I understand that jounalctl --list-boots is the go to place to get the list of reboots, but I'm trying to analyze the logs with a specific set of reboot information in mind. These false positives pollute the results out of the scripts I'm trying to write.
| Rapidly changing boot_id values in systemd |
According to the man page, fsck's error code 4 means "Filesystem errors left uncorrected".
This is not good. It means that the file system is corrupted beyond what is safe to automatically repair on boot, so manual intervention is required.
As the error message on your screen says, you need to manually run fsck to attempt to fix the errors. You can probably do that from the initramfs root prompt (try fsck /dev/sda6). If not, you'll need a rescue disk like gparted. IMO, Clonezilla also serves as a good rescue disk - it's got just about every file-system related utility you could possibly need.
Alternatively, since your Linux box is running systemd, you could interrupt the GRUB boot process and edit the linux kernel's command line to make system perform a forced fsck:before grub times out and boots a kernel, press a key to cancel the timeout countdown.
move the cursor as required to the entry you want to edit (probably the default menu entry).
press e to enter grub's editor.
scroll through the editor and look for the line that starts with linux followed by a whole bunch of kernel options.
add fsck.mode=force fsck.repair=preen to the end of that line.
press F10 or Ctrl-X to accept the changes and boot that menu entry.Note: these grub edits are not persistent across reboots. They're temporary one-time changes for this boot session only.
Also note: it's possible that your filesystem is corrupted beyond repair. fsck will do its best to get the fs back to a consistent state, but it's not magic: it can't repair or restore corrupted data.
|
Everything worked fine last time I used Solus, and now I cannot boot because of the error below.
Let's try to solve it without using a live USB.
Failed to start File System Check on /dev/disk/by-partuuid/5ff8cfe6-53e1-43b1-9275-dc6f24ad812a | Solus distro is not booting anymore
Probably something went wrong during the migration process from tracker to tracker3.
Clean the Tracker database ($ tracker3 reset) and any leftover files ($ rm -rf ~/.cache/tracker{,3}).
|
I was doing the recommended maintenance work as advised here. When I ran the command sudo journalctl -p 3 -xb, I saw that tracker-extract is repeatedly crashing and dumping core. I am not sure what to do. Please help me.
Following is a part of the output generated by sudo journalctl -p 3 -xb.
I am using Arch Linux, with the GNOME DE.
| tracker-extract repeatedly core dumping and crashing |
There's no such option. The time formatting options are limited (either locale-dependent, but still 24-hour, or Unix timestamps, or ISO 8601 timestamps).
I'd just check the current date in 24-hour format (and put it in a prompt or a widget somewhere):
% date +%T
18:40:59
But you can also parse the journalctl output and convert the timestamps:
# journalctl --output=short-unix -b | awk '/^[0-9]/ {sub(/^[0-9]+\.[0-9]+/, strftime("%F %I:%M:%S %p", $1))} 1' | head
-- Logs begin at Sat 2018-08-25 19:16:53 JST, end at Sun 2019-06-23 18:37:04 JST. --
2019-06-23 06:37:50 pm cthulhu kernel: Linux version 5.1.11-arch1-1-ARCH (builduser@heftig-31251) (gcc version 9.1.0 (GCC)) #1 SMP PREEMPT Mon Jun 17 18:56:30 UTC 2019
2019-06-23 06:37:50 pm cthulhu kernel: KERNEL supported cpus:
2019-06-23 06:37:50 pm cthulhu kernel: Intel GenuineIntel
2019-06-23 06:37:50 pm cthulhu kernel: AMD AuthenticAMD
2019-06-23 06:37:50 pm cthulhu kernel: Hygon HygonGenuine
2019-06-23 06:37:50 pm cthulhu kernel: Centaur CentaurHauls
2019-06-23 06:37:50 pm cthulhu kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'We tell journalctl to print Unix timestamps, and use awk to replace that with a timestamp using 12-hour format.
|
Looking through the Archwiki on journalctl and systemd I do not see a means of configuring the system log to display in a 12 hour format with am and pm. The 24 hour format works fine; however, I need to convert from 24 hour to 12 hour to check the log against the current system time. Is there a systemd config file or journalctl argument (I know about -o) that causes journalctl to display 12 hour format with am and pm?
| journalctl 12 hour format? |
The simplest thing to do is to pipe the output through 2>&1 | grep -v 'freed 0B'.
If cron runs a command that produces zero lines of output, then cron will not send an email.
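Putting the two together, the crontab entry from the question just needs the filter appended, along these lines:
0 16 * * * journalctl --vacuum-time=10d 2>&1 | grep -v 'freed 0B'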
|
When cron runs
0 16 * * * journalctl --vacuum-time=10dI get an email with the content like
Vacuuming done, freed 0B of archived journals from /var/log/journal.
Vacuuming done, freed 0B of archived journals from /var/log/journal/68eb3115209f4deb876284bab504772b.
Vacuuming done, freed 0B of archived journals from /run/log/journal.
Sometimes there are some bytes freed, but how do I suppress those emails if there were 0B freed?
| suppress cron emails from cleaning systemlog if 0B were cleaned |
Now I discovered that when and after using command "cat" to read .journal file, gibberish will appear on the console display
Indeed. It's a binary file.
and I have no choice but to reboot to clear out the mess.
This is not correct. You can usually fix this by using the Control-j, reset, Control-j combination (see here for more details about the "messed up terminal" in these situations), or just open a new terminal.
So, is command "journalctl" the only and direct way to browse the .journal file normally?
As it is a binary file, the only choice is to use a program that can parse that binary format. This is what journalctl does out of the box. Alternatively, you could use a custom script: in Python, for example, you can use the systemd.journal library (https://stackoverflow.com/questions/26331116/reading-systemd-journal-from-python-script), although I'm not sure it is portable to Windows.
You could also choose a different approach: copy the files somewhere (a virtual machine, even a Linux container is probably enough) and then run journalctl -D <path to exported journal>.
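Concretely — the path below is just an example for wherever you copied the extracted files on that Linux machine or container — it could look like:
journalctl -D /path/to/extracted/journal -o short-precise
journalctl --file /path/to/extracted/journal/system.journal > system.txt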
|
Well, the answer is simple - using command "journalctl".
But let me describe the problem first,
I encountered an ubuntu server crash event and some log/.journal files were extracted successfully; the next step in dealing with this event is to read/analyze these files.
There were 2 requests being discussed:
A. How to read the .journal files under Windows environment
B. other way to read the .journal files than using Windows OS
The first one is the most important request, but I skipped it because it seemed too difficult, and using another ubuntu host with the command "journalctl --file /path_to_the_file" can achieve the goal perfectly (and also export the content to a .txt file via the ">" symbol).
Now I discovered that when and after using command "cat" to read .journal file, gibberish will appear on the console display and I have no choice but to reboot to clear out the mess.
Command "vi" doesn't work either.
So, is command "journalctl" the only and direct way to browse the .journal file normally?
| Commands to read the content of .journal file? |
You have a modern multi-core processor and your distribution uses systemd. As a result, at boot time, many things will happen in parallel, sometimes with no fixed ordering. Some of the log messages might be slightly out of order with each other, if they used different routes (native systemd journaling vs. the kernel's audit subsystem vs. the syslog system calls).
I'll go through the messages in mostly-sequential order, but grouping some similar messages together.
Jan 22 01:42:53 hostname kernel: Linux version 5.10.9-arch1-1 (linux@archlinux) (gcc (GCC) 10.2.0, GNU ld (GNU Binutils) 2.35.1) #1 SMP PREEMPT Tue, 19 Jan 2021 22:06:06 +0000
Jan 22 01:42:53 hostname kernel: Command line: BOOT_IMAGE=/vmlinuz-linux root=UUID=d825234f-4397-494f-9c61-e719a008ecbd rw loglevel=3 quietThese are normally the very first lines the Linux kernel outputs right after the it has begun executing. At this point, the system clock is just using the time value the system firmware initialized it to, which usually originates from the battery-backed clock chip. Note the loglevel=3 quiet kernel options: these will silence a lot of early boot messages, so quite a lot can happen after this and before the next messages.
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input15
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input16
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input17
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input18
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input19
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input20
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=12 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input21
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Front Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input22
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Rear Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input23
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Line as /devices/pci0000:00/0000:00:14.2/sound/card0/input24
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Line Out as /devices/pci0000:00/0000:00:14.2/sound/card0/input25
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Front Headphone as /devices/pci0000:00/0000:00:14.2/sound/card0/input26These messages are from the input subsystem registering the plug detection circuits on various audio connectors as inputs. This means the actual sound drivers have already been loaded at this point. Sound chips are not usually considered "essential hardware", so their drivers are not typically built into the kernel and not always even included in the initramfs, so at this point we are probably quite a bit farther along the boot process.
It is possible that the boot process may have already activated the network interface(s) and fetched more accurate time information from an NTP server, which might explain the apparent time jump backwards. Or the boot process might take the time of the last disk write operation as the best guess of the current time, if no battery-backed clock is available and NTP servers cannot be reached yet.
Jan 22 01:38:09 hostname audit[425]: NETFILTER_CFG table=filter family=10 entries=144 op=xt_replace pid=425 comm="ip6tables-resto"
Jan 22 01:38:09 hostname audit[426]: NETFILTER_CFG table=filter family=10 entries=160 op=xt_replace pid=426 comm="ip6tables-resto"
Jan 22 01:38:09 hostname audit[435]: NETFILTER_CFG table=filter family=10 entries=168 op=xt_replace pid=435 comm="ip6tables-resto"Something (probably ip6tables-restore) is causing IPv6 netfilter rules to be added.
Jan 22 01:38:09 hostname kernel: EDAC amd64: F15h detected (node 0).
Jan 22 01:38:09 hostname kernel: EDAC amd64: Error: F1 not found: device 0x1601 (broken BIOS?)These messages are from the EDAC (Error Detection And Correction) subsystem. It's mostly useful on systems with ECC error-correcting memory only (i.e. servers and possibly high-grade workstations). Your processor seems to have the necessary features to work with ECC memory, but your system firmware apparently does not have the necessary information for the EDAC subsystem to map memory/bus errors to a physical component (e.g. a memory module slot). Perhaps your motherboard is not capable of using ECC memory? If that's true, then the EDAC subsystem might not be useful to you, and you could blacklist the EDAC modules to skip loading them at all.
Jan 22 01:38:09 hostname systemd-udevd[258]: controlC0: Process '/usr/bin/alsactl restore 0' failed with exit code 99.
Jan 22 01:38:10 hostname systemd-udevd[267]: controlC1: Process '/usr/bin/alsactl restore 1' failed with exit code 99.For some reason, the attempts to restore sound card volume settings failed for both the motherboard's integrated sound chip and the GPU's HDMI/DP connectors. Perhaps because alsactl store has never been used to persistently store the current settings? This might be important only if you use raw ALSA and/or text mode: most GUI desktop environments and/or Pulseaudio will usually override the ALSA volume settings by user-specific settings anyway.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in Create System Users being skipped.systemd is skipping some conditional boot items, as the requisite conditions are not satisfied.
Jan 22 01:38:10 hostname kernel: nvidia: module license 'NVIDIA' taints kernel.
Jan 22 01:38:10 hostname kernel: Disabling lock debugging due to kernel taint
Jan 22 01:38:10 hostname kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 237
Jan 22 01:38:10 hostname kernel:
Jan 22 01:38:10 hostname kernel: nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
Jan 22 01:38:10 hostname kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 460.32.03 Sun Dec 27 19:00:34 UTC 2020The proprietary NVIDIA GPU driver is being loaded.
Jan 22 01:38:10 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'This looks like the root filesystem is mounted and the "real" systemd start-up is confirmed by the audit subsystem. Before this point, the boot process was handled by a "mini-systemd" within the initramfs. This might explain why some messages seem to be repeated: the initramfs mini-systemd made those checks once, and now the real systemd does them again.
Jan 22 01:38:10 hostname kernel: random: crng init done
Jan 22 01:38:10 hostname kernel: random: 7 urandom warning(s) missed due to ratelimiting
Jan 22 01:38:10 hostname systemd[1]: Finished Load/Save Random Seed.systemd has restored the random seed from the last startup or orderly shutdown.
Jan 22 01:38:10 hostname kernel: nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 460.32.03 Sun Dec 27 18:51:11 UTC 2020
Jan 22 01:38:10 hostname kernel: [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
Jan 22 01:38:10 hostname kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 0More components of the proprietary NVIDIA GPU driver are being started.
Jan 22 01:38:10 hostname systemd[1]: Created slice system-systemd\x2dbacklight.slice.
Jan 22 01:38:10 hostname systemd[1]: Starting Load/Save Screen Backlight Brightness of backlight:acpi_video0...
Jan 22 01:38:10 hostname systemd[1]: Finished Load/Save Screen Backlight Brightness of backlight:acpi_video0.
Jan 22 01:38:10 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-backlight@backlight:acpi_video0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? termi>The firmware ACPI tables include a backlight control interface, and systemd is starting up a service that remembers the current backlight setting from one boot to the next.
Jan 22 01:38:18 hostname kernel: ata2: softreset failed (1st FIS failed)
Jan 22 01:38:28 hostname kernel: ata2: softreset failed (1st FIS failed)This suggests problems with the second SATA device. Your second snippet seems to have more information about this.
From the second snippet:
Jan 22 01:38:08 hostname kernel: ACPI BIOS Warning (bug): Optional FADT field Pm2ControlBlock has valid Length but zero Address: 0x0000000000000000/0x1 (20200925/tbfadt-615)Your ACPI system firmware has incomplete power management information. A BIOS update might fix this, but since it's just a warning, the kernel can live with the problem.
Jan 22 01:38:08 hostname kernel: ata2.00: exception Emask 0x0 SAct 0x30000 SErr 0x0 action 0x6
Jan 22 01:38:08 hostname kernel: ata2.00: irq_stat 0x40000008
Jan 22 01:38:08 hostname kernel: ata2.00: failed command: READ FPDMA QUEUED
Jan 22 01:38:08 hostname kernel: ata2.00: cmd 60/78:80:88:00:00/00:00:00:00:00/40 tag 16 ncq dma 61440 in
res 43/84:78:f0:00:00/00:00:00:00:00/00 Emask 0x410 (ATA bus error) <F>
Jan 22 01:38:08 hostname kernel: ata2.00: status: { DRDY SENSE ERR }
Jan 22 01:38:08 hostname kernel: ata2.00: error: { ICRC ABRT }
Jan 22 01:38:08 hostname kernel: ata2: hard resetting link
Jan 22 01:38:08 hostname kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 01:38:08 hostname kernel: ata2.00: NCQ Send/Recv Log not supported
Jan 22 01:38:08 hostname kernel: ata2.00: NCQ Send/Recv Log not supported
Jan 22 01:38:08 hostname kernel: ata2.00: configured for UDMA/133
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=1s
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 Sense Key : Aborted Command [current]
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 Add. Sense: Scsi parity error
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 CDB: Read(10) 28 00 00 00 00 88 00 00 78 00
Jan 22 01:38:08 hostname kernel: blk_update_request: I/O error, dev sdb, sector 136 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0This indicates a data transfer error from your /dev/sdb disk. This might be caused by a bad SATA cable or connector, so you might first try moving the disk to a different SATA port or replacing the cable first. If those things won't help, it might be necessary to replace the disk. If the fault seems to come and go, make an extra backup of that disk ASAP.
|
My system failed to boot several times today.
After completely disconnecting it from power, I managed to get it back on, luckily, but I'd like to comprehend what has happened.
From a user's point of view, it was like this:
I turn on the PC, the fans are noisier than usual and the boot process would get stuck sometimes at the boot screen, sometimes after GRUB and sometimes in the short timespan where the image of my graphics card is displayed.
I checked the journal and discovered the following lines that didn't appear before. Here's the complete journal entry of one of the failed boots. The other entries look similar.
-- Boot 4c3326651829453c89c08358e88b8071 --
Jan 22 01:42:53 hostname kernel: Linux version 5.10.9-arch1-1 (linux@archlinux) (gcc (GCC) 10.2.0, GNU ld (GNU Binutils) 2.35.1) #1 SMP PREEMPT Tue, 19 Jan 2021 22:06:06 +0000
Jan 22 01:42:53 hostname kernel: Command line: BOOT_IMAGE=/vmlinuz-linux root=UUID=d825234f-4397-494f-9c61-e719a008ecbd rw loglevel=3 quiet
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input15
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input16
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input17
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input18
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input19
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input20
Jan 22 01:38:09 hostname kernel: input: HDA NVidia HDMI/DP,pcm=12 as /devices/pci0000:00/0000:00:02.1/0000:01:00.1/sound/card1/input21
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Front Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input22
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Rear Mic as /devices/pci0000:00/0000:00:14.2/sound/card0/input23
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Line as /devices/pci0000:00/0000:00:14.2/sound/card0/input24
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Line Out as /devices/pci0000:00/0000:00:14.2/sound/card0/input25
Jan 22 01:38:09 hostname audit[425]: NETFILTER_CFG table=filter family=10 entries=144 op=xt_replace pid=425 comm="ip6tables-resto"
Jan 22 01:38:09 hostname kernel: input: HD-Audio Generic Front Headphone as /devices/pci0000:00/0000:00:14.2/sound/card0/input26
Jan 22 01:38:09 hostname kernel: EDAC amd64: F15h detected (node 0).
Jan 22 01:38:09 hostname kernel: EDAC amd64: Error: F1 not found: device 0x1601 (broken BIOS?)
Jan 22 01:38:09 hostname audit[426]: NETFILTER_CFG table=filter family=10 entries=160 op=xt_replace pid=426 comm="ip6tables-resto"
Jan 22 01:38:09 hostname systemd-udevd[258]: controlC0: Process '/usr/bin/alsactl restore 0' failed with exit code 99.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jan 22 01:38:09 hostname systemd[1]: Condition check resulted in Create System Users being skipped.
Jan 22 01:38:09 hostname audit[435]: NETFILTER_CFG table=filter family=10 entries=168 op=xt_replace pid=435 comm="ip6tables-resto"
Jan 22 01:38:09 hostname kernel: EDAC amd64: F15h detected (node 0).
Jan 22 01:38:09 hostname kernel: EDAC amd64: Error: F1 not found: device 0x1601 (broken BIOS?)
Jan 22 01:38:09 hostname systemd[1]: Finished CLI Netfilter Manager.
Jan 22 01:38:09 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=ufw comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:38:09 hostname kernel: EDAC amd64: F15h detected (node 0).
Jan 22 01:38:09 hostname kernel: EDAC amd64: Error: F1 not found: device 0x1601 (broken BIOS?)
Jan 22 01:38:10 hostname kernel: nvidia: module license 'NVIDIA' taints kernel.
Jan 22 01:38:10 hostname kernel: Disabling lock debugging due to kernel taint
Jan 22 01:38:10 hostname kernel: nvidia-nvlink: Nvlink Core is being initialized, major device number 237
Jan 22 01:38:10 hostname kernel:
Jan 22 01:38:10 hostname kernel: nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
Jan 22 01:38:10 hostname kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel Module 460.32.03 Sun Dec 27 19:00:34 UTC 2020
Jan 22 01:38:10 hostname systemd-udevd[267]: controlC1: Process '/usr/bin/alsactl restore 1' failed with exit code 99.
Jan 22 01:38:10 hostname systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jan 22 01:38:10 hostname systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
Jan 22 01:38:10 hostname systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jan 22 01:38:10 hostname systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jan 22 01:38:10 hostname systemd[1]: Condition check resulted in Create System Users being skipped.
Jan 22 01:38:10 hostname kernel: random: crng init done
Jan 22 01:38:10 hostname kernel: random: 7 urandom warning(s) missed due to ratelimiting
Jan 22 01:38:10 hostname systemd[1]: Finished Load/Save Random Seed.
Jan 22 01:38:10 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 01:38:10 hostname systemd[1]: Condition check resulted in First Boot Complete being skipped.
Jan 22 01:38:10 hostname kernel: nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 460.32.03 Sun Dec 27 18:51:11 UTC 2020
Jan 22 01:38:10 hostname kernel: [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
Jan 22 01:38:10 hostname kernel: [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 0
Jan 22 01:38:10 hostname systemd[1]: Created slice system-systemd\x2dbacklight.slice.
Jan 22 01:38:10 hostname systemd[1]: Starting Load/Save Screen Backlight Brightness of backlight:acpi_video0...
Jan 22 01:38:10 hostname systemd[1]: Finished Load/Save Screen Backlight Brightness of backlight:acpi_video0.
Jan 22 01:38:10 hostname audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-backlight@backlight:acpi_video0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? termi>
Jan 22 01:38:18 hostname kernel: ata2: softreset failed (1st FIS failed)
Jan 22 01:38:28 hostname kernel: ata2: softreset failed (1st FIS failed)Note the timestamps. These jumps back to the past happened several times. Is that normal behavior or already an indication for problems?
I learned that they should always be in the correct time sequence.
When I closed and reopened the journal, the journal entries were ordered correctly though, regarding their timestamps.
The very first boot entry of this failing series is far longer (probably due to the soft reset).
I noticed the following entries in addition to the above:
Jan 22 01:38:08 hostname kernel: ACPI BIOS Warning (bug): Optional FADT field Pm2ControlBlock has valid Length but zero Address: 0x0000000000000000/0x1 (20200925/tbfadt-615)
...
Jan 22 01:38:08 hostname kernel: ata2.00: exception Emask 0x0 SAct 0x30000 SErr 0x0 action 0x6
Jan 22 01:38:08 hostname kernel: ata2.00: irq_stat 0x40000008
Jan 22 01:38:08 hostname kernel: ata2.00: failed command: READ FPDMA QUEUED
Jan 22 01:38:08 hostname kernel: ata2.00: cmd 60/78:80:88:00:00/00:00:00:00:00/40 tag 16 ncq dma 61440 in
res 43/84:78:f0:00:00/00:00:00:00:00/00 Emask 0x410 (ATA bus error) <F>
Jan 22 01:38:08 hostname kernel: ata2.00: status: { DRDY SENSE ERR }
Jan 22 01:38:08 hostname kernel: ata2.00: error: { ICRC ABRT }
Jan 22 01:38:08 hostname kernel: ata2: hard resetting link
Jan 22 01:38:08 hostname kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 22 01:38:08 hostname kernel: ata2.00: NCQ Send/Recv Log not supported
Jan 22 01:38:08 hostname kernel: ata2.00: NCQ Send/Recv Log not supported
Jan 22 01:38:08 hostname kernel: ata2.00: configured for UDMA/133
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=1s
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 Sense Key : Aborted Command [current]
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 Add. Sense: Scsi parity error
Jan 22 01:38:08 hostname kernel: sd 1:0:0:0: [sdb] tag#16 CDB: Read(10) 28 00 00 00 00 88 00 00 78 00
Jan 22 01:38:08 hostname kernel: blk_update_request: I/O error, dev sdb, sector 136 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0I suspect the power cord as the culprit as I remember having had boot problems before that I could fix with reconnecting the power cord. Is there anyone here that can help me understand these logs better.
I am especially curious about the entries that refer to the BIOS. Without knowing the circumstances, what information can you extract from that?
| How to read these journal entries after failed boot? |
Actually, this is a GPU driver bug. It's already in your paste.
ene 22 18:59:01 dlag-pc kernel: i915 0000:00:02.0: GPU HANG: ecode 9:1:0x00000000, hang on rcs0
ene 22 18:59:01 dlag-pc kernel: GPU hangs can indicate a bug anywhere in the entire gfx stack, including userspace.
ene 22 18:59:01 dlag-pc kernel: Please file a _new_ bug report on bugs.freedesktop.org against DRI -> DRM/Intel
ene 22 18:59:01 dlag-pc kernel: drm/i915 developers can then reassign to the right component if it's not a kernel issue.
ene 22 18:59:01 dlag-pc kernel: The GPU crash dump is required to analyze GPU hangs, so please always attach it.
ene 22 18:59:01 dlag-pc kernel: GPU crash dump saved to /sys/class/drm/card0/error
ene 22 18:59:01 dlag-pc kernel: i915 0000:00:02.0: Resetting rcs0 for hang on rcs0
ene 22 18:59:01 dlag-pc kernel: [drm:gen8_reset_engines [i915]] *ERROR* rcs0 reset request timed out: {request: 00000001, RESET_CTL: 00000001}
ene 22 18:59:01 dlag-pc kernel: i915 0000:00:02.0: Resetting chip for hang on rcs0
ene 22 18:59:01 dlag-pc kernel: [drm:gen8_reset_engines [i915]] *ERROR* rcs0 reset request timed out: {request: 00000001, RESET_CTL: 00000001}The bug is known to affect kernel:5.3 & kernel:5.4. It's already resolved in kernel:5.5 tree.
|
My system freezes and the only solution is to force a shutdown. I'm not sure, but maybe the problem is due to chromium-vaapi, some misconfiguration, or something like that.
I'm in 5.4.13-arch1-1 #1 SMP PREEMPT.
Here is my journalctl output from the freeze.
Thanks a lot.
| System completly freezes often (maybe chromium-vaapi) |