Being a virtuously lazy sysadmin, I came up with the following, which will start and enable the btrfs-scrub@.timer units on all currently mounted btrfs filesystems: awk '$3=="btrfs" { system("systemd-escape " $2 "| cut -c2-") }' /etc/fstab | while read -r fs; do [[ -z $fs ]] && fs=- # Set to '-' for the root FS sudo systemctl enable btrfs-scrub@"$fs".timer sudo systemctl start btrfs-scrub@"$fs".timer done. Credit to @Head_on_a_Stick for pointing me in the right direction.
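A quick way to confirm the result afterwards (the unit glob below is an assumption based on the template name used in the loop) is to list the instantiated timers:
systemctl list-timers 'btrfs-scrub@*'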
The btrfs-scrub manpage says: The user is supposed to run it manually or via a periodic system service. The recommended period is a month but could be less. For systemd users, how is this automated, capturing all output in the journal? I am running Manjaro, which is based on Arch Linux.
Periodically running btrfs-scrub
This is very well answered by this answer to Prevent systemd timer from running on startup. I will summarize. The problem is that I have always included something like WantedBy=basic.target in the [Install] section of the .service file (because it's part of the standard systemd service copy pasta). It turns out this actually causes the unit to be started whenever basic.target is (aka system boot). https://www.freedesktop.org/software/systemd/man/systemd.unit.html#WantedBy= https://www.freedesktop.org/software/systemd/man/systemd.special.html#basic.target TLDR; You do not want an [Install] section in a .service file that is triggered by a .timer file. As an additional step, you have to disable your service with systemctl disable [service_name]. Then you will notice that if you want to enable it again, you cannot, and a similar error will be displayed. Note the part that says "started when needed via activation (socket, path, timer, ...)": The unit files have no installation config (WantedBy=, RequiredBy=, Also=, Alias= settings in the [Install] section, and DefaultInstance= for template units). This means they are not meant to be enabled using systemctl. Possible reasons for having this kind of unit are: • A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory. • A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it. • A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...). • In case of template units, the unit is meant to be enabled with some instance name specified.
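As a minimal sketch of the recommended layout (unit names here are hypothetical): the [Install] section lives only in the .timer, and the .service has none, so nothing pulls the service in at boot.

# myjob.timer
[Unit]
Description=Run myjob nightly

[Timer]
OnCalendar=*-*-* 03:15:00
Unit=myjob.service

[Install]
WantedBy=timers.target

# myjob.service (note: no [Install] section)
[Unit]
Description=My nightly job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob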
I have the following unit enabled and started [Unit] Description=Schedule a nightly execution at 03.15 for Backup ROOT # Allow manual start RefuseManualStart=no # Allow manual stop RefuseManualStop=no[Timer] #Execute job if it missed a run due to machine being off Persistent=false # Run every night 03.15 OnCalendar=*-*-* 03:15:00 #File describing job to execute [emailprotected][Install] WantedBy=timers.targetIt will correctly run every night at 3.15am, but it also runs on boot, when it creates a mess! Why is this happening and how to stop it?
OnCalendar systemd timer unit still executes at boot, how to stop it?
For a monotonic timer that runs at regular intervals (e.g. every X minutes and Y seconds, every X days and Y minutes, etc.) you have to define a starting point and a repetition value. This can be accomplished by using two directives in the [Timer] section. The settings and their starting points are explained in the systemd.timer man page, which also mentions that: Multiple directives may be combined of the same and of different types, in which case the timer unit will trigger whenever any of the specified timer expressions elapse. For example, by combining OnBootSec= and OnUnitActiveSec=, it is possible to define a timer that elapses in regular intervals and activates a specific service each time. So, for a timer that runs every 1 hour and 18 minutes you could use something like: [Timer] Unit=test.service AccuracySec=1s OnActiveSec=5s OnUnitActiveSec=1h 18min In the above, the initial trigger is OnActiveSec= (Defines a timer relative to the moment the timer unit itself is activated.) and the repetition is done by OnUnitActiveSec= (Defines a timer relative to when the unit the timer unit is activating was last activated.) Note that for the time spans provided as arguments you can use any of the time units described in the systemd.time man page, which means that OnUnitActiveSec=1h 18min is the same as OnUnitActiveSec=78m or OnUnitActiveSec=4680.
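As a side note, newer systemd releases ship a systemd-analyze timespan verb (an assumption about the version in use) that shows how a time span string is parsed, which is handy for checking equivalences like the ones above:
systemd-analyze timespan '1h 18min'
systemd-analyze timespan '78m'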
I'm looking for a systemd timer that runs every 1 hour and 18 minutes, but the only lead I have is systemd-analyze --iterations=3 calendar *:0/18, which is every 18 minutes.
systemd timer every 1 hour and X minutes?
You have probably hit this systemd bug, which occurs when your RTC is set to local time (timedatectl will confirm this). Either upgrade systemd or set your RTC to UTC with # timedatectl set-local-rtc 0 (the latter is preferable). Quoting the timedatectl manual: Note that maintaining the RTC in the local timezone is not fully supported and will create various problems with time zone changes and daylight saving adjustments. If at all possible, keep the RTC in UTC mode. Here "problems with daylight saving adjustments" means that if your machine is off during a daylight saving time change (which occurred not long ago) then the time read from the RTC "will NOT be adjusted for the change" (quoted from the hwclock manual).
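A quick sketch of how to check the current RTC mode before changing anything (the grep pattern matches the field timedatectl prints):
$ timedatectl | grep 'RTC in local TZ'
RTC in local TZ: yes
A "yes" here means the RTC is kept in local time; timedatectl set-local-rtc 0 switches it back to UTC.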
When we issue "systemctl status", we usually get a line in the output showing the status and for how long the unit has been in that status, like this (I issued it a few minutes ago): Active: active (running) since Wed 2023-11-22 01:56:06 CST; 10h ago However, I got the following line for the same service when the system time was 01:19:27 CST: Active: active (running) since Wed 2023-11-22 **01:56:06** CST; 36min **left** Why is the time after "since" in the future? And why does it show the time "left"? Left for what? I expected to see a time in the past and to see "x time units ago". I tried to issue "systemctl list-timers --all" to find out if there is a timer related to that service, but I found none.
Why systemctl status shows a time in the future and the amount of time left?
From systemd.timer(5): If a timer configured with OnBootSec= or OnStartupSec= is already in the past when the timer unit is activated, it will immediately elapse and the configured unit is started. This is not the case for timers defined in the other directives. Since your timer unit sets OnBootSec=0min, it will always start the service unit immediately.
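A sketch of the timer with the boot trigger removed, assuming the only goal is the daily 05:01 UTC run (WantedBy=timers.target is the conventional target for timers, though multi-user.target also works):

[Unit]
Description=foo timer

[Timer]
OnCalendar=*-*-* 05:01:00 UTC
Unit=foo.service

[Install]
WantedBy=timers.target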
So I have my service unit (runs some node code) and my timer unit. Service: [Unit] Description=foo [Service] Type=oneshot ExecStart=/home/ubuntu/services/foo/start.sh Timer: [Unit] Description=foo timer [Timer] OnBootSec=0min OnCalendar=*-*-* 05:01:00 UTC Unit=foo.service [Install] WantedBy=multi-user.target I understand that systemctl start foo.timer will start the timer unit but not the service unit immediately (without reboot), and systemctl enable foo.timer will not start the timer unit but will start it when the system boots up. I wanted to start the timer immediately so I used the former. It did start the timer, but it also started my foo.service as soon as I started the timer, even though the OnCalendar conditions were not met. It still started correctly at the OnCalendar time, too (I was testing so I chose a time in the near future). I'm wondering if there is anything in my unit file causing this. I thought that starting the timer would not start the service, but only start the service when the condition in the timer is fulfilled. I also tested the systemctl enable way and that one actually did behave correctly: I enabled my foo.timer and rebooted the system; the timer was active and the service wasn't started immediately. When the OnCalendar time came, the service ran.
systemctl start foo.timer also starts foo.service even though the OnCalendar criteria hasn't been met
Run systemctl --user without any other parameters to see a listing of all units the user-level services can interact with. You will probably find something like sys-subsystem-net-devices-eno1.device. But note that this might not be the optimal way to react to network status changes: instead, you could drop a script into /etc/NetworkManager/dispatcher.d/ or any of its sub-directories to be executed any time there is a network event. Read the DISPATCHER SCRIPTS chapter in man NetworkManager for details. Or if it needs to be a user-level thing with no root access at all, you could connect into the system D-Bus and monitor NetworkManager events. You might start with: gdbus monitor --system --dest org.freedesktop.NetworkManager and refine from there according to your specific needs. You might be looking for org.freedesktop.NetworkManager.StateChanged events, or some specific variety of org.freedesktop.NetworkManager.Connection.Active.PropertiesChanged events, for example. Connecting into D-Bus might be the appropriate solution if you use more advanced scripting languages like Perl or Python, instead of just shell scripting; those languages have modules that can more easily interface with D-Bus.
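A minimal dispatcher-script sketch (the file name is an arbitrary choice; NetworkManager runs every root-owned executable in that directory with the interface name as $1 and the event as $2):

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-notify -- must be executable and owned by root
interface="$1"
event="$2"
case "$event" in
    up|down)
        logger "NetworkManager: interface $interface went $event"
        ;;
esac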
I want to bind a user systemd timer (or service) to network events. For example consider this service: [Unit] Description=shows if connection changed[Service] Type=oneshot Environment=DISPLAY=:0 ExecStart=notify-send "Network" "Status changed!"How can I force this user service to run on network up/down events? I asked a similar question before. It seems I should use the PartOf= directive but what target should I use for this? 1- Note that I've defined this service in ~/.config/systemd/user/ so its scope is user-level. That means it can't depend on system targets. 2- If we define it as a system-level service, what is the proper hook (.target) that causes this service to trigger? I've monitored system service when I toggle the WiFi switch. Only NetworkManager-dispatcher.service gets triggered on such event and after doing its task, it gets de-activated. So it seems I can't depend on it. network.target, network-online.target, NetworkManager.service, network-manager.service are all loaded and active even when I turn off system's WiFi.
How to bind a user-level systemd service to network events?
Try taking the Requires=btrfs_backup.service out of the timer. The systemd.unit(5) man page says Requires= will activate the required units as well, so activating the timer will also activate btrfs_backup.service.
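A trimmed sketch of btrfs_backup.timer without the Requires= line (otherwise identical to the unit in the question):

[Unit]
Description=Create mirror of current state of all BTRFS snapshots

[Timer]
Unit=btrfs_backup.service
OnCalendar=*-*-* *:05:00
Persistent=false

[Install]
WantedBy=timers.target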
After trying the solutions posted here (Prevent systemd timer from running on startup), I thought I had my systemd timer problems corrected. However, after my last reboot, my service fired off during boot (evidently making up for a missed event). Here are the files in question: btrfs_backup.timer [Unit] Description=Create mirror of current state of all BTRFS snapshots Requires=btrfs_backup.service[Timer] # hourly, with a 5-minute delay, as to not interfere with the # snapper-timeline.service. Unit=btrfs_backup.service OnCalendar=*-*-* *:05:00 Persistent=false[Install] WantedBy=timers.targetbtrfs_backup.service [Unit] Description=Create mirror of current state of all BTRFS snapshots Wants=btrfs_backup.timer[Service] Type=oneshot ExecStart=/usr/local/sbin/btrfs_backup Environment="DISPLAY=:0.0" Environment="XDG_RUNTIME_DIR=/run/user/0"I basically copied these (2) files from snapper's timeline systemd files, so I don't understand why this setup is not working. I even looked into the suggestion of removing the [Install] section from the timer, but every timer on my system (including the snapper one) have an [Install] section. Everything else works great - service completes correctly, notifications are seen on the desktop. UPDATE #1: journal entries for the last couple of hours that surround the problem event: Mar 01 14:05:09 dss-mint systemd[1]: Starting Create mirror of current state of all BTRFS snapshots... Mar 01 14:05:09 dss-mint systemd[1]: btrfs_backup.service: Succeeded. Mar 01 14:05:09 dss-mint systemd[1]: Finished Create mirror of current state of all BTRFS snapshots. Mar 01 15:05:09 dss-mint systemd[1]: Starting Create mirror of current state of all BTRFS snapshots... Mar 01 15:05:09 dss-mint systemd[1]: btrfs_backup.service: Succeeded. Mar 01 15:05:09 dss-mint systemd[1]: Finished Create mirror of current state of all BTRFS snapshots. Mar 01 16:05:09 dss-mint systemd[1]: Starting Create mirror of current state of all BTRFS snapshots... Mar 01 16:05:09 dss-mint systemd[1]: btrfs_backup.service: Succeeded. Mar 01 16:05:09 dss-mint systemd[1]: Finished Create mirror of current state of all BTRFS snapshots. -- Reboot -- Mar 01 17:24:01 dss-mint systemd[1]: Starting Create mirror of current state of all BTRFS snapshots... Mar 01 17:24:01 dss-mint systemd[1]: btrfs_backup.service: Succeeded. Mar 01 17:24:01 dss-mint systemd[1]: Finished Create mirror of current state of all BTRFS snapshots. Mar 01 18:05:19 dss-mint systemd[1]: Starting Create mirror of current state of all BTRFS snapshots... Mar 01 18:05:19 dss-mint systemd[1]: btrfs_backup.service: Succeeded. Mar 01 18:05:19 dss-mint systemd[1]: Finished Create mirror of current state of all BTRFS snapshots.UPDATE #2: OK, powered on the PC this morning. 
While the service ran soon after the boot was complete, the timer was not triggered: $ ls-timers NEXT LEFT LAST PASSED UNIT ACTIVATES Tue 2021-03-02 08:43:16 PST 12min left n/a n/a systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service Tue 2021-03-02 09:00:00 PST 29min left n/a n/a snapper-timeline.timer snapper-timeline.service Tue 2021-03-02 09:01:57 PST 31min left n/a n/a btrfs_backup.timer btrfs_backup.service Tue 2021-03-02 11:17:38 PST 2h 46min left Mon 2021-03-01 19:30:22 PST 13h ago fwupd-refresh.timer fwupd-refresh.service Tue 2021-03-02 16:53:23 PST 8h left Mon 2021-03-01 12:52:17 PST 19h ago motd-news.timer motd-news.service Wed 2021-03-03 00:00:00 PST 15h left Tue 2021-03-02 08:28:52 PST 1min 53s ago logrotate.timer logrotate.service Wed 2021-03-03 00:00:00 PST 15h left Tue 2021-03-02 08:28:52 PST 1min 53s ago man-db.timer man-db.service Sun 2021-03-07 03:10:21 PST 4 days left Sun 2021-02-28 08:27:47 PST 2 days ago e2scrub_all.timer e2scrub_all.service Mon 2021-03-08 00:00:00 PST 5 days left Mon 2021-03-01 08:33:26 PST 23h ago fstrim.timer fstrim.service n/a n/a Tue 2021-03-02 08:30:06 PST 39s ago anacron.timer anacron.service n/a n/a Tue 2021-03-02 08:28:52 PST 1min 53s ago snapper-boot.timer snapper-boot.serviceI don't understand this - I've never enabled the service so that it would only be triggered by the timer: $ systemctl status btrfs_backup.service ● btrfs_backup.service - Create mirror of current state of all BTRFS snapshots Loaded: loaded (/etc/systemd/system/btrfs_backup.service; static; vendor preset: enabled) Active: inactive (dead) since Tue 2021-03-02 08:35:24 PST; 1min 8s ago TriggeredBy: ● btrfs_backup.timer Process: 1206 ExecStart=/usr/local/sbin/btrfs_backup (code=exited, status=0/SUCCESS) Main PID: 1206 (code=exited, status=0/SUCCESS)
Trying to stop systemd timers from triggering missed events
Found my issue, I needed to upgrade systemd. Which is what the jobs were supposed to do, of course...
I have a very weird issue on Debian Buster. I've enabled unattended-upgrades on the server as this is a very bare bones server and it should just update automatically. However, it seems that the apt timers for this never start. When I check all timers, I get the following: # systemctl list-timers NEXT LEFT LAST PASSED UNIT ACTIVATES Fri 2021-01-15 20:00:00 UTC 58min left Fri 2021-01-15 19:00:00 UTC 1min 13s ago logrotate.timer logrotate.service Sat 2021-01-16 00:00:00 UTC 4h 58min left Fri 2021-01-15 00:00:00 UTC 19h ago man-db.timer man-db.service Sat 2021-01-16 18:03:26 UTC 23h left Fri 2021-01-15 18:03:26 UTC 57min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service n/a n/a Thu 2020-11-26 06:15:45 UTC 1 months 20 days ago apt-daily-upgrade.timer apt-daily-upgrade.service n/a n/a Wed 2020-11-25 20:32:27 UTC 1 months 20 days ago apt-daily.timer apt-daily.serviceWhen I check the definition of the timer, it seems fine: # systemctl cat apt-daily.timer # /lib/systemd/system/apt-daily.timer [Unit] Description=Daily apt download activities[Timer] OnCalendar=*-*-* 6,18:00 RandomizedDelaySec=12h Persistent=true[Install] WantedBy=timers.targetAnd analyze seems to agree that the syntax is fine: # systemd-analyze calendar "*-*-* 6,18:00" Original form: *-*-* 6,18:00 Normalized form: *-*-* 06,18:00:00 Next elapse: Sat 2021-01-16 06:00:00 UTC From now: 10h leftBut they do not fire. I tried running systemctl start apt-daily.timer, systemctl enable --now apt-daily.timer and systemctl restart timers.target, no effect. The command returns without an error, but nothing changes. I'm at a loss on how to debug this further, any hints would be much appreciated.
Systemd timers will not fire
I determined the issue by looking at the systemctl --user list-units --all ffmpeg* output. The ffmpeg-timelapse.target was remaining loaded/active/active. Prior to the actual triggered event the ffmpeg-timelapse.timer have the SUB set to waiting. UNIT LOAD ACTIVE SUB JOB DESCRIPTION ffmpeg-timelapse.target loaded active active start Triggers the individual timelapse units for each camera. ffmpeg-timelapse.timer loaded active running Runs ffmpeg timelapse units every minute The fault lies in the [Unit] configuration of the ffmpeg-timelapse.target. I needed to add the oneshot configuration to it, otherwise the target unit remained active. $ cat ffmpeg-timelapse.target [Unit] Type=oneshot Description=Triggers the individual timelapse units for each camera. StopWhenUnneeded=yesIt's now repeating every minute as expected. $ systemctl --user list-timers NEXT LEFT LAST PASSED UNIT ACTIVATES Sat 2020-06-06 12:58:00 EDT 13s left Sat 2020-06-06 12:57:42 EDT 3s ago ffmpeg-timelapse.timer ffmpeg-timelapse.targetThis is what the units look like now. UNIT LOAD ACTIVE SUB DESCRIPTION ffmpeg-timelapse01-front-yard.service loaded inactive dead Front Yard Timelapse Unit ffmpeg-timelapse.target loaded inactive dead Triggers the individual timelapse units for each camera. ffmpeg-timelapse.timer loaded active waiting Runs ffmpeg timelapse units every minute Status output for each one. $ systemctl --user status ffmpeg-timelapse.timer ● ffmpeg-timelapse.timer - Runs ffmpeg timelapse units every minute Loaded: loaded (/home/timelapse/.config/systemd/user/ffmpeg-timelapse.timer; enabled; vendor preset: enabled) Active: active (waiting) since Sat 2020-06-06 12:59:30 EDT; 3min 43s ago$ systemctl --user status ffmpeg-timelapse.target ● ffmpeg-timelapse.target - Triggers the individual timelapse units for each camera. Loaded: loaded (/home/timelapse/.config/systemd/user/ffmpeg-timelapse.target; static; vendor preset: enabled) Active: inactive (dead) since Sat 2020-06-06 13:03:11 EDT; 6s ago$ systemctl --user status ffmpeg-timelapse01-front-yard.service ● ffmpeg-timelapse01-front-yard.service - Front Yard Timelapse Unit Loaded: loaded (/home/timelapse/.config/systemd/user/ffmpeg-timelapse01-front-yard.service; enabled; vendor preset: enabled) Active: inactive (dead) since Sat 2020-06-06 13:03:12 EDT; 8s ago Process: 9607 ExecStart=/bin/bash -ac '. camera01.conf ; exec ffmpeg-timelapse.sh' Main PID: 9607 (code=exited, status=0/SUCCESS)
I'm trying to run a systemd timer every minute as a user, but it isn't repeating after the initial trigger. The ffmpeg-timelapse.timer is configured with the OnCalendar=minutely to fire every minute, and the ffmpeg-timelapse.target is WantedBy the dependent services. This allows me to easily add/remove cameras from the timelapse configuration. The issue I am encountering is when I start the ffmpeg-timelapse.timer unit it will schedule for the next minute, but it will not repeat. The same issue occurs if I start it with the --now argument. ffmpeg-timelapse.timer [Unit] Description=Runs ffmpeg timelapse units every minute[Timer] OnCalendar=minutely Unit=ffmpeg-timelapse.target[Install] WantedBy=timers.targetffmpeg-timelapse.target [Unit] Description=Triggers the individual timelapse units for each camera. StopWhenUnneeded=yesAn example of the service file for a camera. ffmpeg-timelapse01-front-yard.service [Unit] Description=Front Yard Timelapse Unit Wants=ffmpeg-timelapse.timer[Service] ExecStart=/bin/bash -ac '. camera01.conf ; exec ffmpeg-timelapse.sh'[Install] WantedBy=ffmpeg-timelapse.targetEnabling and starting the service schedules it for the next minute. $ systemctl --user start ffmpeg-timelapse.timer $ systemctl --user list-timers NEXT LEFT LAST PASSED UNIT ACTIVATES Sat 2020-06-06 12:08:00 EDT 12s left n/a n/a ffmpeg-timelapse.timer ffmpeg-timelapse.target1 timers listed. Pass --all to see loaded but inactive timers, too.However once it runs it does not fire a second time. $ systemctl --user list-timers NEXT LEFT LAST PASSED UNIT ACTIVATES n/a n/a Sat 2020-06-06 12:08:42 EDT 1min 2s ago ffmpeg-timelapse.timer ffmpeg-timelapse.target1 timers listed. Pass --all to see loaded but inactive timers, too.The user I am running this as has linger enabled. $ loginctl show-user timelapse UID=1000 GID=1000 Name=timelapse Timestamp=Tue 2020-04-07 16:16:20 EDT TimestampMonotonic=3291000946930 RuntimePath=/run/user/1000 [emailprotected] Slice=user-1000.slice Display=411982 State=active Sessions=412092 411982 163185 IdleHint=no IdleSinceHint=0 IdleSinceHintMonotonic=0 Linger=yesThe status output looks correct to me. $ systemctl --user status ffmpeg-timelapse.target ● ffmpeg-timelapse.target - Triggers the individual timelapse units for each camera. Loaded: loaded (/home/timelapse/.config/systemd/user/ffmpeg-timelapse.target; static; vendor preset: enabled) Active: active since Sat 2020-06-06 10:50:42 EDT; 1h 23min ago$ systemctl --user status ffmpeg-timelapse.timer ● ffmpeg-timelapse.timer - Runs ffmpeg timelapse units every 5 minutes Loaded: loaded (/home/timelapse/.config/systemd/user/ffmpeg-timelapse.timer; enabled; vendor preset: enabled) Active: active (running) since Sat 2020-06-06 12:13:25 EDT; 1min 1s ago$ systemctl --user status ffmpeg-timelapse01-front-yard.service ● ffmpeg-timelapse01-front-yard.service - Front Yard Timelapse Unit Loaded: loaded (/home/timelapse/.config/systemd/user/ffmpeg-timelapse01-front-yard.service; enabled; vendor preset: enabled) Active: inactive (dead) since Sat 2020-06-06 12:14:03 EDT; 35s ago Process: 4491 ExecStart=/bin/bash -ac '. camera01.conf ; exec ffmpeg-timelapse.sh' Main PID: 4491 (code=exited, status=0/SUCCESS)Below is the output of journalctl -xe Jun 06 12:13:25 srv01 systemd[26482]: Started Runs ffmpeg timelapse units every minute. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done. 
Jun 06 12:14:02 srv01 systemd[26482]: Started Front Yard Timelapse Unit. -- Subject: Unit UNIT has finished start-up -- Defined-By: systemd -- Support: https://www.debian.org/support -- -- Unit UNIT has finished starting up. -- -- The start-up result is done.
systemd User Timer Not Repeating
I had the same requirements. I can confirm that two timers catching up at boot result in a race condition, with a random outcome. And what about a scenario where the system reboots more than once between 12 and 13? Persistent may not be of any help in that case. My conclusion (but I would love to be proven wrong) is that systemd is simply not made for this use case. My workaround is to use the above configuration, without the Persistent flags. Then I create a new idle service dedicated to handling state after startup. This service compares the current time with the current target service state, and starts/stops the service if they do not match. Note that the start/stop times need to be duplicated in the timers and in the 'startup manager' service script, which is not ideal. That's the only reliable way I found to ensure my service is running during the selected time period, regardless of the various reboots.
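A rough sketch of such a 'startup manager' script, assuming the 12:00-13:00 window and the A.service name from the question; the window is duplicated here, as noted above:

#!/bin/sh
# Reconcile A.service with the configured time window after boot.
hour=$(date +%H)
if [ "$hour" -ge 12 ] && [ "$hour" -lt 13 ]; then
    # inside the window: make sure A is running
    systemctl is-active --quiet A.service || systemctl start A.service
else
    # outside the window: make sure A is stopped
    systemctl is-active --quiet A.service && systemctl stop A.service
fi
exit 0   # don't report failure just because A was already stopped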
Say I want a a program to be active between 12:00h and 13:00h on a desktop machine (i.e. if PC is on at either time cannot be guaranteed). I'll assume I can use systemd units for this. Starting service A with a timer and accounting for poweroff times is simple: A.timer [Timer] Unit=A OnCalendar=*-*-* 12:00:00 Persistent=trueStopping A could be done with a conflicting service as suggested earlier A.service (with conflict): #add conflict to A.service [Unit] Description=This is A.service Conflicts=killA.service #make sure killA considers A to be active RemainAfterExit=yeskillA.service: [Unit] Description=kills A [Service] Type=oneshot ExecStart=/bin/true #kill only if A is running: Requires=A.servicekillA.timer [Timer] Unit=killA OnCalendar=*-*-* 13:00:00 Persistent=trueNow there are several possible scenarios:Powered on from 11h-14h: start and stop as expected Powered on from 11h to 12:30h: start as expected but never stopped -> what will happen at next poweron?at 12:45: will A start? (last start less than 24h ago) at 14h: killA should not run as A should not be active at 12:15 next day? (last start less than 24 h ago, but killA start was missed) at 12:45 next day? (last start >24h ago, but killA <24h ago. Will it be stopped by killA?)Powered on from 12:30 to 14h: start and stop as expected (due to persistence in A.timer) powered on at 14h: A starts due to persistence. Will killA deactivate it immediately due to dependency? Can starting A be avoided in the first place?In short: How to use systemd timers to make sure a service is active in a given time interval and inactive outside the time interval independently of shutdowns/boots ? Hands-on use-case example: allow ssh-access only at working hours. PS: Maybe I just don't get systemd timers very well, yet?
systemd timer: enable service for specific time ranges only
Treat the *.service and *.timer independently by explicitly defining the unit files: override_dh_installsystemd: dh_installsystemd --name=myscript myscript.service --no-start dh_installsystemd --name=myscript myscript.timer
I have a systemd service+timer that I'd like to install which does not match my packagename. # debian/mypackage.myscript.timer [Timer] OnCalendar=weekly Persistent=true[Install] WantedBy=timers.target# debian/mypackage.myscript.service [Service] ExecStart=/usr/bin/myscript# debian/rules %: dh $@override_dh_installsystemd: dh_installsystemd --name=myscriptBut then on install I get: Setting up mypackage (1.38) ... Created symlink /etc/systemd/system/timers.target.wants/myscript.timer → /lib/systemd/system/myscript.timer. myscript.service is a disabled or a static unit, not starting it.How can I hide that last message?I tried: dh_installsystemd --name=myscript --no-startThis does address the installation message. However, it prevents the timer from starting.
Install systemd timer + service silently with dh_installsystemd
Okay, in the process of writing this up I noticed a Requires=mysql_tzinfo.service in the [Unit] section of my timer unit. It occurred to me that the starting of the timer at boot (dependency resolution by systemd) might be starting the service due to this Requires. Sure enough, removing this line from the timer and rebooting... the service no longer starts. That is what I get for not closely checking every config option when following "technical blogs". Summary of important points for people who are trying to move from cron to systemd timers: Do not add an [Install] section to your service unit and do not systemctl enable your SERVICE unit. Do not add a Requires=<SERVICE_NAME>.service to your TIMER unit. If you want your timer to remember when it ran and run missed runs on boot, add Persistent=true to your TIMER. The timers run fairly early at boot and so you should add the necessary After= requirements to the SERVICE unit (that your TIMER unit runs) to make sure the service unit doesn't run until all required services are online (network, database, etc.). Otherwise it will likely fail.
Summary A systemd service unit that is disabled ("static") and is only supposed to be triggered by a timer is run on every reboot. Background This was a service unit (in /etc/systemd/system/) that previously had an [Install] section and was systemctl enabled. The [Install] section was removed and the service disabled. The service is now triggered by a timer. The timer is set to run once per month and is persistent so it tracks last runs and won't trigger on reboot unless a run was missed while shutdown. I ran systemctl --system daemon-reload after making the changes. The timer works fine and triggers the service on schedule as expected. The problem On reboot the service unit always runs regardless of last run of the timer and the fact the timer is persistent. I've verified (via systemctl list-timers) that it is not the timer unit that is triggering the service unit (unless the list-timers output about last triggered time is wrong). systemctl is-enabled <service-unit> shows static (aka disabled, no [Install] section in unit). find /etc/systemd/system/*.wants -name <service-unit> does not show any installed symlinks left-over from when this service was previously installed/enabled. I suspect there is something "left over" from when this service-unit was previously installed that is causing this service to start on reboot but don't know where to look. This is on ubuntu 20.04 (in case there happens to be a known bug/issue). Is there a way to debug why systemd started a unit? (e.g. unit X started because wants Y in file Z). Is there a way to double-check this service really wasn't started by the timer (instead of just going by the list-timers output)? Service Unit # cat /etc/systemd/system/mysql_tzinfo.service [Unit] Description=mysql_tzinfo Wants=mysql_tzinfo.timer[Service] Type=oneshot Environment= WorkingDirectory=/tmp ExecStart=/bin/sh -c "/usr/bin/mysql_tzinfo_to_sql /usr/share/zoneinfo | /usr/bin/mysql --user=root mysql"User=root Group=rootTimer Unit # cat /etc/systemd/system/mysql_tzinfo.timer [Unit] Description=Timer for mysql_tzinfo.service Requires=mysql_tzinfo.service[Timer] Unit=mysql_tzinfo.service OnCalendar=*-*-05 04:00:00 AccuracySec=30s Persistent=true[Install] WantedBy=timers.target
disabled/static systemd service unit is always started at reboot (systemd timers as cron replacement)
You could be looking in the wrong place. Units can be in several places. $ systemctl cat systemd-tmpfiles-clean.service # /lib/systemd/system/systemd-tmpfiles-clean.service ...(you can also see a command here: $ systemctl status systemd-tmpfiles-clean.service ● systemd-tmpfiles-clean.service - Cleanup of Temporary Directories Loaded: loaded (/lib/systemd/system/systemd-tmpfiles-clean.service; static) Active: inactive (dead) since Sun 2017-07-16 17:34:00 BST; 16h ago Docs: man:tmpfiles.d(5) man:systemd-tmpfiles(8) Process: 28580 ExecStart=/bin/systemd-tmpfiles --clean (code=exited, status=0/SUCCESS) Main PID: 28580 (code=exited, status=0/SUCCESS)To doublecheck the associated service: $ systemctl show -p Unit systemd-tmpfiles-clean.timer Unit=systemd-tmpfiles-clean.service
As an example, take the phpsessionclean schedule. The cron.d file for this looks like this: 09,39 * * * * root [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fiIt's saying if systemd doesn't exist on the system run the script /usr/lib/php/sessionclean. If systemd does exist it doesn't run and the systemd timer runs instead. The phpsessionclean.timer file looks like this: [Unit] Description=Clean PHP session files every 30 mins[Timer] OnCalendar=*-*-* *:09,39:00 Persistent=true[Install] WantedBy=timers.targetI read about creating your own .timer files and creating an associated .service file containing the details of the script you're running, but in this case, and in the case of other .timer files installed by packages (such as certbot, apt etc.) there are no associated .service files. So, how do I infer what command is going to be executed when this timer runs?
How to see what command is being run by a systemd .timer file?
Your primary question seems to be how to start a second systemd service immediately after a first one was successfully started. It doesn't state what to do in case the pyznap.service is finished but the pyznap-send.service lacks the network connection to actually send the snapshots, so I will assume that you don't intend a sophisticated retry-logic. In that case, you may consider omitting the second service and adding an ExecStartPost line to the pyznap.service that calls pyznap with the send option: ExecStartPost=/usr/bin/pyznap sendAlternatively, if you want to have slightly more of systemd's control features, you could still define the pyznap-send.service, but activate it from the pyznap.service via an ExecStartPost statement: ExecStartPost=/usr/sbin/systemctl start pyznap-send.serviceIn that case, you will need to remove the After=pyznap.service statement in the pyznap-send.service file, since the pyznap.service may not yet be registered as "started" when the ExecStartPost line starting the pyznap-send.service is called.
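A sketch of the first variant, with the send step folded into pyznap.service via ExecStartPost (unit contents otherwise taken from the question):

[Unit]
Description=Create and send ZFS snapshots
Documentation=man:pyznap(1)
Requires=local-fs.target
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/bin/pyznap snap
# For Type=oneshot, ExecStartPost runs only after ExecStart has finished successfully
ExecStartPost=/usr/bin/pyznap send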
I have two service oneshots:one that creates snapshots one that sends snapshotsThe second one (send) should always be executed after the first one (create) is finished. Currently the services, which are of type oneshot, run at the same time. The services are defined as follows (the exact used command from the following examples doesn't matter for this question, detailed requirements below the examples):Service to create snapshots (pyznap.service): [Unit] Description=Create ZFS snapshots Documentation=man:pyznap(1) Requires=local-fs.target After=local-fs.target[Service] Type=oneshot ExecStart=/usr/bin/pyznap snapService to send snapshots (pyznap-send.service): [Unit] Description=Send ZFS snapshots Documentation=man:pyznap(1) Requires=local-fs.target network-online.target After=local-fs.target network-online.target pyznap.service[Service] Type=oneshot ExecStart=/usr/bin/pyznap sendCurrently, they are triggered by individual (and independent) timers:The timer for the create snapshot (pyznap.timer): [Unit] Description=Run pyznap snap every 15 minutes[Timer] OnCalendar=*:0/15 Persistent=true[Install] WantedBy=timers.targetThe timer for the send snapshot (pyznap-send.timer): [Unit] Description=Run pyznap send every 15 minutes[Timer] OnCalendar=*:0/15 Persistent=true[Install] WantedBy=timers.targetAdditional notes:The question is related to "How do you configure multiple systemd services to use one timer?, but this solution still runs the services at the same time As per systemd design, only one service can be referenced by Timer.Unit. The units are separated due to different requirements:only the "send" service needs network the "create" service must run also without network I am fine with skipping those services if requirements are not metThe "send" service should run immediately after the previous "create" service. If there's only one timer:the timer cannot activate only the send snapshot service as the create snapshot service needs also to run without network but the timer could activate only the create snapshot service if there's a way to run the send snapshot service directly afterwardsWe don't know how long the first create snapshot service takes.
Activate two "oneshot" services from one systemd timer so that they are started one after another
While as of the time of writing this is still an outstanding issue, it is possible, as per this post, to run an action every 1st and 3rd Monday of the month. This was good enough for my needs, although it is likely to leave a three-week gap at the end of the month. Mon *-*-01..07,15..21 02:00:00 This matches every Monday between the 1st and 7th as well as the 15th and 21st of any month, at 2am.
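In a timer unit this would look roughly as follows, and systemd-analyze calendar can preview the matching dates before you enable it:

[Timer]
OnCalendar=Mon *-*-01..07,15..21 02:00:00

# Preview the next elapses:
#   systemd-analyze calendar --iterations=5 'Mon *-*-01..07,15..21 02:00:00'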
It is possible to run a timer every 15 minutes like so: OnCalendar=*:0/15 Is there a way to run that timer every second Monday?
Systemd timer every second Monday?
Thanks for the suggestions and your time. I achieved this via a systemd service with Restart=always and two crontab entries, as suggested by @JdeBP: one to start it at 08:00, the other to stop it at 17:00.
I have a systemd unit file, and I know it can be restarted on failure by providing parameters like: Restart=always RestartSec=90 It will restart after 90 seconds whenever it fails. But I want it to restart only if the system time is within a given time-frame, say between 08:00 and 17:00. Is there a way to do this via systemd?
restart systemd service within a timeframe
You can create a single timer unit with multiple OnCalendar= settings, which will allow you to specify the exact interval you want. If you look at the man page for systemd.timer, the OnCalendar= section says: May be specified more than once. So use three separate settings for the start, middle and end: [Timer] OnCalendar=*-*-* 09:15..59:00 OnCalendar=*-*-* 10..16:*:00 OnCalendar=*-*-* 17:00..15:00 This should trigger the timer every minute between 9:15 and 17:15, inclusive.
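Packaged as a complete timer unit (the unit name and matching service name are assumptions), it would look roughly like this:

# worktime-minutely.timer (hypothetical name; triggers worktime-minutely.service)
[Unit]
Description=Run the service every minute between 09:15 and 17:15

[Timer]
OnCalendar=*-*-* 09:15..59:00
OnCalendar=*-*-* 10..16:*:00
OnCalendar=*-*-* 17:00..15:00

[Install]
WantedBy=timers.target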
I need a service to start every minute between 09:15 and 17:15. What's the best way to achieve this? I could make 3 timers: one to start (1) the timer (2) which runs the service every minute, and one to stop it (3), but then it wouldn't be robust against reboots in between.
Systemd timer every minute between 09:15 - 17:15 [duplicate]
For automatic timer support, you need dh_installsystemd, which is available in debhelper compatibility levels 11 and up. You should use level 12 or above. Specify this in your control file: Build-Depends: debhelper-compat (= 12) Delete the compat file, and change your rules to omit the explicit systemd sequence: %: dh $@ Debhelper compatibility level 12 is available in Debian 10 and later, and in Debian 9 through backports. If you need to use an older level, you’ll have to install the support files manually, as done for example in anacron: override_dh_auto_install: ... install -D -m 644 debian/anacron.timer debian/anacron/lib/systemd/system/anacron.timer
I'm creating a debian package which comprises of a service and some shell scripts and would like to also install a timer in the /lib/systemd/system folder so that the service will get called periodically. According to the debian helper guide https://manpages.debian.org/testing/debhelper/dh_systemd_enable.1.en.html this can be achieved by simply creating a package.timer file along with the package.service file in the debian folder and it will automatically get included in the package when building (sudo debuild -us -uc -d). When I build, only the service is included and installed, not the timer file. For info, I can add a package.socket file and this gets included but not timer or tmpfile . I hope someone can help me. For illustration, some of my package files are as follows. hello-world.service [Unit] Description=Hello world service.[Service] Type=oneshot ExecStart=/bin/echo HELLO WORLD![Install] WantedBy=default.targethello-world.timer [Unit] Description=Timer for periodic execution of hello-world service.[Timer] OnUnitActiveSec=5s OnBootSec=30s[Install] WantedBy=timers.targetcontrol file Source: hello-world Maintainer: Joe Bloggs <[emailprotected]> Section: misc Priority: optional Standards-Version: 1.0.0 Build-Depends: debhelper (>= 9), dh-systemd (>= 1.5)Package: hello-world Architecture: amd64 Depends: Description: Hello world test app.rules file #!/usr/bin/make -f %: dh $@ --with=systemdoverride_dh_auto_build: echo "Not Running dh_auto_build"override_dh_auto_install: echo "Not Running dh_auto_install"override_dh_shlibdeps: echo "Not Running dh_shlibdeps"override_dh_usrlocal: echo "Not Running dh_usrlocal"
How to include and install debian/package.timer file inside deblan package, alongside the package.service
You should run those as user service/timer... You would then not need to set the DISPLAY in the service file e.g. [Unit] Description=Change background image periodically[Service] ExecStart=/home/emobe/scripts/changebg.shshould be enough. systemd user files usually go in ~/.config/systemd/user/ so place them there ~/.config/systemd/user/bgchange.timer ~/.config/systemd/user/bgchange.servicethen run as a regular user systemctl --user daemon-reload systemctl --user enable --now bgchange.timerCheck that the timer is active, always as --user: systemctl --user list-timers
I am trying to use systemd timers to change my background wallpaper and it doesn't seem to be doing what I want. Blelow I have listed the relevant files and outputs that I have. bgchange.timer [Unit] Description=Timer for background change[Timer] OnUnitActiveSec=10sec OnActiveSec=5sec OnBootSec=1sec Persistent=true[Install] WantedBy=timers.targetbgchange.service [Unit] Description=Change background image periodically[Service] Type=oneshot Environment=DISPLAY=:0 ExecStart=/home/emobe/scripts/changebg.sh/home/emobe/scripts/changebg.sh #!/bin/bash feh --no-fehbg --bg-scale --randomize /home/emobe/Pictures/wallpapers/*bgchange.timer status ● bgchange.timer - Timer for background change Loaded: loaded (/etc/systemd/system/bgchange.timer; enabled; preset: disabled) Active: active (waiting) since Fri 2023-01-06 09:33:44 GMT; 4h 12min ago Until: Fri 2023-01-06 09:33:44 GMT; 4h 12min ago Trigger: Fri 2023-01-06 13:46:24 GMT; 4s left Triggers: ● bgchange.servicesystemctl list-timers Fri 2023-01-06 17:39:24 GMT 3h 52min left Thu 2023-01-05 23:57:47 GMT 13h ago updatedb.timer updatedb.service Sat 2023-01-07 00:00:00 GMT 10h left Fri 2023-01-06 00:00:01 GMT 13h ago logrotate.timer logrotate.service Sat 2023-01-07 00:00:00 GMT 10h left Fri 2023-01-06 00:00:01 GMT 13h ago shadow.timer shadow.service Sat 2023-01-07 09:48:44 GMT 20h left Fri 2023-01-06 09:48:44 GMT 3h 58min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service Sat 2023-01-07 09:54:52 GMT 20h left Fri 2023-01-06 00:34:15 GMT 13h ago man-db.timer man-db.service Sat 2023-01-07 15:00:00 GMT 1 day 1h left Tue 2022-12-06 22:57:11 GMT 1 month 0 days ago pamac-cleancache.timer pamac-cleancache.service Thu 2023-01-12 08:45:44 GMT 5 days left Thu 2023-01-05 23:57:47 GMT 13h ago pamac-mirrorlist.timer pamac-mirrorlist.service Thu 2023-01-12 20:26:18 GMT 6 days left Fri 2023-01-06 00:25:48 GMT 13h ago archlinux-keyring-wkd-sync.timer archlinux-keyring-wkd-sync.service - - Fri 2023-01-06 13:46:53 GMT 74ms ago bgchange.timer bgchange.service
Using systemd timer to change the background wallpaper
OK, I believe the problem is with the script.service file. According to the systemd.timer man page: DESCRIPTION Note that in case the unit to activate is already active at the time the timer elapses it is not restarted, but simply left running. There is no concept of spawning new service instances in this case. Due to this, services with RemainAfterExit= set (which stay around continuously even after the service's main process exited) are usually not suitable for activation via repetitive timers, as they will only be activated once, and then stay around forever. Remove the RemainAfterExit= line and you should be good to go.
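The service from the question would then reduce to roughly the following; only RemainAfterExit= is dropped, and <principal> stands in for the Kerberos principal redacted in the question:

[Unit]
Description=Renews Kerberos ticket every 8 hours
After=network-online.target firewalld.service
Wants=network-online.target script.timer

[Service]
Type=oneshot
ExecStartPre=/usr/bin/kdestroy
# <principal> is a placeholder for the principal redacted in the question
ExecStart=/usr/bin/kinit -R -V <principal> -k -t /etc/krb5.keytab
IOSchedulingClass=best-effort

[Install]
WantedBy=default.target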
I am fairly new using systemd timers, and I am having some issues. I am trying to schedule a script that runs daily, every 8 hours, at 6 AM, 2 PM, and 10 PM. The time starts correctly, and it shows the next scheduled time to run (which it does), but then it never seems to run the 3rd (or any other) time. What am I doing wrong? I have this in my timer: [Unit] Description=Run every 8 hours Requires=script.service[Timer] OnCalendar=*-*-* 03,11,19:00:00 Persistent=true[Install] WantedBy=timers.targetI have also tried this: [Unit] Description=Run every 8 hours Requires=script.service[Timer] OnCalendar=*-*-* 03,11,19:00:00 OnUnitActiveSec=1d Persistent=true[Install] WantedBy=timers.targetAnd this: [Unit] Description=Run every 8 hours Requires=script.service[Timer] OnCalendar=*-*-* 03:00:00 OnCalendar=*-*-* 11:00:00 OnCalendar=*-*-* 19:00:00 Persistent=true[Install] WantedBy=timers.targetService: [Unit] Description=Renews Kerberos ticket every 8 hours After=network-online.target firewalld.service Wants=network-online.target script.timer[Service] Type=oneshot RemainAfterExit=yes ExecStartPre=/usr/bin/kdestroy ExecStart=/usr/bin/kinit -R -V [emailprotected] -k -t /etc/krb5.keytab IOSchedulingClass=best-effort[Install] WantedBy=default.target '''
systemd timer every 8 hours
Your .timer unit (not the .service unit, which has one but probably shouldn't) is missing an [Install] section. You probably want to add: [Install] WantedBy=timers.target Your .service file is intended to be activated only by the timer, not directly during boot (etc.). So it shouldn't have an [Install] section (and shouldn't be systemctl enable'd).
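A sketch of the two units rearranged that way; the unit names are hypothetical stand-ins, since the real names were redacted in the question below:

# checkaudio.timer (hypothetical name) -- the [Install] section lives here
[Timer]
OnActiveSec=1s          # first trigger shortly after the timer is started
OnUnitActiveSec=1s      # then roughly every second after each service run
AccuracySec=1ms

[Install]
WantedBy=timers.target

# checkaudio.service -- no [Install] section, only ever started by the timer
[Unit]
Description=Announce every second

[Service]
Type=oneshot
ExecStart=/root/checkaudio.sh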
I wrote a very simple script (checkaudio.sh) that publishes a message from a file on a mqtt topic. I would like the script to run continuously (I would be happy even with every second). I first tried with cron, which is technically possible but "dirty" as a solution (multiple cron jobs with a 1 second delay each). I have the tried with systemd and its timer function. I'm not very proficient with systemd, and this is what I came up with: /etc/systemd/system/[emailprotected] contents: [Unit] Description=Announce every second[Install] WantedBy=default.target[Service] Type=oneshot ExecStart=/root/checkaudio.sh/etc/systemd/system/[emailprotected] contents: [Timer] OnUnitActiveSec=1s AccuracySec=1ms [emailprotected]I activated the two above through systemctl enable. Everything was running smoothly until I rebooted the system and I could not enable /etc/systemd/system/[emailprotected] anymore. I am getting the following error: The unit files have no installation config (WantedBy, RequiredBy, Also, Alias settings in the [Install] section, and DefaultInstance for template units). This means they are not meant to be enabled using systemctl. Possible reasons for having this kind of units are: 1) A unit may be statically enabled by being symlinked from another unit's .wants/ or .requires/ directory. 2) A unit's purpose may be to act as a helper for some other unit which has a requirement dependency on it. 3) A unit may be started when needed via activation (socket, path, timer, D-Bus, udev, scripted systemctl call, ...). 4) In case of template units, the unit is meant to be enabled with some instance name specified.What am I doing wrong? Is there a better way to achieve my initial objective of running the script continuously?
How to continuously run a script with systemd
Add an extra directive OnActiveSec=0s to the [Timer] stanza. The systemd maintainer explains how this works.
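A sketch of a [Timer] section combining the two directives (the interval is just an example value):

[Timer]
OnActiveSec=0s          # fire once as soon as the timer unit itself is started
OnUnitActiveSec=300s    # then fire again 300s after each activation of the service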
I use the OnUnitActiveSec property, which runs the service every N seconds using the last activation time of the service as the starting point, but I want it to always run once when the timer is started and use the OnUnitActiveSec property after that. How can I force this behavior?
Start service on timer start and each N seconds after
Thanks to @AlexD for the hint. Adding the AccuracySec= option to the timer makes the service run at the right time on every run.
I've set up a systemd timer to trigger a service every minute on Ubuntu 22.04, but I'm encountering an issue where the execution time of the service significantly increases after the first run. The service simply logs some text using /bin/echo. Here are the details: Service file (testA.service): [Unit] Description=Sample Service A[Service] ExecStart=/bin/echo "Hello from Service A"Timer file (testA.timer): [Unit] Description=1min timer[Timer] OnCalendar=*-*-* *:*:00[Install] WantedBy=timers.targetObserved Behavior: The first execution of the service completes in 6 seconds. Subsequent executions take 17 seconds. System logs: Feb 16 10:47:06 test systemd[923]: Started Sample Service A. Feb 16 10:47:06 test echo[2740609]: Hello from Service A Feb 16 10:48:17 test echo[2740652]: Hello from Service A Feb 16 10:48:17 test systemd[923]: Started Sample Service A. Feb 16 10:49:17 test systemd[923]: Started Sample Service A. Feb 16 10:49:17 test echo[2740691]: Hello from Service A Feb 16 10:50:17 test systemd[923]: Started Sample Service A. Feb 16 10:50:17 test echo[2740731]: Hello from Service AAnother test even slower - 58 seconds: Feb 19 13:43:58 test systemd[917]: Started Sample Service A. Feb 19 13:43:58 test echo[119031]: Hello from Service ASystem Details: Ubuntu 22.04.2 8 Processors 8GB RAM Ample disk space Monitoring with htop shows the system is not overloaded. Question: How can I diagnose and optimize the execution time of this systemd service to ensure it consistently completes quickly? Why might the execution time increase after the first run? I've checked for common issues like system load and disk space but haven't found anything that would explain this behavior.
Ubuntu systemd service with timer is slow
Unfortunately, I'm not reproducing the issue you're seeing. Judging by the commented ;OnCalendar=, you've been changing the field. Are you sure that you used systemctl daemon-reload between the edit and starting the timer? When I test it out on my system I see: $ systemctl --user cat mytime.timer # /home/stew/.config/systemd/user/mytime.timer [Unit] Description=Test timer[Timer] OnCalendar=hourly$ systemctl --user start mytime.timer $ systemctl --user status mytime.timer ● mytime.timer - Test timer Loaded: loaded (/home/stew/.config/systemd/user/mytime.timer; static) Active: active (waiting) since Tue 2020-09-01 09:49:14 CEST; 7s ago Trigger: Tue 2020-09-01 10:00:00 CEST; 10min left Triggers: ● mytime.serviceSep 01 09:49:14 stewbian systemd[1691]: Started Test timer.Then I waited 10m for the first timer to expire and got:$ journalctl --user -u mytime.timer -u mytime.service -- Logs begin at Mon 2020-07-06 04:41:08 CEST, end at Tue 2020-09-01 10:00:00 CEST. -- Sep 01 09:49:14 stewbian systemd[1691]: Started Test timer. Sep 01 10:00:00 stewbian systemd[1691]: Starting mytime.service... Sep 01 10:00:00 stewbian systemd[1691]: mytime.service: Succeeded. Sep 01 10:00:00 stewbian systemd[1691]: Finished mytime.service.$ systemctl --user status mytime.timer ● mytime.timer - Test timer Loaded: loaded (/home/stew/.config/systemd/user/mytime.timer; static) Active: active (waiting) since Tue 2020-09-01 09:49:14 CEST; 10min ago Trigger: Tue 2020-09-01 11:00:00 CEST; 59min left Triggers: ● mytime.serviceSep 01 09:49:14 stewbian systemd[1691]: Started Test timer.In this case, I used OnCalendar=hourly. The first trigger was at the start of the next hour. The second trigger is set for the start of the following hour.Since I suspect the issue is a daemon-reload, I tried to reproduce your problem by changing OnCalendar=. I found:If I use systemctl daemon-reload the change is applied If I systemctl stop then systemctl start, the change is applied, even without a daemon-reload. If I systemctl start without stopping the previous timer, the change is not applied and I get a warning about this:$ systemctl --user start mytime.timer Warning: The unit file, source configuration file or drop-ins of mytime.timer changed on disk. Run 'systemctl --user daemon-reload' to reload units.
I have myscript.service and I want this service to start every hour. So I wrote myscript.timer Description=My script timer [Timer] OnCalendar=hourly ;OnCalendar=*-*-* 0/2:00:00 [Install] WantedBy=timers.target I ran systemctl enable --now myscript.timer, then systemctl status myscript.timer and got: [user@localhost ~]$ sudo systemctl status myscript.timer ● myscript.timer - My script timer Loaded: loaded (/etc/systemd/system/myscript.timer; enabled; vendor preset: disabled) Active: active (waiting) since Tue 2020-09-01 12:10:54 +05; 4s ago Trigger: Tue 2020-09-01 16:31:29 +05; 4h 20min left Triggers: ● myscript.service Sep 01 12:10:54 localhost.localdomain systemd[1]: Started My script timer. And I can't understand why it doesn't trigger in an hour.
How systemd timer work?
By (ab?)using WatchdogSec, the service will terminate when it fails to acknowledge the watchdog within the given time. It will then restart, but execute the script first. WatchdogSec={interval} Restart=on-watchdog ExecStopPost=/script.sh ref: WatchdogSec Having a service that can be backed up while it is running would be a much nicer solution.
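A rough sketch of that arrangement, under the assumption that the main process never sends watchdog pings, so systemd deliberately kills and restarts it every interval (unit and script names are hypothetical):

[Service]
ExecStart=/usr/local/bin/mydaemon       # long-running process that never pings the watchdog
WatchdogSec=6h                          # so systemd terminates it every 6 hours...
Restart=on-watchdog                     # ...and restarts it afterwards,
ExecStopPost=/usr/local/bin/myscript.sh # running the script in between stop and restart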
I would like to set up a Timer which will stop a service, execute a script and restart the service. One of the possibilities is to use Type=oneshot ExecStartPre=/bin/systemctl stop myservice ExecStart=/usr/local/bin/myscript.sh ExecStartPost=/bin/systemctl start myserviceAnother one is to have myscript.sh handling the whole thing, including systemctl. I find it awkward, though, to use systemctl within a service declaration, when there may be systemd built-in mechanisms to interact with services. Is there a cleaner way to perform these operations?
How to stop a service before executing an ExecStart entry?
First, create a systemd service file at ~/.config/systemd/user/send-mail.service with the following contents: [Unit] Description=Sends mail that reminds me of an anniversary[Service] ; The l flag for bash creates a login shell so Mutt can access our environment variables which contain configuration ExecStart=/bin/bash -lc "echo \"$(whoami) created this message on $(date) to remind you about...\" | mutt -s \"Don't forget...\" [emailprotected]"You can test whether sending mail works by executing systemctl --user daemon-reload && systemctl --user start send-mail.serviceThis should send an email to [emailprotected]. Then, create a timer at ~/.config/systemd/user/send-mail.timer with these contents: [Unit] Description=Timer for writing mail to myself to remind me of anniversaries[Timer] ; Trigger the service yearly on September 5th OnCalendar=*-09-05 ; Send a mail immediately when the date has passed while the machine was shut down Persistent=true AccuracySec=1us ; Set the timer to every ten seconds (for testing) ; OnCalendar=*:*:0/10[Install] WantedBy=timers.targetNote that the timer's contents don't reference the service. It still works because the service and the timer have the same name apart from their suffixes .service and .timer. If you want to name timer and service differently, use Unit= in the timer's [Timer] section. Make your timer start at boot with systemctl --user daemon-reload && systemctl --user enable send-mail.timerYou should be able to see the timer now with systemctl --user list-timers --all. To start the timer, do systemctl --user start send-mail.timerTo check how systemd interprets your dates, you can use systemd-analyze calendar *:0/2 or systemd-analyze calendar quarterly. Also, check out the manual on systemd's time format.
I'd like to use systemd timers to send emails periodically to remind me of certain things like anniversaries or filing taxes. I send my regular emails with Mutt; it would be nice if I could reuse that to send the automated emails and not have to install additional software like Sendmail. I'm on Arch Linux 4.18.5, and systemctl --version says systemd 239.
Send email periodically with systemd
Eureka! The issue was that I hadn't initialized the time correctly. Yes, I followed this post, but I used a script that wrapped the /sbin/hwclock --hctosys --utc --noadjfile command, delaying it. This let the boot sequence start before the clock was set, creating a mess with the OnCalendar= entry (as described in the timer manual page). It was enough to add a service exactly like this one; there was no need to change anything else.
I have a realtime timer with Persistent=false running immediately after boot although my objective is to run it periodically! I saw it is a rather common question but none of the answers I found in StackExchange solved my issue. I followed the advices of this post and this post. Here I report a simplified example to reproduce my issue. I want the timer to be executed every 5 minutes (0,5,10,15,...55) but not after boot. I have the following two files, generated using sudo systemctl edit --force --full test.service and sudo systemctl edit --force --full test.timer # test.service [Unit] Description=test[Service] Type=simple ExecStart=echo "TEST"# test.timer [Unit] Description=test[Timer] OnCalendar=*:0/5 Persistent=false[Install] WantedBy=default.targetThen I made sure to disable the service using: sudo systemctl disable test.service and enabled the timer using: sudo systemctl enable test.timer Now, when running sudo reboot, the test.service is immediately executed. journalctl -u test looks like: -- Journal begins at Thu 2023-08-24 02:39:59 UTC, ends at Thu 2023-08-24 19:40:14 UTC. -- Aug 24 19:33:02 rbpi0 systemd[1]: Started test. Aug 24 19:33:02 rbpi0 echo[463]: TEST Aug 24 19:33:03 rbpi0 systemd[1]: test.service: Succeeded. Aug 24 19:35:14 rbpi0 systemd[1]: Started test. Aug 24 19:35:14 rbpi0 echo[911]: TEST Aug 24 19:35:14 rbpi0 systemd[1]: test.service: Succeeded. Aug 24 19:40:14 rbpi0 systemd[1]: Started test. Aug 24 19:40:14 rbpi0 echo[1352]: TEST Aug 24 19:40:14 rbpi0 systemd[1]: test.service: Succeeded.And you can clearly see the test.service has been executed at 19:33 at boot... Has someone any Idea where the mistake could be? Edit 1 I tried to change the [Install] section:Attempt 1: Removed completely the [Install] section. Result:The unit files have no installation config (WantedBy=, RequiredBy=, Also=, Alias= settings in the [Install] section, and DefaultInstance= for template units). This means they are not meant to be enabled using systemctl.Attempt 2: changed WantedBy=default.target to WantedBy=timer.target or WantedBy=multi-user.target Result: same issue.Edit 2 By reading the timer manual page I noticed I needed to be sure the system clock is synced before time-sync.target. I made sure the clock is synced but the issue remains.
How to avoid systemd periodic realtime timer running at boot
Let's focus on one question here: the duplicate runs each hour. You've used this syntax for it:

OnCalendar=*-*-* *:40:*

According to man systemd.time, the wildcard in the seconds place means it matches every second of the 40th minute of every hour. You can confirm this with the included systemd-analyze tool, which has a calendar sub-command:

systemd-analyze calendar --iterations=5 "*-*-* *:40:*"
  Normalized form: *-*-* *:40:*
    Next elapse: Thu 2020-12-17 17:40:00 EST
       (in UTC): Thu 2020-12-17 22:40:00 UTC
       From now: 15min left
       Iter. #2: Thu 2020-12-17 17:40:01 EST
       (in UTC): Thu 2020-12-17 22:40:01 UTC
       From now: 15min left
       Iter. #3: Thu 2020-12-17 17:40:02 EST
       (in UTC): Thu 2020-12-17 22:40:02 UTC
       From now: 15min left
       Iter. #4: Thu 2020-12-17 17:40:03 EST
       (in UTC): Thu 2020-12-17 22:40:03 UTC
       From now: 15min left
       Iter. #5: Thu 2020-12-17 17:40:04 EST
       (in UTC): Thu 2020-12-17 22:40:04 UTC
       From now: 15min left

So that's a problem. The second problem is that you have included Requires= in your timer. In man systemd.unit, the documentation for the Requires= directive says this:

If this unit gets activated, the units listed [In Requires=] will be activated as well.

So that could also cause the target service to be loaded a second time. Open a new question about OnBootSec= timing; that's a separate issue.
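Putting both fixes together, a corrected timer might look like the sketch below (untested; the service file stays as in the question, the seconds field is pinned to 00, the Requires= line is dropped, and the asker's OnBootSec= line is left unchanged since its timing is treated above as a separate issue):

# ~/.config/systemd/user/my_program.timer
[Unit]
Description=My Program Timer

[Timer]
Unit=my_program.service
OnCalendar=*-*-* *:40:00
OnBootSec=5 minutes

[Install]
WantedBy=timers.target

After editing, reload and re-enable it with systemctl --user daemon-reload && systemctl --user reenable my_program.timer.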
I have a nodejs gui program (it does not require user interaction) which needs to run at the 40th minute of every hour, say at 05:40 PM, 06:40 PM, 07:40 PM and so on. On a Debian server, I have enabled a systemd timer using:

systemctl --user enable my_program.timer

The problem is that the scheduled run starts at the given time and finishes successfully, but immediately after that the program starts again and finishes successfully. Say the program started at 5:40 PM and finished at 5:45 PM. After a minute or so, say at 5:46 PM, the program starts again and finishes. Then the same thing happens at the next run, at 6:40 PM, and so on every hour. Also, if I reboot the server, the program starts immediately after logging into that user account instead of waiting 5 minutes.

How do I stop it from running a second time? How do I force it to start 5 minutes after reboot?

Content of /home/user/schedule.sh
#! /usr/bin/bash
cd /home/user/my_program && DISPLAY=:0 /usr/bin/node ./index.js

Content of /home/user/.config/systemd/user/my_program.service
[Unit]
Description=My Program

[Service]
ExecStart=/home/user/schedule.sh

Content of /home/user/.config/systemd/user/my_program.timer
[Unit]
Description=My Program Timer
Requires=my_program.service

[Timer]
Unit=my_program.service
OnCalendar=*-*-* *:40:*
OnBootSec=5 minutes

[Install]
WantedBy=timers.target
Systemd timer running scheduled service 2 times in a row, instead of running only 1 time
sleep.target is a target, so you name it in a WantedBy setting in your service unit file, and then enable the service. Further reading"Sleep hooks". Power management. Arch wiki.
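As a concrete illustration, a minimal sketch based on the Arch wiki's suspend-hook pattern (not tested here; the unit name, user name, DISPLAY value and i3lock path are assumptions to adjust) could be a system unit /etc/systemd/system/lock-on-sleep.service:

[Unit]
Description=Lock the screen before suspending
Before=sleep.target

[Service]
User=youruser
Environment=DISPLAY=:0
ExecStart=/usr/bin/i3lock
# i3lock forks into the background by default
Type=forking

[Install]
WantedBy=sleep.target

Enable it once with systemctl enable lock-on-sleep.service; it will then be pulled in every time sleep.target is reached.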
I was trying to create a simple(ish) way to detect whether my laptop lid was closed and activate i3lock when it is. I know that this can be done through acpid or other methods, but I wanted to learn more about systemd timers anyway, so I went that way. It turns out that systemd has special units for timers, and one of them is for sleep, but I was unable to find anything online telling me how to do this, even on the usually amazing arch wiki. Does anyone know how to set up a service/timer to activate on these special units?
How to use systemd special units
TimeoutStartSec should be used. As @muru mentioned in the comment, the correct way to run a script that should exit is to use Type=oneshot instead of Type=simple. For example, if your script is normally executed in under 10 seconds, your config would be: [Service] Type=oneshot TimeoutStartSec=10 SyslogIdentifier=myservice Environment='MYVAR=myvar' User=deanresin ExecStart=/usr/bin/python3 /home/deanresin/myscriptIf the script takes more than 10 seconds to execute, it will be killed, and the service would show as failed.
I have a problem where on rare occasions my Type=simple systemd service hangs or gets caught in a loop. This causes its timer to stop scheduling the service because, as confirmed with sudo systemctl status myservice, the service is still running when it should have long exited. I haven't yet figured out the bug but it isn't critical. But in the mean time I don't want it to stop scheduling future runs. Is there a way to specify in the systemd service file a maximum run time, after which it will force stop? [Unit] Description=scripts should run and exit but occasionally hangs or infinite loop [Service] SyslogIdentifier=myservice Environment='MYVAR=myvar' User=deanresin Type=simple ExecStart=/usr/bin/python3 /home/deanresin/myscript
How can I specify a maximum service duration in systemd service?
CentOS 7 has the systemd init system. systemd has a useful feature called timers. A timer is like a service and is intended for starting services at specific times. systemd shuts the system down by calling the systemd-poweroff service, so we need to write a systemd-poweroff.timer:

$ cat /etc/systemd/system/systemd-poweroff.timer
[Unit]
Description=Poweroff every work day
# Call the necessary service
Unit=systemd-poweroff.service

[Timer]
# Power off on working days at 23:00
OnCalendar=Mon,Tue,Wed,Thu,Fri *-*-* 23:00:00

[Install]
WantedBy=timers.target

Run systemctl enable systemd-poweroff.timer and systemctl start systemd-poweroff.timer to enable and start the timer. Afterwards the timer is active:

$ systemctl list-timers
NEXT                         LEFT          LAST PASSED UNIT                         ACTIVATES
Thu 2018-04-19 19:39:36 MSK  14min left    n/a  n/a    systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Thu 2018-04-19 23:00:00 MSK  3h 34min left n/a  n/a    systemd-poweroff.timer       systemd-poweroff.service

2 timers listed.
Pass --all to see loaded but inactive timers, too.

If you want to suppress the shutdown on a particular day, you can stop the timer just as you would an ordinary systemd service:

# systemctl stop systemd-poweroff.timer
# systemctl list-timers
NEXT                         LEFT       LAST PASSED UNIT                         ACTIVATES
Thu 2018-04-19 19:39:36 MSK  12min left n/a  n/a    systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service

1 timers listed. Pass --all to see loaded but inactive timers, too.
I have multiple Red Hat & CentOS 7 servers that are used only during working hours. I am looking into using the systemd-shutdownd service to shut down each machine at 6:30 pm on workdays. Systemd appears to be a cleaner solution than cron jobs. Google shows that there is a schedule file that this service uses, but I have not been able to find out how to implement it. Also, I'd like a way to stop the auto-powerdown in case I work late on a particular day.
Use systemd-shutdownd schedule
OnBootSec= is relative to the boot time, as given by the kernel — basically, the time at which the kernel started execution. See the relevant lines in systemd. The table of directives in man systemd.timer hints at this in its description of OnStartupSec=:For system timer units this is very similar to OnBootSec= as the system service manager is generally started very early at boot.If systemd distinguishes its own startup time from the boot time, that indicates that the latter is the actual boot time (or as close as can be).
"On Boot" is ambiguous to me. Does it mean when booting starts, when booting finishes, mid boot, when timer.target is met? The documentation that I've read doesn't resolve this ambiguity.
When does the timer for 'OnBootSec' in a systemd timer unit actually start?
You can find detailed description how to specify time for timer unit in man systemd.time: Examples for valid timestamps and their normalized form:hourly → *-*-* *:00:00So I guess the value you want to put there is: *-*-* *:50:00Also:Either time or date specification may be omitted, in which case the current day and 00:00:00 is implied, respectively. If the second component is not specified, ":00" is assumed.So it should be enough to put there just *:50
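If you want to double-check the expression before enabling the timer, systemd-analyze has a calendar sub-command (available in newer systemd versions) that prints the normalized form and the next trigger time; the output below is abbreviated:

$ systemd-analyze calendar "*:50"
  Normalized form: *-*-* *:50:00
    Next elapse: ...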
I'm having a hard time figuring out the format to run every hour on the :50 minute mark. I've tried:

OnCalendar=00/0:50

But it fails with Timer unit lacks value setting. Refusing.
Systemd timer run on XX:50
5m is a time-span. OnCalendar= expects a time-stamp. According to systemd.time(7), a time-stamp is a unique point in time, while a time-span is a duration. If you want to keep OnCalendar=, then use a time-stamp like:

minutely, hourly, daily
2012-11-23 11:12:13
Mon,Fri *-01/2-01,03 *:30:45

If you do want to use a duration, then try OnActiveSec= or OnBootSec=, which take a duration and add it to the time the unit became activated or the system booted. See systemd.timer(5) for more options, such as OnUnitActiveSec= (relative to when the timer was last triggered). As the manual puts it, OnCalendar= "Defines realtime (i.e. wallclock) timers with calendar event expressions", and a calendar event expression is a "unique point in time".
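For the "every 5 minutes" case this means switching to the monotonic settings; a possible rewrite of the asker's backup.timer (a sketch, not tested) would be:

[Unit]
Description=Timer unit for backup.service

[Timer]
Unit=backup.service
# first run 5 minutes after boot, then 5 minutes after each activation of backup.service
OnBootSec=5min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target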
I am trying to get a simple systemd service to run on a timer unit. For some reason, systemd doesn't seem to like the way I am specifying time. The script worked when I used minutely or daily as the value for OnCalendar, but now I want to set it to 5 minutes and systemd gives an error: /etc/systemd/system/backup.timer:7: Failed to parse calendar specification, ignoring: 5m backup.timer: Timer unit lacks value setting. Refusing.Thing is, according to this page, minutes should be an understood unit of time. They have 2hours as an example timespan that should work there, so I don't understand why 5m or 5minutes is somehow invalid. Here is my timer file: [Unit] Description=Timer unit for backup.service Requires=backup.service[Timer] Unit=backup.service OnCalendar=5m[Install] WantedBy=timers.targetThe other thing is, when I run systemd-analyze timespan 5minutes it does not return an error, and it seems to be able to parse the value correctly: Original: 5minutes μs: 300000000 Human: 5min
systemd: Failed to parse calendar specification, ignoring: 5m
Since your computer is presumably not running in 1998, you can’t use that; if you don’t specify the year, it works: $ systemd-analyze calendar "10-2 0/8:00" Original form: 10-2 0/8:00 Normalized form: *-10-02 00/8:00:00 Next elapse: Sat 2021-10-02 00:00:00 CEST (in UTC): Fri 2021-10-01 22:00:00 UTC From now: 8 months 18 days left(using my local time).
I want a timer for a specific day (a specific month and day, every year) that repeats every 8 hours on that day (a birthday reminder). I have tested several variants, including:

[Unit]
Description=Tom Birthday
Requires=Tom_Birthday.service

[Timer]
Unit=Tom_Birthday.service
OnCalendar=*-10-2 00/8:00

[Install]
WantedBy=timers.target

None of them work.
systemd timer every X hours on a specific day
CacheDirectoryMode=644

This allows reading the directory listing, but not interacting with a file within that directory: the eXecute bit is required to traverse the path further and access files inside the directory. This also makes the write access for user monitor useless. Change this parameter to:

CacheDirectoryMode=755

which is the default (i.e. you can simply remove this parameter instead). This now allows user monitor to access files within this directory, for reading or for writing. The behavior is linked from systemd's documentation (RuntimeDirectoryMode=, StateDirectoryMode=, CacheDirectoryMode=, LogsDirectoryMode=, ConfigurationDirectoryMode=) to the manual for path_resolution(7), which includes all the details about basic Unix access, especially in Step 2: walk along the path and in the Permission paragraphs:

If the process does not have search permission on the current lookup directory, an EACCES error is returned ("Permission denied").

Of the three bits used, the first bit determines read permission, the second write permission, and the last execute permission in case of ordinary files, or search permission in case of directories.
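In other words, the fix is a one-line change in the [Service] section of file-updater.service (a sketch; everything else stays as in the question):

[Service]
DynamicUser=yes
CacheDirectory=monitor
# 755 is the default, so the line below could simply be removed instead
CacheDirectoryMode=755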
I have a Golang binary that runs every 5 mins. It is supposed to create & update a text file which needs to be write-restricted. To run this binary I created a systemd service and a systemd timer unit. The systemd service uses a DynamicUser. To achieve the access restriction I use the CacheDirectory directive in systemd, so that only the DynamicUser can write that file and it only exists as long as the user exists. I also set CacheDirectoryMode=644 so that only the owner has write permission. When the systemd service runs, it fails with

failed to read output file: lstat /var/cache/monitor/output_file.txt: permission denied

Question: Although the service unit will create a dynamic user & run an executable that creates/updates/reads the file, why does that executable itself get Permission Denied when trying to read the file when the systemd service runs?

file-monitor.go, compiled to produce the /usr/local/bin/file-monitor binary:

package main

import (
    "fmt"
    "os"
)

func foo() error {
    var outputFile = os.Getenv("CACHE_DIRECTORY") + "/output_file.txt"
    outputFileBytes, err := os.ReadFile(outputFile)
    if err != nil {
        return fmt.Errorf("failed to read output file %s: %v\n", outputFile, err)
    }
    _ = outputFileBytes // contents not used in this minimal reproduction
    return nil
}

func main() {
    if err := foo(); err != nil {
        fmt.Println(err)
    }
}

file-updater.service
[Unit]
Description="description"
After=file-updater.service

[Service]
DynamicUser=yes
User=monitor
Group=monitor

CacheDirectory=monitor
CacheDirectoryMode=644

ExecStart=/usr/local/bin/file-monitor <arg1>

Type=oneshot

[Install]
WantedBy=multi-user.target
Systemd executable failed to read file from CacheDirectory with Permission Denied
FWIW, I'm not running any desktop environment, just X and a window manager. I'm not sure if that effects how graphical-session.target is triggered.It does – the .target needs to be explicitly started by your ~/.xinitrc (or by your WM's "autostart"). graphical-session.target is not started automatically by Xorg for every GUI session, only pulled in as a dependency in specific situations (such as by gnome-session.target, as GNOME primarily uses systemd for session management).
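One simple way to do that with a plain window manager (a sketch based on the Arch wiki's systemd/User notes; the window manager name is an assumption) is to start the target yourself from ~/.xinitrc before exec'ing the WM:

# ~/.xinitrc
systemctl --user import-environment DISPLAY XAUTHORITY
systemctl --user start graphical-session.target
exec i3

Note that with this naive approach graphical-session.target is not stopped automatically when the X session ends; the Arch wiki describes a fuller setup with a dedicated session target bound to it.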
I currently have this timer: [Unit] Description=Schedule wallpaper rotation[Timer] OnCalendar=*-*-* *:00:00 Persistent=true[Install] WantedBy=graphical-session.targetWhich runs this service: [Unit] Description=Rotate wallpapers[Service] Type=oneshot ExecStart=%h/bin/wpman %h/docs/media/wallpaper/arkadyWhich runs this script: #!/bin/bashTARGET="${1}" CURRENT= NEXT= REST= LISTFILE="${HOME}/.wallpaper-list" TARGFILE="${HOME}/.wallpaper-target" WALLFILE="${HOME}/.wallpaper"if [[ ! -d "${TARGET}" ]]; then echo "Invalid target: '${TARGET}'" exit 1 fi TARGET="$(realpath "${TARGET}")" [[ -f "${TARGFILE}" ]] && CURRENT="$(cat "${TARGFILE}")" if [[ -f "${LISTFILE}" ]]; then NEXT="$(head -n 1 "${LISTFILE}")" REST="$(tail -n +2 "${LISTFILE}")" fimklist() { find "${TARGET}" -mindepth 1 -maxdepth 1 -type f | sort -R > "${LISTFILE}" echo "${TARGET}" > "${TARGFILE}" NEXT="$(head -n 1 "${LISTFILE}")" REST="$(tail -n +2 "${LISTFILE}")" }set-wallpaper() { feh --bg-fill "${NEXT}" echo "${REST}" > "${LISTFILE}" cp "${NEXT}" "${WALLFILE}" }if [[ -z "${CURRENT}" ]] || ([[ -n "${CURRENT}" ]] && [[ "${CURRENT}" != "${TARGET}" ]]) || [[ ! -f "${LISTFILE}" ]] || [[ -z "${NEXT}" ]]; then mklist fiset-wallpaperBut it doesn't start. I thought about just starting it from timers.target and checking in my script if $DISPLAY is empty and exiting if so, but I'm not sure if $DISPLAY will be available to the script if started this way. FWIW, I'm not running any desktop environment, just X and a window manager. I'm not sure if that effects how graphical-session.target is triggered. Is there a way to get this to work how I want? Maybe a systemd timer wasn't the best approach.
How do you create a systemd user timer that will start only after X has started?
It's in man systemd.timer:

AccuracySec= Specify the accuracy the timer shall elapse with. Defaults to 1min. The timer is scheduled to elapse within a time window starting with the time specified in OnCalendar=, OnActiveSec=, OnBootSec=, OnStartupSec=, OnUnitActiveSec= or OnUnitInactiveSec= and ending the time configured with AccuracySec= later. Within this time window, the expiry time will be placed at a host-specific, randomized, but stable position that is synchronized between all local timer units. This is done in order to optimize power consumption to suppress unnecessary CPU wake-ups. To get best accuracy, set this option to 1us. Note that the timer is still subject to the timer slack configured via systemd-system.conf(5)'s TimerSlackNSec= setting. See prctl(2) for details. To optimize power consumption, make sure to set this value as high as possible and as low as necessary. ...

It doesn't define 0us, so that behaviour isn't defined. I don't think you can assume it means "best accuracy". 1us does mean "best accuracy" according to the man page.
Can anyone tell me where the documentation for "AccuracySec=0" of systemd timers is? The closest documentation I could find is for "AccuracySec=1us". I know the meaning of AccuracySec but just want to be sure that AccuracySec=0 means most accurate.
systemd-timer: undocumented "AccuracySec=0"
man systemd.timer says:

Unit= The unit to activate when this timer elapses. The argument is a unit name, whose suffix is not ".timer". If not specified, this value defaults to a service that has the same name as the timer unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the timer unit are named identically, except for the suffix.

man systemd.path similarly says:

Unit= The unit to activate when any of the configured paths changes. The argument is a unit name, whose suffix is not ".path". If not specified, this value defaults to a service that has the same name as the path unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the path unit are named identical, except for the suffix.

Neither of these suggests that you can have multiple Unit= lines or multiple arguments per Unit= line. Even if you try it and find it works, it's not guaranteed that it will work in future releases of systemd because it would be undocumented behaviour. Therefore it's safest to create a single *.path/*.timer for each unit you need to trigger, even if it means identical *.path or *.timer units. There are probably already several *.timer units with OnCalendar=daily on your system. Honestly, it would be a little scary to trigger two independent services if I touch a single path. It invites race conditions. You could consider changing your service to use multiple ExecStartPre= or ExecStartPost= to sequence the operations, ensuring they always happen in a deterministic order.
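If you do go the "one path unit per mount" route, the two units can be identical apart from the Unit= line. A sketch with hypothetical names (the log path and mount points are placeholders, and each mount unit name must be the systemd-escaped mount path, e.g. systemd-escape -p --suffix=mount /mnt/a):

# vm-mnt-a.path
[Path]
PathModified=/var/log/libvirt/qemu/myvm.log
Unit=mnt-a.mount

[Install]
WantedBy=paths.target

# vm-mnt-b.path  (identical apart from Unit=)
[Path]
PathModified=/var/log/libvirt/qemu/myvm.log
Unit=mnt-b.mount

[Install]
WantedBy=paths.target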
Can multiple instances of Unit= exist in a systemd.path or systemd.timer unit? Or, must one instead specify multiple instances of the path or timer unit, each with a single instance of Unit=? I haven't been able to find or derive any guidance elsewhere. The former obviously is easier. The specific application is to have a path unit activate two mount units. In particular, the path unit monitors a virtual machine's log file, which is quiet until the VM runs. The mounts are of shares on the virtual machine and are defined in the host's fstab entries, each of which uses the x-systemd.requires= mount option to specify the path unit, so that the mounts don't occur until the virtual machine is running. This works well with a single share. So, the more specific questions are (a) whether the path unit knows to simply propagate the mount units as instructed, leaving the mount units to mount the shares, or gets confused and can only propagate a single mount unit; or (b) whether calling the same path unit twice in fstab creates conflicts or errors when the path unit has many Unit= directives (i.e., by re-creating all the mount points specified) or simply is an expression of a dependency. Many thanks.
Multiple Instances of Unit= in Path or Timer Unit?
I'm gradually starting to understand systemd.unit(5). The Before= and Requires= options only refer to [Unit]-level settings and have no bearing on any configuration at the [Timer] level. So, apart from putting a systemd.time(7)-conforming value into the [Timer] section, this cannot be done without assistance from the shell.
(x-mas.service) [Unit] Description=Celebrate X-Mas[Service] Type=simple ExecStart=/usr/sbin/x-mas-day[Install] WantedBy=multi-user.target(x-mas.timer) [Unit] Description=Add "X-Mas" to the calendar[Timer] OnCalendar=*-12-25 00:00:00 Unit=x-mas.service[Install] WantedBy=timers.target(buy-presents.service) [Unit] Description=Get your wallet out Requires=x-mas.service Before=x-mas.service[Service] Type=simple ExecStart=open-amazon-dot-com.sh[Install] WantedBy=multi-user.target(buy-presents.timer) [Unit] Description=Buy presents Before=x-mas.timer Requires=x-mas.timer[Timer] OnActiveSec=1 AccuracySec=no-pressure? RandomizedDelaySec=true?[Install] #WantedBy=timers.target RequiredBy=x-mas.timerClearly there is still plenty of time from today to buy presents so optimized flexibility in scheduling before the 'x-mas deadline' is desired, but it was not immediately obvious to me on first reading systemd.timer(5) how to even relate timers to other timers. Can this be done with systemd units only?
Is there a way to schedule a lazy timer relative to another timer?
You can mark a.service as RequiredBy b.service. Make a.service look like: [Unit] Before=b.service[Service] Type=exec ExecStart=...[Install] RequiredBy=b.serviceAnd then: systemctl enable a.serviceNow whenever b.service starts -- either via a timer or via systemctl start -- your new a.service will start first.
On my system I have b.service activated by b.timer. I want another service (a.service) to start before b.service. I can't change b.service or b.timer because they are not mine. I've put Before=b.service in a.service, but the timer starts b.service without starting a.service.
systemd unit "Before=" with timer
After several tests, Persistent= (boolean) didn't do the job either. Until now I had completely forgotten about /etc/crontab and had always used crontab -e, which is exactly how I got the problems with anacron that led to the systemd configuration you see above and therefore to this thread in the first place. But adding the job to /etc/crontab does the job without anacron, i.e. without "catching up". 😁 All details for a proper setup of /etc/crontab can be found in the crontab manual.

The real issue: a difference between the hardware clock and the system clock. Solution: setting the Linux system time to UTC via timedatectl set-local-rtc 0. In detail: the system dual-boots Linux and Windows. When I originally set it up, it was Arch Linux and Windows 7; now it is Manjaro Linux and Windows 10. While Arch was in charge, the hardware clock was set to UTC within Windows, so it matched Linux. But somehow, when I installed Manjaro, the Linux system clock was set to local time (TZ). So while Windows kept its registry key and used UTC, Linux started using local time, and on every boot and resume the time jump caused by NTP triggered the service. After some testing I was able to verify this. One still needs /etc/crontab to avoid the catch-up, but /etc/crontab alone is not enough when time jumps trigger: those also execute the /etc/crontab entries.
I wanted a "more stupid" version of crontab: run only at given times, don't catch up after suspension. I.e. when a service should have been triggered by crontab but the machine was suspended, it was triggered right after resuming. That's what I didn't want. I solved this by writing a systemd.timer unit (instead of crontab) and, accordingly, a -sleep.service, which deactivates the timer unit when the machine is about to suspend and reactivates it when the system resumes. Since my last update last weekend it suddenly behaves like crontab did: the timer unit started its target just when the timer unit itself got started, even though it wasn't at a time given in the timer unit. I checked the logs and the -sleep.service unit did its job and deactivated the timer unit. (Also the timer unit is showing its de- and reactivation.) However, as I said: I don't want the timer unit to "catch up". I want it ONLY triggering its unit at the given times, never else! Thank you very much! Regards Dom

# horcrux-sleep.service
[Unit]
Description=horcrux sleep hook
Before=sleep.target
Before=shutdown.target
StopWhenUnneeded=yes
RefuseManualStart=yes
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/systemctl stop horcrux.timer
ExecStop=/usr/bin/systemctl start horcrux.timer
[Install]
WantedBy=sleep.target
Also=horcrux.timer

# horcrux.timer
Jun 27 06:51:15 citadel systemd[1]: Stopped Schedule for backup.
[Unit]
Description=Schedule
[Timer]
OnCalendar=Mon *-*-* 20..21:00:00
OnCalendar=Tue..Thu *-*-* 17..21:00:00
OnCalendar=Fri *-*-* 17..23:00:00
OnCalendar=Sat *-*-* *:00:00
OnCalendar=Sun *-*-* 00..21:00:00
## Late shift
#OnCalendar=Mon..Wed,Fri *-*-* 7..11:00:00
## Early shift on Thursday
#OnCalendar=Thu *-*-* 13..16:00:00
## Early shift, daily
#OnCalendar=Mon..Fri *-*-* 7..11:00:00
## Days off
#OnCalendar=Mon..Fri 9..11,14..16,0..5:00:00
## Vacation / sick leave
#OnCalendar=*-*-* *:00:00
Persistent=1
AccuracySec=1sec
[Install]
WantedBy=timers.target
Also=horcrux-sleep.service
systemd.time - OnCalendar= unwanted running after suspend
Your system is running a systemd version which is too old compared to the version of systemctl. (This D-Bus method was added in systemd v253.) Use systemctl [--user] daemon-reexec to upgrade the running systemd version.
I have a simple systemd service and timer under ~/.config/systemd/user for building nightly images of my favorite program: # ~/.config/systemd/user/kicad-build.service [Unit] Description=KiCAD nightly builder[Service] Type=simple StandardOutput=null ExecStart=/bin/bash /home/jan/kicad-nightly-builder/build.sh# ~/.config/systemd/user/kicad-build.timer [Unit] Description=KiCAD nightly build timer[Timer] OnCalendar=daily Persistent=true RandomizedDelaySec=7200[Install] WantedBy=timers.targetNow I wanted to disable the timer, as I don't need the nightly builds anymore: [jan@memory-alpha user]$ systemctl --user stop kicad-build.timer[jan@memory-alpha user]$ systemctl --user disable kicad-build.timer Failed to disable unit: Unknown method DisableUnitFilesWithFlagsAndInstallInfo or interface org.freedesktop.systemd1.Manager.What is going on here? Why can systemctl not find the appropriate method for disabling the unit? The timer is still enabled: [jan@memory-alpha user]$ systemctl --user status kicad-build.timer ○ kicad-build.timer - KiCAD nightly build timer Loaded: loaded (/home/jan/.config/systemd/user/kicad-build.timer; enabled; preset: enabled) Active: inactive (dead) since Sat 2023-03-04 09:54:42 CET; 11min ago Duration: 1month 2w 5d 13h 3min 15.725s Trigger: n/a Triggers: ● kicad-build.serviceJan 13 10:21:26 memory-alpha systemd[901]: Started KiCAD nightly build timer. Mar 04 09:54:42 memory-alpha systemd[901]: Stopped KiCAD nightly build timer.Further testing: [jan@memory-alpha user]$ gdbus introspect --system --dest org.freedesktop.systemd1 --object-path /org/freedesktop/systemd1 | grep DisableUnit DisableUnitFiles(in as files, DisableUnitFilesWithFlags(in as files, DisableUnitFilesWithFlagsAndInstallInfo(in as files,[jan@memory-alpha user]$ gdbus introspect --session --dest org.freedesktop.systemd1 --object-path /org/freedesktop/systemd1 | grep DisableUnit DisableUnitFiles(in as files, DisableUnitFilesWithFlags(in as files,So apparently my system session has the appropriate method though my user session does not. Unfortunately I don't know enough about D-Bus to further debug this issue, any ideas? [jan@memory-alpha user]$ uname -a Linux memory-alpha 6.1.4-arch1-1 #1 SMP PREEMPT_DYNAMIC Sat, 07 Jan 2023 15:10:07 +0000 x86_64 GNU/Linux[jan@memory-alpha user]$ systemctl --version systemd 253 (253-1-arch) +PAM +AUDIT -SELINUX -APPARMOR -IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified
Can't disable systemd user timer, Unknown method DisableUnitFilesWithFlagsAndInstallInfo
Since the time I asked this question, I've read more and found out that my question was based on a misconception on my side: I thought that both systemd's user and system instances rely on the same set of targets. According to THIS page, each systemd instance uses its own set of targets, so it's clear that user timers do not activate before the user instance of systemd is started, and the timer will correctly count the time from the start of systemd's user instance.

The system manager starts the [email protected] unit for each user, which launches a separate unprivileged instance of systemd for each user — the user manager. Similarly to the system manager, the user manager starts units which are pulled in by default.target. For non-graphical sessions, default.target is used. Whenever the user logs into a graphical session, the login manager will start the graphical-session.target target that is used to pull in units required for the graphical session. A number of targets (shown on the right side) are started when specific hardware is available to the user.

Since this is the case, I believe timers.target is the more appropriate of the two, but both should work.
My .timer file located in ~/.config/systemd/user doesn't show in output of systemctl --user list-timers --all command unless i enable it. Is it normal for this command to not show disabled .timers alongside enabled ones? I cannot enable the .timer without an [Install] section because of The unit files have no installation config error. According to freedesktop.org documentation:Timer units will automatically have a dependency of type Before= on timers.targetDoes this mean that i do not have to enable my .timer for it in order to work? If i do need to enable my .timer: I believe that default.target is what software which is to be executed after successful user login is WantedBy. I also believe that the user systemd instance is started by pam_systemd, which i believe happens before default.target. So it seems to me that if i use default.target the timer will be activated after login. If i use OnStartupSec in this case, will it correctly count the time from the startup of the systemd user instance? On the other hand, if i use timers.target, since this is a user timer, will it be activated before login and start counting the seconds from its activation time, or it will just register and start counting time only after systemd --user is started?
Should i use default.target or timers.target value for WantedBy for a systemd user timer? [duplicate]
It seems not to work: when the computer is turned off or has no internet connection, it will not catch up, as the message still says:

$ systemctl --no-pager status mintupdate-automation-upgrade.timer
...
Trigger: Tue 2021-02-02 00:32:40 CET; 13h left

So the update process (at least using this method, maybe in contrast to unattended-upgrades) is not really safe. If there is a way to fix it, let me know.
How does a systemd timer work when the computer is turned off at the given trigger time? There is the option "Persistent", but when exactly is the command executed? To what extent is it guaranteed that the command will be safely executed, e.g. that no more than a given amount of time passes between two executions?

status:
$ systemctl status mintupdate-automation-upgrade.timer
● mintupdate-automation-upgrade.timer - Update Manager automatic upgrades
Loaded: loaded (/lib/systemd/system/mintupdate-automation-upgrade.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Fri 2021-01-22 20:20:24 CET; 4 days ago
Trigger: Thu 2021-01-28 00:44:21 CET; 12h left
Triggers: ● mintupdate-automation-upgrade.service

configuration files:
$ systemctl cat mintupdate-automation-upgrade.*
# /lib/systemd/system/mintupdate-automation-upgrade.timer
[Unit]
Description=Update Manager automatic upgrades

[Timer]
OnCalendar=daily
OnStartupSec=60m
RandomizedDelaySec=60m
Persistent=true

[Install]
WantedBy=timers.target

# /lib/systemd/system/mintupdate-automation-upgrade.service
[Unit]
Description=Update Manager automatic upgrades
After=network-online.target

[Service]
Type=oneshot
CPUQuota=50%
CPUWeight=20
IOWeight=20
ExecStart=/usr/lib/linuxmint/mintUpdate/automatic_upgrades.py

[Install]
WantedBy=multi-user.target

The timer has the Persistent flag, but the service (which is triggered by the timer) does not:
$ systemctl show mintupdate-automation-upgrade.timer --property=Persistent
Persistent=yes
systemctl Persistent timer and service, when computer turned off
according to systemd.timer:Multiple directives may be combined of the same and of different types, in which case the timer unit will trigger whenever any of the specified timer expressions elapse. For example, by combining OnBootSec= and OnUnitActiveSec=, it is possible to define a timer that elapses in regular intervals and activates a specific service each time. Moreover, both monotonic time expressions and OnCalendar= calendar expressions may be combined in the same timer unit.The "any" specification suggests that it's an "OR" relationship.
I want to create a timer that fires, completes execution, waits for 30 seconds and fires again, but only during night hours. So far I have this:

[Timer]
OnUnitInactiveSec=30s
OnCalendar= * - * - * 23,24,00,01,02,03,04,05,06,07:*

But I don't know if the two conditions act as an "and" or as an "or"; in other words, I don't know whether one condition being met will suffice to fire the timer or whether both would be required (which is what I want). I couldn't find that detail in the help pages, and the examples I found on the internet use only one type of these conditions.
Mixing conditions in Linux timer
For the record: OnCalendar=*-*-* *:0/5:* is simply wrong. OnCalendar=*-*-* *:0/5:00 does stop the multiple executions.
I want to start a command (unison) every 5 min as a systemd.service via a systemd.timer unit. The '.service' file alone runs fine. However when it's started by the timer unit, it runs multiple times and stops with these errors: Start request repeated too quickly. and Failed with result 'start-limit-hit'. But why? I start the timer service like this: systemctl --user start service.timer. The files are located in: $HOME/.config/systemd/user/. sync.service [Unit] Description=Sync Service[Service] Type=oneshot ExecStart=/bin/zsh -l -c "unison -batch %u" ExecStartPost=/bin/zsh -l -c 'dunstify "sync ~"'[Install] WantedBy=graphical.targetsync.timer [Unit] Description=Timer for Sync Service[Timer] OnCalendar=*-*-* *:0/5:* AccuracySec=5s[Install] WantedBy=timers.targetThe unison command syncs over the network into a server via ssh with a password proteceted keyfile. A ssh-agent instance is running by the user. That's why i have to use a login shell: zsh -l -c "...".
Systemd Service/Timer -- Oneshot service w/ timer executes multiple times and failed w/ 'start-limit-hit'
This is documented in systemd.exec:EnvironmentFile= [...] The argument passed should be an absolute filename or wildcard expression, optionally prefixed with "-", which indicates that if the file does not exist, it will not be read and no error or warning message is logged.And in systemd.service:ExecStart= … For each of the specified commands, the first argument must be an absolute path to an executable. Optionally, this filename may be prefixed with a number of special characters: Table 1. Special executable prefixes … ExecStartPre=, ExecStartPost= … If any of those commands (not prefixed with -) fail, the rest are not executed and the unit is considered failed.(To find the most complete documentation for a systemd directive, look it up in systemd.directives.)
On my Archlinux system, the /usr/lib/systemd/system/mdmonitor.service file contains these lines: [Service] Environment= MDADM_MONITOR_ARGS=--scan EnvironmentFile=-/run/sysconfig/mdadm ExecStartPre=-/usr/lib/systemd/scripts/mdadm_env.sh ExecStart=/sbin/mdadm --monitor $MDADM_MONITOR_ARGSI suspect (confirmed by some googling) that the =- means that the service should not fail if the specified files are absent. However I failed to find that behaviour in the manpage of systemd unit files. Where is the official documentation for the =- assignment?
Documentation of =- (equals minus) in systemd unit files
I was running into the same issue. Googling I found this thread: https://bbs.archlinux.org/viewtopic.php?id=233035 The problem is with how the service is being started. If you specify the user/group in the unit file then you should start the service as a system service. If you want to start the service as a user service then the User/Group is not needed and can be removed from the unit config. You simply start the service when logged in as the current user passing the --user flag to systemctl.
I'm trying to set up watchman as a user service. I've followed their documentation as closely as possible. This is what I have: The socket file: [Unit] Description=Watchman socket for user %i[Socket] ListenStream=/usr/local/var/run/watchman/%i-state/sock Accept=false SocketMode=0664 SocketUser=%i SocketGroup=%i[Install] WantedBy=sockets.targetThe service file: [Unit] Description=Watchman for user %i After=remote-fs.target Conflicts=shutdown.target[Service] ExecStart=/usr/local/bin/watchman --foreground --inetd --log-level=2 ExecStop=/usr/bin/pkill -u %i -x watchman Restart=on-failure User=%i Group=%i StandardInput=socket StandardOutput=syslog SyslogIdentifier=watchman-%i[Install] WantedBy=multi-user.targetSystemd attempts to run watchman but is stuck in a restart loop. These are the errors I get: Apr 16 05:41:00 debian systemd[20894]: [emailprotected]: Failed to determine supplementary groups: Operation not permitted Apr 16 05:41:00 debian systemd[20894]: [emailprotected]: Failed at step GROUP spawning /usr/local/bin/watchman: Operation not permittedI'm 100% sure the group and user I'm enabling this service & socket exists. What am I doing wrong?
Failed to determine supplementary groups: Operation not permitted
As others have mentioned, it's a service template. In the specific case of [email protected], it's for invoking sshd only on-demand, in the style of classic inetd services. If you expect SSH connections to be rarely used, and want to absolutely minimize sshd's system resource usage (e.g. in an embedded system), you could disable the regular ssh.service and instead enable ssh.socket. The socket will then automatically start up an instance of [email protected] (which runs sshd -i) whenever an incoming connection to TCP port 22 (the standard SSH port) is detected. This will slow down the SSH login process, but will remove the need to run sshd when there are no inbound SSH connections.
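On a Debian/Ubuntu-style system the switch would look roughly like this (unit names vary by distribution, e.g. some ship sshd.service and sshd.socket instead):

sudo systemctl disable --now ssh.service
sudo systemctl enable --now ssh.socket

Following the template behaviour described above, each incoming connection should then show up as its own ssh@… instance in systemctl list-units output (exact instance naming depends on the distribution's socket definition).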
Some applications, like ssh, have a unit file that ends with @, like ssh.service and [email protected]. They contain different contents, but I cannot understand what exactly the difference in functionality or purpose is. Is it some naming convention I'm not aware of?
Why do some unit filenames end with @?
The systemd manual discusses the relationship between Before/After and Requires/Wants/Bindto in the Before=, After= section: Note that this setting is independent of and orthogonal to the requirement dependencies as configured by Requires=, Wants= or BindsTo=. It is a common pattern to include a unit name in both the After= and Requires= options,After does not imply Wants or WantedBy, nor does it conflict with those settings. If both units are triggered to start, After will affect the order, regardless of the dependency chain. If the module listed in After is not somewhere in the dependency chain, it won't be loaded, since After does not imply any dependency.
I have a question regarding making my own unit (service) file for Systemd. I've read the documentation and had some questions. After searching around, I found this very helpful answer that gives some detail about some of the questions I was having. How to write a systemd .service file running systemd-tmpfiles Although I find that answer useful, there is still one part that I do not understand. Mainly this part:Since we actually want this service to run later rather than sooner, we then specify an "After" clause. This does not actually need to be the same as the WantedBy target (it usually isn't)My understanding of After is that it is pretty straight forward. The service (or whatever you are defining) will run after the unit listed in After. Similarly, WantedBy seems pretty straight forward. You are defining that the unit you list has a Want to your service. So for a target like multi-user or graphical, your unit should be run in order for systemd to consider that target reached. Now, assuming my understanding of how these declarations work is correct so far, my question is this: Why would it even work to list the same unit in the After and WantedBy clauses? For example, defining a unit that is After multi-user.target and also WantedBy multi-user.target seems to me like it would lead to an impossible situation where the unit needs to be started after the target is reached, but also it needs to be started for the target to be considered "reached". Am I misunderstanding something?
Systemd Unit File - WantedBy and After
Functionally Wants is in the Unit section and WantedBy is in the Install. The init process systemd does not process/use the Install section at all. Instead, a symlink must be created in multi-user.target.wants. Usually, that's done by the utility systemctl which does read the Install section. In summary, WantedBy is affected by systemctl enable/systemctl disable. Logically Consider which of the services should "know" or be "aware" of the other. For example, a common use of WantedBy: [Install] WantedBy=multi-user.targetAlternatively, that could be in multi-user.target: [Unit] Wants=nginx.serviceBut that second way doesn't make sense. Logically, nginx.service knows about the system-defined multi-user.target, not the other way around. So in your example, if alpha's author is aware of beta, then alpha Wants beta. If beta's author is aware of alpha then beta is WantedBy alpha. To help you decide, you may consider which service can be installed (say, from a package manager) without the other being present. Config directories As another tool in your box, know that systemd files can also be extended with config directories: /etc/systemd/system/myservice.service.d/extension.conf This allows you to add dependencies where neither service is originally authored to know about the other. I often use this with mounts, where (for example) neither nginx nor the mount need explicit knowledge of the other, but I as the system adminstrator understand the dependency. So I create nginx.service.d/mymount.conf with Wants=mnt-my.mount.
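For completeness, that drop-in is just a tiny unit fragment; something along these lines (the mount unit name is the answer's own example), followed by a daemon-reload:

# /etc/systemd/system/nginx.service.d/mymount.conf
[Unit]
Wants=mnt-my.mount
# usually you also want the ordering, so the mount is up before nginx starts
After=mnt-my.mount

sudo systemctl daemon-reload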
As far as I can tell from the documentation of systemd, Wants= and WantedBy= perform the same function, except that the former is put in the dependent unit file and vice-versa. (That, and WantedBy= creates the unit.type.wants directory and populates it with symlinks.) From DigitalOcean: Understanding Systemd Units and Unit Files:The WantedBy= directive... allows you to specify a dependency relationship in a similar way to the Wants= directive does in the [Unit] section. The difference is that this directive is included in the ancillary unit allowing the primary unit listed to remain relatively clean. Is it really just about keeping a unit file "clean"? What is the best practice for using these two directives? That is, if service alpha "wants" service beta, when should I use Wants=beta.service in alpha.service and when should I prefer WantedBy=alpha.service in the beta.service?
Best practice for Wants= vs WantedBy= in Systemd Unit Files
The use case of this double relation is similar to a “provides” relation. systemd-timesyncd provides a time synchronisation service, so it satisfies any dependency a unit has on time-sync.target. It must start before time-sync.target because it’s necessary for any service which relies on time synchronisation, and it wants time-sync.target because any unit relying on time synchonisation should be started along with the systemd-timesyncd service. I think the misunderstanding comes from your interpretation of “wants”. The “wants” relation in systemd isn’t a dependency: systemd-timesyncd doesn’t need time-sync to function. It’s a “start along with” relation: it says that the configuring unit (systemd-timesyncd.service) wants the listed units (time-sync.target) to start along with it. See also Which service provides time-sync.target in systemd?
In this example of a systemd unit file:

# systemd-timesyncd.service
...
Before=time-sync.target sysinit.target shutdown.target
Conflicts=shutdown.target
Wants=time-sync.target

systemd-timesyncd.service should start before time-sync.target. This defines an ordering dependency. But at the same time, systemd-timesyncd.service wants time-sync.target, so time-sync.target is its requirement dependency. What is the use case for this relation, and why aren't they in some conflict with one another?
"before" and "want" for the same systemd service?
multi-user.target is appropriate for the system-bus, but you are using --user which works with the user-bus. The user-bus does not typically have multi-user.target stew ~ $ sudo systemctl status multi-user.target ● multi-user.target - Multi-User System Loaded: loaded (/lib/systemd/system/multi-user.target; static) Active: active since Fri 2021-08-27 10:09:41 CEST; 5h 19min ago Docs: man:systemd.special(7)Aug 27 10:09:41 stewbian systemd[1]: Reached target Multi-User System.stew ~ $ systemctl --user status multi-user.target Unit multi-user.target could not be found.The solution is to either use the system bus (which will start the service on boot), or use the user bus (which will start when the user logs in). If you choose to stick with the user bus then change multi-user.target to default.target (which is the main user target). If you choose to switch to the system bus, then you can still run the service as your user with User= in the [Service] section. See man systemd.special for info about these targets.
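Concretely, the two options for the Festival unit from the question might look like this (sketches; the user name and paths come from the question, the rest are assumptions to adjust):

Option A, keep it a user unit and only change the [Install] section:

[Install]
WantedBy=default.target

then re-enable it against the new target and, if it should start at boot rather than at first login, allow the user manager to linger:

systemctl --user daemon-reload
systemctl --user reenable Festival.service
loginctl enable-linger nikhil

Option B, make it a system unit instead: move the file to /etc/systemd/system/Festival.service, add User=nikhil under [Service], keep WantedBy=multi-user.target, and run sudo systemctl daemon-reload && sudo systemctl enable --now Festival.service.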
OS: Ubuntu 20.04.3

$ \cat /home/nikhil/.config/systemd/user/Festival.service
[Unit]
Description=Festival Service

[Service]
ExecStart=/usr/bin/festival --server
Restart=on-failure
RestartSec=10
SyslogIdentifier=FestivalService

[Install]
WantedBy=multi-user.target

Description: I did systemctl --user enable Festival.service and rebooted my system, but the festival server does not start. Only when I manually run systemctl --user start Festival.service does it start.

Issue: Could you please tell me why the user service does not work with multi-user.target, which is supposed to run on every boot?

Reference: Why do most systemd examples contain WantedBy=multi-user.target? - Unix & Linux Stack Exchange
Systemd service does not start (WantedBy=multi-user.target)
Yes, use ! to negate the condition: [Unit] ConditionPathExists=!/some/path/to/some/fileIt's in the manual:With ConditionPathExists= a file existence condition is checked before a unit is started. If the specified absolute path name does not exist, the condition will fail. If the absolute path name passed to ConditionPathExists= is prefixed with an exclamation mark ("!"), the test is negated, and the unit is only started if the path does not exist.
I'm attempting to make a systemd service that should only start if a certain file doesn't exist on the file system. If I use ConditionPathExists this will make the service start only when the file in question exists, which is the opposite behavior of what I want. Is there a way to invert these conditions?
Systemd - Invert Conditions in unit file?
Most common methods:Use an EnvironmentFile= that is generated on the fly, e.g. via ExecStartPre= that calls a simple script. Current systemd versions will re-read EnvironmentFile before each exec to allow this to work. (/run is a good location for the temporary file.) This is the simplest method, as the script only needs to write out the KEY="value" lines.Use a systemd generator which dynamically writes unit files at /run/systemd every time configuration is (re)loaded. The generator could be a shell script, as long as it's limited to local filesystem access. Generators can be placed in /etc/systemd/system-generators/; they will be run during every boot and during every "systemctl daemon-reload", getting the output path as $1. They will run literally before any units have started, so they must not expect network or anything else to be up. This is the most flexible method, as any unit option can be specified dynamically – not just environment but other things like WorkingDirectory=. (The generator doesn't need to create a whole unit; it can extend existing units in the usual way, by creating $1/oracle.service.d/environ.conf or similar.)Other methods:Use an instanced service unit [emailprotected] which uses %i to fill in the version. You will still need to disable "oracle@old" and enable "oracle@new", but it saves you opening the text editor. (And it also makes it easy to quickly roll back to the old version just by starting the correct unit.) I think you can have an Alias=oracle.service so that enabling an instance will automatically map it to the shorter name.Use a wrapper shell-script that sets the variables and execs the actual program. Yes, the exec is important. (Also, use SyslogIdentifier= to prevent the script's name from showing your journalctl output.) Wrapper scripts are usually discouraged, but that's typically because they do things that can trivially be done from the .service, which is not 100% the case here.Avoid systemctl set-environment as it is global – the variables will become available to all services started after that point, whether they want them or not.
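A sketch of the first method applied to the Oracle case from the question (untested; the helper path, the env-file location under /run and the drop-in target are assumptions):

# /usr/local/lib/oracle-env.sh  (make it executable)
#!/bin/sh
{
  echo "ORACLE_HOME=$(ls -d /usr/lib/oracle/*/client64 | sort -rV | head -n1)"
  echo "TNS_ADMIN=$(ls -d /usr/lib/oracle/*/client64/lib/network/admin | sort -rV | head -n1)"
} > /run/myservice-oracle-env

# drop-in for the service, e.g. via systemctl edit myservice.service
[Service]
ExecStartPre=/usr/local/lib/oracle-env.sh
EnvironmentFile=-/run/myservice-oracle-env

Because EnvironmentFile= is re-read before each exec, the ExecStart= command then sees the freshly detected ORACLE_HOME and TNS_ADMIN without the unit file ever hard-coding a client version.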
This $(ls -d...) does not work in a systemd unit file: [Service] Type=forking Environment="ORACLE_HOME=$(ls -d /usr/lib/oracle/*/client64 | sort -rV | head -n1)" Environment="TNS_ADMIN=$(ls -d /usr/lib/oracle/*/client64/lib/network/admin | sort -rV | head -n1)"I want to avoid hard-coding the Oracle client version (at the moment 19.19), to simplify updates. When I install a new Oracle client, I don't want to have to modify the systemd unit file. How can I achieve that? I use RHEL9 if that matters.
How to specify dynamic "Environment" variables in a systemd unit file?
Actually, if you try to run the systemctl edit command with a new, not-yet-existing service, it will tell you exactly what to do: $ systemctl edit happy-unicorns.service No files found for happy-unicorns.service. Run ‘systemctl edit --force --full happy-unicorns.service' to create a new unit.As such to create a new system unit (in this case a service), indeed just run: $ systemctl edit --force --full happy-unicorns.serviceIt will happily popup your text editor (which one can be specified e.g. by setting the environmental variable $EDITOR as a usual thing for Linux tools). The meaning:--full will edit the whole unit file instead of just create an override. This means, in our example it will actually use the full service file in a proper location /etc/systemd/system/.#happy-unicorns.service11738733f89dc655 instead of creating a directory and override for the service in e.g. /etc/systemd/system/happy-unicorns.service.d/.#override.conf98be493089631328. Note: You see systemctl appends some random numbers and marks the file as temporary in order to not apply the changes directly. They are still sanity-checked and the file is moved when everything is correct. --force actually needs to be specified to create a new file instead of editing an existing one. If you want to create the file at other places (i.e. user or global scope) by adding the usual switches: You can add --user for your current user --global for all users. --system for the current system is the default value.I agree it may not be so obvious to find, given it’s not a separate subcommand, but it makes quite much sense when you think of just editing any file and letting systemctl decide, whether it’s a file that already exists or not. See also the systemctl man page for more information.
Given the CLI tool systemctl edit can be used to edit existing systemd units such as services, timers, sockets, devices, mounts, automounts, targets, swap, path, slice, scope or nspawn files and you have another subcommand that can delete/reset systemd units can I also create a new systemctl file like this? Most guides online tell you to use some custom text editor and place the file somewhere (where you first need to find the correct systemd directory, yet again…). Also, you need to reload the daemon if you copy the files manually via systemctl daemon-reload. As a sysadmin I may however just quickly create a new unit in the default location, just as systemctl edit would do for editing/overriding an existing entry. I just like how it pops me directly into my favorite CLI text editor (nano or so) and I can edit my content right away. I tried systemctl add and systemctl create, but no one of these two commands exists. I did not find that information on the net nor any Stackexchange answer here…
Create a new systemd unit/service/timer/sockets with systemctl from the command line?
The answer is in the first sentence you quoted. "systemd will dynamically create device units". It says "create", not just "start". Through integration with the udev daemon, once the kernel tells udev about a new device, a .device unit will be synthesized on the fly. (And similarly, if a device disappears, the .device unit will be destroyed as well.) It is the unit itself that gets dynamically created (so it only exists in memory inside systemd), not a unit file. So it's basically the same mechanism as for dynamically created .scope units or transient .service units (and unlike .mount unit files that are created by a generator based on /etc/fstab). There's thus no unit file anywhere on disk. You can see that systemctl status some.device doesn't refer to any unit files and systemctl show some.device doesn't list any FragmentPath.
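You can see this for yourself (the commands assume a /dev/sda disk exists; adjust the device name):

$ systemctl list-units --type=device | head
$ systemctl show dev-sda.device -p FragmentPath
FragmentPath=

FragmentPath= comes back empty because there is no unit file on disk backing the device unit.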
I've been reading up on systemd and doing a little probing regarding device unit-files. According to the man pages:systemd will dynamically create device units for all kernel devices that are marked with the "systemd" udev tag (by default all block and network devices, and a few others). This may be used to define dependencies between devices and other units. To tag a udev device, use "TAG+="systemd"" in the udev rules file, see udev(7) for details.I have tried looking in /lib/systemd/system, as described in the Debian Wiki:Unit files provided by Debian are located in the /lib/systemd/system directory.But these are nowhere to be found. Yet ~$ sudo systemctl list-units --type=devicedoes display devices units (such as disks, sound card, ethernet controller etc...) I would like to know where I can find device unit files in Debian? or if these do not exist then why not and how is systemd handling these units in Debian? Any clarifications/comments/insights would be much appreciated.
Why are there no device unit files in Debian?
I finally found a solution, although I'm not sure I understand it. Somehow the version of gpg-agent started by systemd was the issue. When performing systemctl --user mask gpg-agent and then restarting the gpg-agent manually, the problem disappeared. I'll try to understand why that was the case and then write an update here.
I'm having trouble using gpg (actually, the gpg-agent) on my Debian Bullseye (Stable) system. More precisely, I use the following: gpg --version | head -n2 gpg (GnuPG) 2.2.27 libgcrypt 1.8.8uname -v #1 SMP Debian 5.10.46-4 (2021-08-03)lsb_release -a 2> /dev/null Distributor ID: Debian Description: Debian GNU/Linux 11 (bullseye) Release: 11 Codename: bullseyeI haven't rebooted my machine for approx 3 months. During that time I was able to use gpg without difficulties (encrypting, decrypting, signing, verifying, key management). I made multiple updates during the last months, none of which created any problems for me (in addition I'm using needrestart). I didn't change anything in the relevant config files (I know of, being ~/.gnupg/gpg.conf, ~/.gnupg/gpg-agent.conf, ~/.gnupg/dirmngr.conf) in the last 3 months. Today I restarted my machine and suddenly I wasn't able to use my gpg-agent for anything, where secret keys are involved. While gpg -k [1] and gpg --search-keys DEADBEEF lead to results, gpg -K as well as gpg -d /PATH/TO/ENCRYPTED/FILE hangs indefinitely. Similarly, gpg-connect-agent reloadagent /bye and gpgconf --kill gpg-agent as well as systemctl --user start gpg-agent leads to hanging. Similarly, my systemd-unit-file is not out of the ordenary: systemctl --user cat gpg-agent | grep -Ev '^#|^$' [Unit] Description = gpg-agent (password store for gpg-keys) [Service] Type = forking ExecStart = /usr/bin/gpg-agent --daemon ExecStop = /usr/bin/gpg-connect-agent /bye Restart = on-abort [Install] WantedBy = default.targetI'm aware that this problem has already been described by others (see e.g. here but the mentioned solution (pkill -9 gpg-agent) does not apply to me, since this is happening eventhough no other process containing the string gpg (read: the gpg-agent) is running. 
ps -ef | grep gpg && echo " " && gpg --verbose --debug-level guru -K MYUSERNAME 59248 59247 0 17:17 pts/1 00:00:00 grep --color=auto gpg gpg: enabled debug flags: packet mpi crypto filter iobuf memory cache memstat trust hashing ipc clock lookup extprog gpg: DBG: [not enabled in the source] start gpg: DBG: [not enabled in the source] keydb_new gpg: DBG: [not enabled in the source] keydb_search_reset gpg: DBG: keydb_search: reset (hd=0x000055c04a474cd0) gpg: DBG: [not enabled in the source] keydb_search enter gpg: DBG: keydb_search: 1 search descriptions: gpg: DBG: keydb_search 0: FIRST gpg: DBG: keydb_search: searching keybox (resource 0 of 1) gpg: DBG: keydb_search: searched keybox (resource 0 of 1) => Success gpg: DBG: [not enabled in the source] keydb_search leave (found) gpg: DBG: [not enabled in the source] keydb_get_keybock enter gpg: DBG: parse_packet(iob=1): type=6 length=51 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=12 length=12 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=13 length=19 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=12 length=12 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=2 length=150 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=12 length=6 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=2 length=150 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=12 length=6 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=14 length=56 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=2 length=126 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=12 length=6 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=14 length=51 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=2 length=245 (parse.../../g10/keydb.c.1242) gpg: DBG: parse_packet(iob=1): type=12 length=6 (parse.../../g10/keydb.c.1242) gpg: DBG: iobuf-1.0: underflow: buffer size: 924; still buffered: 0 => space for 924 bytes gpg: DBG: iobuf-1.0: close '?'<<< HERE HANGING INDEFINITELY >>>^C gpg: signal Interrupt caught ... exitingAlso my variables GPG_AGENT_INFO and GPG_TTY are set. echo -e "$GPG_AGENT_INFO\n$GPG_TTY" /run/user/1000/gnupg/S.gpg-agent:0:1 /dev/pts/1Rebooting btw. didn't change anything. Any ideas?edit1: reinstalling gpg, gpg-agent and dirmngr doesn't fix the issue. Additionally, removing the files ~/.gnupg/gpg.conf, ~/.gnupg/gpg-agent.conf and ~/.gnupg/dirmngr.conf doesn't solve it.edit2: in the meantime I upgraded from PureOS Amber to Debian Stable (Bullseye) and reinstalled a new version of gpg, gpg-agent, dirmngr and libgcrypt20 (and changed the text above to reflect the new version), but the problem is still present.[1] technically speaking gpg -k also hung, but I assume this was because I enabeled the option with-secret in my gpg.conf-file. After commenting that out this problem disappeared.
gpg-agent hanging when trying to access private keys
/run/user/1000, which of course does not exist until user #1000 logs in or explicitly starts up xyr per-user service management, is a red herring. The entire mechanism that uses it should not be there. Bug #215 for this program runs a lot deeper than you think. This service unit file is very wrong, as is the operation of the program itself. There is a lot of cargo cult programming, based upon not actually understanding the fundamentals of systemd service units.Service units are not shell script. The systemd manual does explain this. The ExecStart setting here causes the service program to be run with some extra arguments, 2>&1> and /dev/null. The service manager already ensures that only one service runs. All of this code added here is unnecessary junk. The rickety and dangerous PID file mechanism should not be used. It has no place in proper service management. The service manager also handles invoking the service in a dæmon context. A lot of the other code in main() is also unnecessary junk, based upon the dæmonization fallacy.The program should not be fork()ing at all, and the service readiness mechanism should not be specified as Type=forking. Like so many programs in the real world, this program is not speaking the forking readiness protocol in the first place. The program is already running as the superuser. User=root is unnecessary, and indeed the service should be redesigned to not need running with superuser privileges but rather run under the aegis of a dedicated unprivileged service account. The service manager is already logging the standard output and error, and doing a better job of it than this program is. This home-grown logging system just grows a log file until it fills up an entire filesystem, consuming all of the emergency space reserved for the superuser. Your log is simply the standard error, handily accessible from C++ as std::clog. In fact, all of the code from the fork() to the redirection of standard error should not be used. Service management handles all of this, from session leadership through working directory and umask to standard I/O, and does it properly. This program does not, and it should not be attempting to do any of this for itself when used under a service manager.Everything that you took from Boost was wrong.Three service units is unnecessary maintenance overhead. They only differ in their After settings, and those can just be merged into one. Graceless termination is not success. Given that there was already one problem with cleaning up files upon termination, SuccessExitStatus=SIGKILL is wrongheaded. Normal termination should be graceful, via SIGTERM, and SIGKILL should be considered abnormal. (Of course, the whole output file mechanism is a badly implemented home-grown logging mechanism that should not be used under service management, as already explained.) This is the systemd default. Destructors of the database objects and other stuff should run. Do not leave main() with exit().A dæmon program implemented properly for running under a service manager, be it from daemontools, runit, s6, nosh, systemd, or something else, is a lot shorter:… // the same until this point void pvo_upload(void) { std::clog << "Starting Daemon..." << std::endl; CommonServiceCode(); std::clog << "Stopping Daemon..." 
<< std::endl; }int main(int argc, char *argv[]) { int c; const char *config_file = ""; /* parse commandline */ while(1) { static struct option long_options[] = { { "config-file", required_argument, 0, 'c' }, { 0, 0, 0, 0 } }; int option_index = 0; c = getopt_long (argc, argv, "c:", long_options, &option_index); if (c == -1) break; switch (c) { case 'c': config_file = optarg; break; default: return EXIT_FAILURE; break; } } if (cfg.readSettings(argv[0], config_file) != Configuration::CFG_OK) return EXIT_FAILURE; std::clog << "Starting SBFspotUploadDaemon Version " << VERSION << std::endl; // Check if DB is accessible db_SQL_Base db = db_SQL_Base(); db.open(cfg.getSqlHostname(), cfg.getSqlUsername(), cfg.getSqlPassword(), cfg.getSqlDatabase()); if (!db.isopen()) { std::clog << "Unable to open database. Check configuration." << std::endl; return EXIT_FAILURE; } // Check DB Version int schema_version = 0; db.get_config(SQL_SCHEMAVERSION, schema_version); db.close(); if (schema_version < SQL_MINIMUM_SCHEMA_VERSION) { std::clog << "Upgrade your database to version " << SQL_MINIMUM_SCHEMA_VERSION << std::endl; return EXIT_FAILURE; } // Install our signal handler. // This responds to the service manager signalling the service to stop. signal(SIGTERM, handler); // Start daemon loop pvo_upload(); return EXIT_SUCCESS; }And the service unit is shorter, too:[Unit] Description=SBFspot upload daemon After=mysql.service mariadb.service network.target[Service] Type=simple TimeoutStopSec=10 ExecStart=/usr/local/bin/sbfspot.3/SBFspotUploadDaemon Restart=on-success[Install] WantedBy=multi-user.targetThe log output is viewable with systemctl status and journalctl (with the -u option and the service name, if desired). Further readingJonathan de Boyne Pollard (2016). Don't use logrotate or newsyslog in this century.. Frequently Given Answers. Jonathan de Boyne Pollard (2001). Mistakes to avoid when designing Unix dæmon programs. Frequently Given Answers. Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons. Frequently Given Answers. https://unix.stackexchange.com/a/283739/5132 https://unix.stackexchange.com/a/321716/5132
When I reboot my Raspberry Pi (Stretch) a daemon fails to start because /run/user/1000 does not exist. This is my unit file: [Unit] Description=SBFspot Upload Daemon[Service] User=pi Type=forking TimeoutStopSec=60 ExecStart=/usr/local/bin/sbfspot.3/SBFspotUploadDaemon -p /run/user/1000/sbfspotupload.pid 2>&1> /dev/null PIDFile=/run/user/1000/sbfspotupload.pid Restart=no RestartSec=15 SuccessExitStatus=SIGKILL[Install] WantedBy=default.targetWhen I configure to Restart=on-failure all goes well after a few retries, but that's not really what I want. I want the daemon to wait for /run/user/1000 gets mounted. I tried with After=run-user-1000.mount but it still fails. Is this possible or do I have to stick with the Restart=on-failure?
/run/user/$UID not mounted when daemon starts
So I logged into one user account su username No, you did not. You are not logging in. You are augmenting the privileges of your existing login session with su username. systemctl with the --user option locates your per-user Desktop Bus, managed by your per-user Desktop Bus dæmon, and via that bus communicates with your per-user instance of systemd that manages your per-user services. su is not a login mechanism. It works within an existing interactive login session. In that session, your processes have environment variables that tell them where your per-user runtime directory is (XDG_RUNTIME_DIR), where the per-user Desktop Bus is (DBUS_SESSION_BUS_ADDRESS), and indeed other things like where your X server is (DISPLAY). In particular, DBUS_SESSION_BUS_ADDRESS can implicitly reference XDG_RUNTIME_DIR or can explicitly name the same path. That path will generally be something like /run/user/1001/bus for the Desktop Bus broker's access socket (presuming that your user ID is 1001, for example). These variables are not changed by su. There's been a whole back and forth for many years over this, including the behaviours of other similar commands such as pkexec. The consequence of this is that if you su to a second user in your login session, running systemctl as that second user tries to connect to a Desktop Bus broker access socket located in a directory private to the first user. User 1002 (to pick a user ID for your second user for the sake of example) cannot access /run/user/1001 or anything within it, and even if xe had read+execute access to that directory xe cannot access /run/user/1001/bus because that only grants access to user 1001 also. Of course, this is not the right Desktop Bus broker to be talking to in the first place. You want to talk to the second user's Desktop Bus broker, and through it to the second user's per-user instance of systemd. The simple solution is, as part of the su, to set those environment variables to the ones appropriate for the second user account, pointing to the second user's Desktop Bus: DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1002/bus su username -c 'systemctl --user' I of course in such circumstances use a handy tool for setting this, userenv, which enables me not to have to type that bus address longhand: su username -c 'userenv --set-dbus systemctl --user' Further reading: https://unix.stackexchange.com/a/407863/5132 https://unix.stackexchange.com/a/423648/5132 Jonathan de Boyne Pollard (2014). Don't abuse su for dropping user privileges. Frequently Given Answers. Jonathan de Boyne Pollard. userenv. nosh toolset manual pages. Softwares. https://unix.stackexchange.com/a/427917/5132
On my arch server, I was setting up users restricted to their home directories. I ran: useradd -m -s /bin/bash username and passwd username I've read this wiki article... I figured I should use systemd user services to make each user run a node server on startup. So i logged into one user account su username and created a file ~/.config/systemd/user/serve.service containing: [Unit] Description=One of the servers[Service] ExecStart=/usr/bin/node /home/username/server.js[Install] WantedBy=default.targetthen I ran systemctl --user enable serve.service which responded with Failed to connect to bus: Permission denied As far as I understand I should run systemctl --user ... command logged in as a user and not as root. So what did I miss in this configuration?
How do I setup user autostart and properly configure systemd user services?
This type of syntax is not directly supported, as explained on the man page for system.service:This syntax is inspired by shell syntax, but only the meta-characters and expansions described in the following paragraphs are understood, and the expansion of variables is different. Specifically, redirection using "<", "<<", ">", and ">>", pipes using "|", running programs in the background using "&", and other elements of shell syntax are not supported.The 'following paragraphs' mainly include basic environment variable substitution, path searching, and some C escapes. In general, you can get around these restrictions by writing your own shell script which sets up the process, and then specifying that script as the ExecStart option on the systemd service. In your specific case, you should be able to get the date substitution to work by passing it explicitly to a shell: ExecStart=/bin/bash -c '/home/tcs/minetest/bin/minetestserver --worldname world --logfile /home/tcs/logs/debug_$(date +%Y_%m_%d).txt'
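If you prefer to keep the escaping from your original unit (arguably the safer form, since % is a specifier character in unit files and $ triggers systemd's own variable expansion), the line can be written as below; systemd turns %% into % and $$ into $ before bash ever sees them:

ExecStart=/bin/bash -c '/home/tcs/minetest/bin/minetestserver --worldname world --logfile /home/tcs/logs/debug_$$(date +%%Y_%%m_%%d).txt'

You can preview what the shell part will expand to, independently of systemd:

$ /bin/bash -c 'echo /home/tcs/logs/debug_$(date +%Y_%m_%d).txt'
# prints the full log file name with today's date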
I try use date output as part of log file name in systemd unit. Here example: [Unit] Description=TCS minetest server unit[Service] Type=simple ExecStart=/home/tcs/minetest/bin/minetestserver --worldname world --logfile /home/tcs/logs/debug_$$(date +%%Y_%%m_%%d).txt ExecReload=/bin/kill -HUP $MAINPID User=tcs[Install] WantedBy=multi-user.targetBut I get the log file: ls /home/tcs/logs/ 'debug_$(date'How I can use current date for log filename?
Use dynamic date in systemd unit
The paths systemd looks up for unit files is read from UnitPath and can be queried with systemctl. # systemctl --no-pager --property=UnitPath show | tr ' ' '\n' UnitPath=/etc/systemd/system.control /run/systemd/system.control /run/systemd/transient /etc/systemd/system /run/systemd/system /run/systemd/generator /lib/systemd/system /run/systemd/generator.lateAs you can see, this does not include /usr/lib/systemd/system, which is the output on a Ubuntu 18.04 system. The UnitPath is generated during runtime and only directories that actually exist are shown here. # mkdir -p /usr/lib/systemd/system # systemctl daemon-reload # systemctl --no-pager --property=UnitPath show | tr ' ' '\n' | grep "/usr/lib/systemd/system" /usr/lib/systemd/systemSo creating the directory was enough to add /usr/lib/systemd/system to UnitPath, which was likely done by installing Elasticsearch.Which directories are taken into account when constructing UnitPath can be queried with pkg-config and the variables systemdsystemunitdir and systemdsystemunitpath. # pkg-config systemd --variable=systemdsystemunitdir /lib/systemd/system# pkg-config systemd --variable=systemdsystemunitpath | tr ':' '\n' /etc/systemd/system /etc/systemd/system /run/systemd/system /usr/local/lib/systemd/system /lib/systemd/system /usr/lib/systemd/system /lib/systemd/systemIn src/core/systemd.pc.in the systemdsystemunitpath is as follows. systemdsystemunitpath=${systemdsystemconfdir}:/etc/systemd/system:/run/systemd/system:/usr/local/lib/systemd/system:${systemdsystemunitdir}:/usr/lib/systemd/system:/lib/systemd/system
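On reasonably new systemd releases there is also a dedicated verb that prints the same search path, one directory per line (added around systemd 239 if I recall correctly, so it may not be present on Ubuntu 18.04's systemd 237):

$ systemd-analyze unit-paths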
I understand that systemd stores unit files at different locations for different versions of Linux. On RHEL, it's at /usr/lib/systemd/system/, whereas on Debian-based machines it's at /lib/systemd/system/. However, on my Ubuntu 18.04 machine, I just installed Elasticsearch using a .deb file, and its systemd unit file was installed under /usr/lib/systemd/system/, but systemd is still able to pick it up. $ uname -a Linux nucleolus 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux$ sudo systemctl status elasticsearch.service ● elasticsearch.service - Elasticsearch Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: http://www.elastic.coNote the path is /usr/lib/systemd/system/elasticsearch.service. So why does a systemd unit file at /usr/lib/systemd/system/ still works for Ubuntu? What is the real unit file load path for Debian/Ubuntu systems?
Why does a systemd unit file at `/usr/lib/systemd/system/` still works for Ubuntu?
A systemd socket is a special type of unit that causes systemd to itself bind to the port (or other resource, such as a unix domain socket file path) and spawn a new instance of a service for each connection. With ssh.service enabled, it's sshd that runs continuously and binds to the socket, as your lsof shows. Having ssh.socket on instead would mean sshd does not run continuously; rather, an instance of it is invoked only to handle the one client, and lsof would instead show systemd listening on port 22. Because systemd and sshd cannot both listen on the same port, ssh.socket specifies ssh.service as conflicting.
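If you actually wanted the socket-activated mode, the switch is just the following (and the reverse of it to go back):

# systemctl disable --now ssh.service
# systemctl enable --now ssh.socket

After that, lsof shows systemd (PID 1) listening on port 22, and a short-lived sshd appears only while a client is connected.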
Description of condition I ran into a strange condition with systemd and ssh on Ubuntu 18.04.3 LTS I checked the status of the ssh.socket unit: $ systemctl status ssh.socket ● ssh.socket - OpenBSD Secure Shell server socket Loaded: loaded (/lib/systemd/system/ssh.socket; disabled; vendor preset: enabled) Active: inactive (dead) Listen: [::]:22 (Stream) Accepted: 0; Connected: 0And it was inactive, however I was logged in with ssh at the very same time and service itself was running, and SSH's socket and corresponding port was open: $ lsof -P -i -n | grep sshd sshd 26785 root 3u IPv4 14858764 0t0 TCP 10.200.130.28:22->10.100.40.141:42188 (ESTABLISHED) sshd 26875 xxx_root 3u IPv4 14858764 0t0 TCP 10.200.130.28:22->10.100.40.141:42188 (ESTABLISHED) sshd 63859 root 3u IPv4 238437 0t0 TCP *:22 (LISTEN) sshd 63859 root 4u IPv6 238439 0t0 TCP *:22 (LISTEN)So I looked into the unit file of ssh.socket at /lib/systemd/system/ssh.socket: [Unit] Description=OpenBSD Secure Shell server socket Before=ssh.service Conflicts=ssh.service ConditionPathExists=!/etc/ssh/sshd_not_to_be_run[Socket] ListenStream=22 Accept=yes[Install] WantedBy=sockets.targetBecause of the Before=ssh.service directive it should be started before the ssh service, and the Conflicts=ssh.service directive will cause it to stop when the ssh service starts. Which explains why it happens in the aspect of unit files, but rise other questions. Questions Why the inactive state of the ssh.socket unit has no effect on the actual ssh socket? Why the maintainers added the Conflict directive? For example if you check the unit file of docker.socket it is not set to conflict with the docker.service. How the case of sshd differs? Additional info I also checked this on a old fedora 30 workstation. It has the same condition, with minor differences: it uses sshd.service and sshd.socket as unit names and there is no Before directive in the sshd.socket unit file. On both system I have not noticed any resulting problem from this condition, and I suspect that it has some purpose, but cannot find one.
Why ssh.socket is set to conflict with ssh.service (Ubuntu 18.04.3)?
You probably have hit this Systemd bug which occurs when your RTC is set to the local time (timedatectl will confirm this). Either upgrade Systemd or set your RTC to UTC: # timedatectl set-local-rtc 0 The latter is preferable. Quoting the timedatectl manual: Note that maintaining the RTC in the local timezone is not fully supported and will create various problems with time zone changes and daylight saving adjustments. If at all possible, keep the RTC in UTC mode. Here "problems with daylight saving adjustments" means that if your machine is off during a daylight saving time change (which occurred not long ago) then the time read from the RTC "will NOT be adjusted for the change" (quoted from the hwclock manual).
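To check and correct this in one go (the hwclock step simply writes the UTC system time back to the hardware clock; it may or may not be needed after set-local-rtc):

$ timedatectl | grep 'RTC in local TZ'
# timedatectl set-local-rtc 0
# hwclock --systohc --utc    # optional, see note above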
When we issue "systemctl status", we usually get in the output, a line showing the status and for how long it has been in that status. Like: (I issued that few minutes ago) Active: active (running) since Wed 2023-11-22 01:56:06 CST; 10h agoHowever, it happened to get the following line for the same service when the system time was 01:19:27 CST Active: active (running) since Wed 2023-11-22 **01:56:06** CST; 36min **left**Why the time after "since" is in the future? And why it shows the time "left"? Left for what? I expected to see a time in the past and to see x time units ago I tried to issue "systemctl list-timers --all" to find out if there is a timer related to that service, but I found none related.
Why systemctl status shows a time in the future and the amount of time left?
It's not being started because it's not wanted by anything that gets started. [Install] WantedBy=network-online.target I have added a symbolic link to the above file in /lib/systemd/system. Pretty much all of that is wrong.The unit file should be placed in /etc/systemd/system. Symbolic links are interpreted idiosyncratically by systemd, and do not have the conventional filesystem semantics. And /lib/systemd/system is not the place for hand-written unit files that do not come from packages. The unit file should be wanted by something that actually gets started at bootstrap. network-online.target usually does not. multi-user.target is the usual choice. graphical.target is another.
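Concretely, assuming the real unit file still lives at /home/pi/scripts/scanOralB.service (as the status output suggests) and that the entry in /lib/systemd/system is the hand-made symlink, a fix could look like this:

# cp /home/pi/scripts/scanOralB.service /etc/systemd/system/
# rm /lib/systemd/system/scanOralB.service    # drop the old symlink

then change the [Install] section of the copied file to WantedBy=multi-user.target, and finally:

# systemctl daemon-reload
# systemctl reenable scanOralB.service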
I have added my service to systemd (I am running it on a pi3), it looks like this: [Unit] Description=Oral-B BLE scanner service Wants=network-online.target After=network-online.target StartLimitBurst=10 StartLimitIntervalSec=10 Requires=bluetooth.target[Service] Type=simple WorkingDirectory=/home/pi/scripts ExecStart=/home/pi/scripts/scanOralB.py Restart=always RestartSec=10[Install] WantedBy=network-online.targetI have added a symbolic link to the above file in /lib/systemd/system. I have enabled the service as well. Just to be sure I checked. pi@raspberrypi:~ $ systemctl is-enabled scanOralB.service enabledIf I reboot and check the status it looks like this: pi@raspberrypi:~ $ sudo systemctl status scanOralB.service * scanOralB.service - Oral-B BLE scanner service Loaded: loaded (/home/pi/scripts/scanOralB.service; enabled; vendor preset: enabled) Active: inactive (dead)If I start the service manually it works just fine. Can someone explain why the service is not being started after boot? I get no extra output from journalctl either.
systemd service ends up in “inactive (dead)” after boot
The error is in After=mnt-ram. The actual unit name given by systemctl --user list-units is mnt-ram.mount, NOT mnt-ram. When working with systemd units I've fallen into the habit of omitting the .service extension (e.g. systemctl restart servicename), so I dropped the extension here as well when referencing the mnt-ram.mount unit; but that shortcut only works for services, since systemd only assumes a .service suffix when none is given, so for any other unit type the suffix must be spelled out.
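So the ordering lines become the following (assuming the unit corresponds to a /mnt/ram mount point; systemd-escape will tell you the exact name), followed by a systemctl --user daemon-reload:

$ systemd-escape -p --suffix=mount /mnt/ram
mnt-ram.mount

[Unit]
After=mnt-ram.mount
After=sys-subsystem-net-devices-eno1.device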
Arch 5.18 / MATE Desktop. I have a user service that sets up values for my panel: [Unit] Description=Set values for panel widgets After=mnt-ram After=sys-subsystem-net-devices-eno1.device [Service] ExecStart=/home/stephen/bin/panel-setup.sh Type=oneshot RemainAfterExit=True [Install] WantedBy=default.target Both mnt-ram and sys-subsystem-net-devices-enp0s8.device show up as active in systemctl --user list-units. At boot the journal reports systemd[669]: /home/stephen/.config/systemd/user/panel-setup.service:3: Failed to add dependency on mnt-ram, ignoring: Invalid argument However, after the desktop loads I can issue the following without error and with the expected effect: systemctl --user restart panel-setup
systemd user unit error on boot : Failed to add dependency ignoring: Invalid argument
The default binary search path is described in the section on command lines:If the command is not a full (absolute) path, it will be resolved to a full path using a fixed search path determined at compilation time. Searched directories include /usr/local/bin/, /usr/bin/, /bin/ on systems using split /usr/bin/ and /bin/ directories, and their sbin/ counterparts on systems using split bin/ and sbin/. It is thus safe to use just the executable name in case of executables located in any of the "standard" directories, and an absolute path must be used in other cases. Using an absolute path is recommended to avoid ambiguity. Hint: this search path may be queried using systemd-path search-binaries-default.The default value of ExecSearchPath itself is empty, which triggers the behaviour above. (Note that ExecSearchPath is very recent, it was added in systemd 250.)
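So, as hinted at the end of that quote, you can ask your own installation what the compiled-in fallback is (this needs a systemd new enough to know the verb, roughly the same vintage as ExecSearchPath= itself):

$ systemd-path search-binaries-default

On a typical split-bin distribution this prints something like /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin.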
man systemd.exec says concerning ExecSearchPath=:Takes a colon separated list of absolute paths relative to which the executable used by the Exec*= (e.g. ExecStart=, ExecStop=, etc.) properties can be found. ExecSearchPath= overrides $PATH if $PATH is not supplied by the user through Environment=, EnvironmentFile= or PassEnvironment=. Assigning an empty string removes previous assignments and setting ExecSearchPath= to a value multiple times will append to the previous setting.What is the default value of ExecSearchPath=?
What is the default value of `ExecSearchPath=` in a systemd unit file?
tl;dr: ConditionHost=|HostOne* ConditionHost=|HostTwo*You can easily check your conditions with systemd-analyze. That should speed up your testing. Here's an example where I am using ConditionHost on my own machine (stewbian). Here, I succeed with an exact match. $ systemd-analyze condition ConditionHost=stewbian test.service: ConditionHost=stewbian succeeded. Conditions succeededHere, I succeed with a globbed match $ systemd-analyze condition ConditionHost=stew* test.service: ConditionHost=stew* succeeded. Conditions succeeded.Here, I correctly failed with a bad match $ systemd-analyze condition ConditionHost=machine2 test.service: ConditionHost=machine2So first, we could say that your test could work with ConditionHost=Host*, but I suspect you want to be more precise.From man systemd.unit:If multiple conditions are specified, the unit will be executed if all of them apply (i.e. a logical AND is applied).Therefore, multiple conditions should be on separate lines, but they will be AND'd, so it will fail $ systemd-analyze condition "ConditionHost=machine2" "ConditionHost=stewbian" test.service: ConditionHost=stewbian succeeded. test.service: ConditionHost=machine2 failed. Conditions failed.But the man page continues:Condition checks can use a pipe symbol ("|") after the equals sign ("Condition…=|…"), which causes the condition to become a triggering condition. If at least one triggering condition is defined for a unit, then the unit will be started if at least one of the triggering conditions of the unit applies and all of the regular (i.e. non-triggering) conditions apply.Therefore, use ConditionHost=| on each condition and the conditions will be OR'd: $ systemd-analyze condition "ConditionHost=|machine2" "ConditionHost=|stewbian" test.service: ConditionHost=|stewbian succeeded. test.service: ConditionHost=|machine2 failed. Conditions succeeded.You can also include the globs: $ systemd-analyze condition \ "ConditionHost=|stew*" \ "ConditionHost=|machine*" test.service: ConditionHost=|machine* failed. test.service: ConditionHost=|stew* succeeded. Conditions succeeded.In your file, use: ConditionHost=|HostOne* ConditionHost=|HostTwo*I can see what you were trying to do. The docs do say:This either takes a hostname string (optionally with shell style globs) which is tested against the locally set hostname ...If we look at man 7 glob we read:A string is a wildcard pattern if it contains one of the characters '?', '*', or '['. Globbing is the operation that expands a wildcard pattern into the list of pathnames matching the pattern.In this definition neither | nor {...} are considered globs. While {...} may be considered a common bash glob, systemd isn't bash and doesn't use that definition.
I'm trying to make a systemd service have a conditional start based on multiple hostname pattern. I've tried this without luck: root@linkbox-BI034415:/# systemctl cat mcbapp # /lib/systemd/system/myservice.service [Unit] Description=My Service Wants=another.service After=another.service ConditionHost=HostOne*|HostTwo* Also tried ConditionHost={HostOne*,HostTwo*}[Service] EnvironmentFile=/etc/default/myenv ExecStart=/opt/bin/my-apps Restart=on-failure[Install] WantedBy=multi-user.targetAny suggestion ?
How to check multiple host name in systemd unit condition
Could this be the same issue as the one the asker of question #442181 had? I.e. sshd fails to start at boot because the interface/address it wants to bind to isn't ready yet. You mention that you've specified a non-standard port for the server socket, have you also specified a particular network interface and/or IP address? I don't know why systemd instead starts a per-connection daemon that uses the standard configuration, though. It might be part of the default system configuration, as you suggest. In question #507705 they talk about systemd "socket activation", which apparently is the feature that provides per-connection service spawning. Look for a systemd unit file named ssh.socket. You can use man systemd.socket to get information about how the feature works. Edit: You should be able to use systemctl status ssh.socket to check whether systemd's SSH server socket is enabled.
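In practice, checking (and, if you prefer the classic standalone daemon, switching back) would look something like this on Debian 11:

$ systemctl status ssh.socket
$ systemctl cat ssh.socket       # shows the port/address systemd binds to
# systemctl disable --now ssh.socket
# systemctl enable --now ssh.service

Note that when ssh.socket is what listens, the port comes from the socket unit's ListenStream= (22 by default), not from sshd_config, which would also explain why your non-standard Port setting seems to be ignored.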
I’m using Debian 11 on a Raspberry Pi 4 (image found here). sshd is properly configured (I only edited /etc/ssh/sshd_config, the rest is completely fresh from system installation) and works correctly when I start it manually. However it doesn’t start automatically by systemd at boot. sudo systemctl status sshd returns this: ● ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled) Active: inactive (dead) Docs: man:sshd(8) man:sshd_config(5)There is nothing related to ssh in journalctl’s output. This is the content of /lib/systemd/system/ssh.service: [Unit] Description=OpenBSD Secure Shell server Documentation=man:sshd(8) man:sshd_config(5) After=network.target auditd.service ConditionPathExists=!/etc/ssh/sshd_not_to_be_run[Service] EnvironmentFile=-/etc/default/ssh ExecStartPre=/usr/sbin/sshd -t ExecStart=/usr/sbin/sshd -D $SSHD_OPTS ExecReload=/usr/sbin/sshd -t ExecReload=/bin/kill -HUP $MAINPID KillMode=process Restart=on-failure RestartPreventExitStatus=255 Type=notify RuntimeDirectory=sshd RuntimeDirectoryMode=0755[Install] WantedBy=multi-user.target Alias=sshd.serviceThe file sshd_not_to_be_run does not exist. network.target is active. I also installed auditd just for troubleshoot and it successfully starts automatically, but ssh.service is still dead after reboot. I run out of ideas…UPDATE: I just discovered that a sshd process spawns on every connection demand. It is managed by systemd itself and it’s clearly printed in the journal when some foreign computers try to connect to mine: oct. 30 13:09:30 RaspServeur systemd[1]: Started OpenBSD Secure Shell server per-connection daemon (117.68.2.55:45784). ░░ Subject: L'unité (unit) [emailprotected]:22-117.68.2.55:45784.service a terminé son démarrage ░░ Defined-By: systemd ░░ Support: https://www.debian.org/support ░░ ░░ L'unité (unit) [emailprotected]:22-117.68.2.55:45784.service a terminé son démarrage, avec le résultat done. oct. 30 13:09:30 RaspServeur audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='[emailprotected]:22-117.68.2.55:45784 comm="systemd" exe="/usr/lib/systemd/systemd" ho> oct. 30 13:09:33 RaspServeur sshd[1861]: error: kex_exchange_identification: Connection closed by remote host oct. 30 13:09:33 RaspServeur sshd[1861]: Connection closed by 117.68.2.55 port 45784 oct. 30 13:09:33 RaspServeur systemd[1]: [emailprotected]:22-117.68.2.55:45784.service: Succeeded. ░░ Subject: Unit succeeded ░░ Defined-By: systemd ░░ Support: https://www.debian.org/support ░░ ░░ The unit [emailprotected]:22-117.68.2.55:45784.service has successfully entered the 'dead' state. oct. 30 13:09:33 RaspServeur audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj==unconfined msg='[emailprotected]:22-117.68.2.55:45784 comm="systemd" exe="/usr/lib/systemd/systemd" hos>It’s like a parallel installation of sshd exists with a default configuration. My own configuration with settings like a specific port number to use can’t work without starting manually the sshd.service. But I can successfully connect to that shadow sshd with default port, and systemctl status sshd still reports a dead service… The situation becomes creepy, I’m now two fingers away to erase the SD card and install an image of another distribution with less pre-configuration.
Why ssh.service doesn’t start automatically during boot despite being enabled by systemd?
I don't think you are doing anything wrong. I think there's a bug in systemd. On Debian testing (systemd 246, and later 246.1 after upgrading) I observed the following: ConditionEnvironment= was only released with version 246 on July 30 2020 (2.5 weeks before the time of writing) and the pull request was merged on May 15. Therefore, it's reasonable to assume it isn't mature yet. Here's a test that leads me to think it's a bug: $ systemd-analyze condition \ 'ConditionKernelVersion=' \ 'ConditionKernelVersion=' \ 'ConditionACPower=' \ 'ConditionArchitecture=' \ 'AssertPathExists=' \ 'ConditionEnvironment=' Cannot parse "ConditionEnvironment=". If I run each condition one-by-one, they all parse the empty expression except for ConditionEnvironment=. I tried your target verbatim (also from an i3 environment) and I found that ConditionEnvironment= had no influence on whether I could reach the target. I tried correct and incorrect values. Therefore this problem is not specific to systemd-analyze. One thing I did find super-interesting is a comment in xdg-autostart-generator/xdg-autostart-condition.c: * This binary is intended to be run as an ExecCondition= in units generated * by the xdg-autostart-generator. It does the appropriate checks against * XDG_CURRENT_DESKTOP that are too advanced for simple ConditionEnvironment= * matches. I think the bug is valid, but I find it interesting that a generator was made (and deployed as /lib/systemd/systemd-xdg-autostart-condition) to overcome a problem experienced with the exact environment you are looking into. I filed a bug report with Debian. I expect the Debian devs will take a look and forward it upstream to the systemd devs.
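Until the bug is fixed, one possible stopgap (only usable in service units, since a .target has no [Service] section) is to perform the check with ExecCondition=, available since systemd 243, instead of ConditionEnvironment=. A rough sketch, assuming the variable is actually present in the user manager's environment:

[Service]
ExecCondition=/bin/sh -c '[ "$XDG_SESSION_DESKTOP" = "i3" ]'

If the test fails (exit status 1) the unit is skipped rather than failed, which is roughly the semantics ConditionEnvironment= was meant to provide.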
I'm using systemd version 246: $ systemctl --version systemd 246 (246.2-1-arch) +PAM +AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybridwhich documents ConditionEnvironment in its systemd.unit manpage. However, if I use it in my unit file ~/.config/systemd/user/i3-session-pre.target like this: [Unit] Description=i3 session Documentation=man:systemd.special(7) BindsTo=graphical-session-pre.target ConditionEnvironment=XDG_SESSION_DESKTOP=i3I get the following entry in my user journal: systemd[599]: /home/****/.config/systemd/user/i3-session-pre.target:5: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.Also systemd-analyze condition fails to handle this condition: $ systemd-analyze condition ConditionEnvironment=XDG_SESSION_DESKTOP=i3 Cannot parse "ConditionEnvironment=XDG_SESSION_DESKTOP=i3".What am I doing wrong?
systemd: Unknown key name 'ConditionEnvironment' in section 'Unit'
It seems that the root cause is that dependant.service is starting too soon sometimes: adding Restart directives is a bit of a hack. This to me indicates that it's missing a timing requirement, which is what After is for. Depending on the type of service, you'll need to determine what resources are needed before the service should be started. Assuming that this is network-related, you'll want to add the following in the [Unit] section of dependant.service: After=network.targetBy doing this, you're indicating that the basic network should be available before systemd tries to start the service. Otherwise, systemd will try to start as many services in parallel as it can, which means that depending on the start order you might be starting with basically nothing initialized, which is a situation that some services can tolerate and some fail badly with. If you want to make sure that depending.service always restarts with dependant.service, then add both a BindsTo and After to depending.service: [Unit] After=dependant.service BindsTo=dependant.serviceThese behaviours are documented in the systemd.unit(7) man page. I rarely have to use more than Wants, Requires and After, but there are more advanced options if you have services that have particularly complex start conditions. I find it to be helpful when creating a new service (or group of services) to look in the distribution-supplied unit files to see how they are done and shamelessly copy the good parts (try /usr/lib/systemd/system or /lib/systemd/system): they often will have clues about what After and Requires requirements are useful for a particular type of service.
I have created a systemd unit to start a service, and that service requires another unit to start beforehand. I've set the depending service with Requires=dependant.service, and that way when depending.service is automatically started during boot, it first tries to start dependant.service. The problem is that if dependant.service starts too early, it fails to start (I'm not really sure what "too early" here means). To solve this, I've set dependant.service to Restart=always. And that works fine: depending.service is enabled and starts automatically, it starts dependant.service, which crashes, then gets restarted and always succeeds on the 2nd try. But depending.service has seen dependant.service's first failure and its Requires=dependant.service causes it to fail. The log shows: systemd[1]: Dependency failed for depending. systemd[1]: Job depending.service/start failed with result 'dependency'. Even though dependant has eventually succeeded, and both have Restart=always, depending never restarts after the initial failure of dependant. I've tried various configurations of Requires=, Wants=, BindsTo= and After= but didn't manage to find a combination that causes depending to restart after dependant restarts.
SystemD `Requires` fail when required unit fails for the first time
You want Type=oneshot. If you use Type=exec, other services will be able to start before the firewall is configured. From the systemd.service man page, for exec: ...Or in other words: simple proceeds with further jobs right after fork() returns, while exec will not proceed before both fork() and execve() in the service process succeeded. And for oneshot: Behavior of oneshot is similar to simple; however, the service manager will consider the unit up after the main process exits. In other words, with Type=exec, systemd considers the service to be "up" once the process has successfully started, while for Type=oneshot, systemd considers the service to be "up" once the process has successfully completed.
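For what it's worth, a minimal Type=oneshot sketch along the lines of the question; the Wants=/Before= pair follows the pattern documented for network-pre.target in systemd.special(7), DefaultDependencies=no is what typical distribution firewall units use, and you may want an extra After= for whatever mount provides the script:

[Unit]
Description=Load iptables rules
DefaultDependencies=no
Wants=network-pre.target
Before=network-pre.target

[Service]
Type=oneshot
# Optional: drop RemainAfterExit if the unit should be re-triggerable from a timer
RemainAfterExit=yes
ExecStart=/bin/bash -c '/home/locsh/iptables.sh'

[Install]
WantedBy=multi-user.target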
I'm debugging a firewall .service unit and a few questions arise. One of those questions is the unit's best service type, either exec or oneshot. Virtually no comparisons of the two appear in my searches, probably because exec is a relatively recent addition to systemd (v.249 IIRC). By way of background, the unit (call it iptables.service) is intended to activate and configure the firewall by running a Bash script (call it iptables.sh) before the network is up (i.e., before network-pre.target), e.g., ExecStart=/bin/bash -c '/home/locsh/iptables.sh'Type=oneshot has the advantage of not entering the "active" state, so it subsequently can be restarted or reactivated, e.g., by a timer unit. It also is the more common of the two types in most examples, albeit without explanation. Type=exec has the advantage that it will delay startup of follow-up units until the main service has been executed. This seems to make perfect sense for a firewall .service unit because the network should depend on the script running successfully and remain down otherwise, e.g., if the script temporarily can't be read because somehow the relevant .mount unit hasn't yet activated. Restart=on-failure seems to be an obvious and prudent addition in either case. The first question is whether one or the other might better for any reason. The second question is whether Type=exec, because it delays the start of follow-up units, might introduce a subtle ordering cycle in some edge cases, either with or without "Restart=on-failure", in part because the unit's ordering dependency Before=network-pre.targetis relatively early in the boot process.
systemd Firewall .service Unit: Type=exec or Type=oneshot?
As given in the Arch wiki page, the file should be in /etc/systemd/system/. There are several directories where systemd looks for unit files, and /etc/systemd/system/ is where a system administrator should place their service files. See man systemd.unit. After creating or modifying a file in these directories, you have to run systemctl daemon-reload, which gets systemd to recheck its directories for new or modified units. Only then can you enable or start a new service.
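So the full sequence is something like this (assuming the unit file currently sits in your working directory):

# cp suspend-to-hibernate.service /etc/systemd/system/
# systemctl daemon-reload
# systemctl enable suspend-to-hibernate.service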
I'm trying to implement the delayed hibernation unit. I'm on arch/antergos. >>> systemctl enable suspend-to-hibernate.service Failed to enable unit ...to-hibernate.service: Invalid argumentsystemd-analyze verify ...hibernate.service responds with an empty output. I copied the unit file straight from the arch wiki and changed SLEEPLENGTH to 1 hour. How can I debug the issue? How can I make systemd issue more descriptive error messages? suspend-to-hibernate.service [Unit] Description=Delayed hibernation trigger Documentation=https://bbs.archlinux.org/viewtopic.php?pid=1420279#p1420279 Documentation=https://wiki.archlinux.org/index.php/Power_management Conflicts=hibernate.target hybrid-sleep.target Before=sleep.target StopWhenUnneeded=true[Service] Type=oneshot RemainAfterExit=yes Environment="WAKEALARM=/sys/class/rtc/rtc0/wakealarm" Environment="SLEEPLENGTH=+1hour" ExecStart=-/usr/bin/sh -c 'echo -n "alarm set for "; date +%%s -d$SLEEPLENGTH | tee $WAKEALARM' ExecStop=-/usr/bin/sh -c '\ alarm=$(cat $WAKEALARM); \ now=$(date +%%s); \ if [ -z "$alarm" ] || [ "$now" -ge "$alarm" ]; then \ echo "hibernate triggered"; \ systemctl hibernate; \ else \ echo "normal wakeup"; \ fi; \ echo 0 > $WAKEALARM; \ '[Install] WantedBy=sleep.target
systemd invalid argument - debugging delayed hibernation service file
There are several things you should do in order to make the service work the way you want (all changes are to /etc/systemd/system/python-test.service). Change Restart=always to Restart=on-failure. The values StartLimitInterval=600 and StartLimitBurst=5 still seem to be supported; however, you should place them in [Unit]. If you place StartLimitInterval in [Unit] you can rename it to StartLimitIntervalSec (man systemd.unit uses StartLimitIntervalSec instead). Add RemainAfterExit=no to the [Service] section. Add this line to the [Service] section: TimeoutStopSec=infinity Use the environment variable EXIT_STATUS in the script to determine whether the script exited successfully or not. Change OnFailure=mailer@%n.service to OnFailure=mailer@%N.service. The difference between them is that %N removes the suffix. Install and start the service atd (sudo systemctl start atd.service) to be able to use the at command. Or, if you do not want to use at, then you can write another systemd service to relaunch the service (in this example, I used relaunch.service). Use the same values for sleep and RestartSec. In your case, since RestartSec is 60, the sleep in this line must be 60 too: echo "sleep 60; sudo systemctl start ${1}.service" | at now Use ExecStart= and ExecStopPost= to get the exit status of your main process, /home/debian/tmp.py. Don't use ExecStop=; from man systemd.service: ExecStop= Note that the commands specified in ExecStop= are only executed when the service started successfully first. They are not invoked if the service was never started at all, or in case its start-up failed, for example because any of the commands specified in ExecStart=, ExecStartPre= or ExecStartPost= failed (and weren't prefixed with "-", see above) or timed out. Use ExecStopPost= to invoke commands when a service failed to start up correctly and is shut down again. The service /etc/systemd/system/python-test.service should be: [Unit] After=network.target OnFailure=mailer@%N.service StartLimitBurst=5 StartLimitIntervalSec=600 [Service] Type=simple TimeoutStopSec=infinity ExecStart=/home/debian/tmp.py ExecStopPost=/bin/bash -c 'echo The Service has exited with values: $$EXIT_STATUS,$$SERVICE_RESULT,$$EXIT_CODE' ExecStopPost=/home/debian/bin/checkSuccess "%N" # Any exit status different than 0 is considered as an error SuccessExitStatus=0 StandardOutput=append:/tmp/python-out-test.log StandardError=append:/tmp/python-err-test.log # Always restart service 60sec after exit Restart=on-failure RestartSec=60 RemainAfterExit=no [Install] WantedBy=multi-user.target And /home/debian/bin/checkSuccess should have this: Solution 1: Using the at command: #!/bin/bash if [ "$EXIT_STATUS" -eq 0 ] then echo "sleep 60; sudo systemctl start ${1}.service" | at now exit 0 else systemctl start "mailer@${1}.service" exit 0 fi Solution 2: Using another systemd service: #!/bin/bash if [ "$EXIT_STATUS" -eq 0 ] then systemctl start relaunch.service else systemctl start "mailer@${1}.service" fi exit 0 And the relaunch.service should have: [Unit] Description=Relaunch Python Test Service [Service] Type=simple RemainAfterExit=no ExecStart=/bin/bash -c 'echo Delay; sleep 10 ; systemctl start python-test.service' The "$EXIT_STATUS" variable which is set by the systemd service is determined by the exit status of /home/debian/tmp.py.
The ${1} represents the name of the unit, python-test, and it's passed to the script in the line /home/debian/bin/checkSuccess "%N". Notes: You can check the output of the 'echo The Service has exited with values: $$EXIT_STATUS,$$SERVICE_RESULT,$$EXIT_CODE' line in real time by using: tail -f /tmp/python-out-test.log If you use solution 2 (with relaunch.service), when you want to stop your main service you should run: sudo systemctl stop relaunch.service # Might not be necessary, but you can stop the Python service too: # sudo systemctl stop python-test.service
I'm using a systemd unit file in order to control a python process running on a server (with systemd v247). This process must be restarted 60 seconds after it exits, either on failure or on success, except if it fails 5 times in 600 seconds. This unit file links another service in order to notify failures by email. /etc/systemd/system/python-test.service [Unit] After=network.target OnFailure=mailer@%n.service[Service] Type=simpleExecStart=/home/debian/tmp.py# Any exit status different than 0 is considered as an error SuccessExitStatus=0StandardOutput=append:/var/log/python-test.log StandardError=append:/var/log/python-test.log# Always restart service 60sec after exit Restart=always RestartSec=60# Stop restarting service after 5 consecutive fail in 600sec interval StartLimitInterval=600 StartLimitBurst=5[Install] WantedBy=multi-user.target/etc/systemd/system/[emailprotected] [Unit] After=network.target[Service] Type=oneshotExecStart=/home/debian/mailer.py --to "[emailprotected]" --subject "Systemd service %I failed" --message "A systemd service failed %I on %H"[Install] WantedBy=multi-user.targetThe triggering of OnFailure worked pretty well during basic testing. However when I added the following section into the Unit file, the OnFailure only triggered once the 5 consecutive fails occurred. StartLimitInterval=600 StartLimitBurst=5This is not the behavior I would like, since I want be be notified everytime the process fails, even if the burst limit is not reached yet.When checking process status, the output is not the same when burst limit is not reached ● python-test.service Loaded: loaded (/etc/systemd/system/python-test.service; disabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Thu 2022-12-22 19:51:23 UTC; 2s ago Process: 1421600 ExecStart=/home/debian/tmp.py (code=exited, status=1/FAILURE) Main PID: 1421600 (code=exited, status=1/FAILURE) CPU: 31msDec 22 19:51:23 test-vps systemd[1]: python-test.service: Failed with result 'exit-code'.Than when it is ● python-test.service Loaded: loaded (/etc/systemd/system/python-test.service; disabled; vendor preset: enabled) Active: failed (Result: exit-code) since Thu 2022-12-22 19:52:02 UTC; 24s ago Process: 1421609 ExecStart=/home/debian/tmp.py (code=exited, status=1/FAILURE) Main PID: 1421609 (code=exited, status=1/FAILURE) CPU: 31msDec 22 19:51:56 test-vps systemd[1]: python-test.service: Failed with result 'exit-code'. Dec 22 19:52:02 test-vps systemd[1]: python-test.service: Scheduled restart job, restart counter is at 5. Dec 22 19:52:02 test-vps systemd[1]: Stopped python-test.service. Dec 22 19:52:02 test-vps systemd[1]: python-test.service: Start request repeated too quickly. Dec 22 19:52:02 test-vps systemd[1]: python-test.service: Failed with result 'exit-code'. Dec 22 19:52:02 test-vps systemd[1]: Failed to start python-test.service. Dec 22 19:52:02 test-vps systemd[1]: python-test.service: Triggering OnFailure= dependencies.I couldn't find anything explaining how to modify the triggering of OnFailure within the unit file. Is there a way to notify mails everytime the process fails and still keep the burst limit ?
Service OnFailure trigerred only after burst limit reached
In the [Install] section of /etc/systemd/system/watch-for-sync-need-all.target, multi-user.target had to be added as WantedBy, and now it works. Comment 5 by Johny in the Stack Overflow thread "Systemd with multiple execStart" mentions this. At first I did not understand why this is needed, since the target is ordered after multi-user.target in the [Unit] section and I had assumed that this covered everything, including enabling at system boot. Apparently, though, After= only expresses ordering, not activation, so without a WantedBy= in [Install] nothing actually pulls the target in at boot. I hope this thread and answer help anyway.
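For completeness, the [Install] section of the target now reads as follows (the Also= lines are unchanged), and after editing it the target has to be re-enabled so that the symlinks are regenerated:

[Install]
WantedBy=multi-user.target
Also=watch-for-sync-need@_sl_home_sl_.service
Also=watch-for-sync-need@_sl_stream_sl_.service

# systemctl daemon-reload
# systemctl reenable watch-for-sync-need-all.target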
I have created a target file /etc/systemd/system/watch-for-sync-need-all.target [Unit] Description=systemd target to group services for all folders that create a sync need by changes After=multi-user.target Wants=watch-for-sync-need@_sl_home_sl_.service Wants=watch-for-sync-need@_sl_stream_sl_.service[Install] Also=watch-for-sync-need@_sl_home_sl_.service Also=watch-for-sync-need@_sl_stream_sl_.serviceIts purpose is to be able to start, stop, enable or disable all in the target specified systemd template services /etc/systemd/system/[emailprotected] [Unit] Description=watch sync folders for changes then flag sync need and set rtcwake BindsTo=watch-for-sync-need-all.target After=watch-for-sync-need-all.target[Service] User=root Group=root Type=simple ExecStart=/bin/bash /etc/custom/notify-on-change %i Restart=on-failure RestartSec=3[Install] WantedBy=watch-for-sync-need-all.targetIn case it has to deal with my problem I post the called script content of /etc/custom/notify-on-change #! /usr/bin/env bashinotifywait -q -m -r -e modify,delete,create "${1//_sl_//}" | while read DIRECTORY EVENT FILE do echo "yes" > /etc/custom/log/sync-needed bash /etc/custom/set-rtcwake systemctl stop watch-for-sync-need-all.target doneIf there is a change in the folders /home/ or /stream/ inotifywait notices that, flags a sync need, sets a computer self wakeup in the upcoming night at 3 o'clock and stops the services. (There is a cronjob on the machine that syncs to another computer at some minutes past 3 o'clock, if a sync need is flagged. The computer shuts itself down, when not used. Like that, I can work on my computer and make changes in /home/ or /stream/ and then and only then a sync will be started shortly automatically.) My Problem is, that I can't enable my target adequately. The target can be started or stopped without problems. That means, that both "sub"-units are running. Enabling does not give out any warnings and creates corresponding links in the directory /etc/systemd/system/watch-for-sync-need-all.target.wants but when my machine boots, the "sub"-units are not running. After a new boot I get the following output of systemctl status watch-for-sync-need-all.target watch-for-sync-need-all.target - systemd target to group services for all folders that create a sync need by ch> Loaded: loaded (/etc/systemd/system/watch-for-sync-need-all.target; indirect; vendor preset: enabled) Active: inactive (dead)`enter code here`or systemctl status watch-for-sync-need@_sl_home_sl.servicewatch-for-sync-need@_sl_home_sl.service - watch sync folders for changes then flag sync need and set rtcwake Loaded: loaded (/etc/systemd/system/[emailprotected]; disabled; vendor preset: enabled) Active: inactive (dead)How can I make systemd start the target (all "sub"-units) at system boot?
systemd start stop enable and disable multiple services with one target unit
If PartOf= is part of the service's [Unit] section then the target shouldn't need Wants=; systemd will automatically know which template instances have been enabled for the target. systemctl enable myapp@Chicago (etc.) should work with what's in the description. Remove Wants= from the target, then systemctl enable and systemctl start the target ... what happens?
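To spell that out against the directory layout from the question (an untested sketch, reusing the question's paths): enable and start one instance per configuration subdirectory, so each instance is pulled in at boot through its own [Install] section, and keep the target purely as a handle for collective stop/restart, which PartOf= propagates to every instance:

for CITY in /path/to/game-root-folder/config-folder/*/ ; do
    systemctl enable --now "myapp@$(basename "$CITY").service"
done
systemctl start myapp.target
# later, one command stops (or restarts) the whole batch:
# systemctl stop myapp.target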
Newbie systemd user here; my apologies if the question seems 'too basic' and would be better asked elsewhere on the StackOverflow ecosystem... I'm in the process of converting some of my ancient init service files, written 12 years ago or so, to systemd. Some of those had rather nasty tricks to figure out how to configure a batch of services; I'm not really an experienced system programmer, and I'm fond of quick & dirty hacks to get things working as proofs-of-concept... they can always be improved later on. That said, here is what I'm trying to accomplish: I'm hosting an online virtual world platform (no, it's not Minecraft...) that runs several instances of the same binary, but with different parameters (think shards). This would be rather easy to do if I knew how many instances to launch in advance (i.e. at boot time), or at least what set of parameters they would take, or even if I had an actual list (potentially generated in advance) with that information. In practice, what happens is the following: there is a special 'configuration' directory (at least that one is known in advance!) which has several subdirectories (the number is not known in advance!), each named after a specific instance. The name is important (i.e. I cannot make things easier by using sequential names, which would make some tricks easier to apply, even assuming that I knew in advance how many subdirectories there are — which I don't!). Imagine that the subdirectory names are city names, for example, NewYork, Chicago, LA, KansasCity. So, schematically, you could think of the overall directory tree as looking like this: game-root-folder | \___ bin | \___________ myapp (the actual game engine for each instance) | \___________ myapp-single-instance.sh (explained below) | \___ config-folder \__________ NewYork | \______ config.ini | \__________ Chicago | \______ config.ini | \__________ LA | \______ config.ini | \__________ KansasCity : \______ config.ini(my apologies, I'm not good with ASCII diagrams...) Assume that each config.ini is just a set of GPS coordinates for the city in question (in practice, it's not much more complex than that, but what matters is that there are enough items in those files — which might be not written by a human, but generated automatically from a 'managing app' — to make it very hard to encode everything in environmental variables), i.e. something which is unique and different from city to city and which is hard-coded into the startup configuration for that city. In practice, the actual layout is considerably more complex, but this should be enough to explain my issue. When not using systemd, but rather launching everything manually from the shell, the task is simple — the command to launch each instance will look something like this: # Call this script with `myapp-single-instance.sh start <city name>` case "$1" in start) cd /full/path/to/game-root-folder/bin /usr/bin/screen -S $2 -d -m -l myapp \ --config=/path/to/game-root-folder/config-folder/$2/config.ini ;; stop) # discussed below ;; *) # show usage exit 1 ;; esac The non-systemd configuration file just had an additional script that would list the contents of that configuration directory and extract the names of the subdirectories, launching the start/stop script for each city name, i.e. 
something like this:

cd /full/path/to/game-root-folder/bin
for CITY_NAME in `ls /path/to/game-root-folder/config-folder`
do
    myapp-single-instance.sh start $CITY_NAME
done

As a side-note, to stop those instances, one can do:

/usr/bin/screen -S $CITY_NAME -X eval 'stuff "quit"\015'

(assuming that quit is the console command to close that instance) or, if the above doesn't work for some reason (instance in endless loop, not accepting commands):

/usr/bin/screen -X -S $CITY_NAME kill

in which case kill is a command of screen itself, to gracefully terminate itself and whatever is inside it. The last option is sending a SIGKILL, of course (also done via the currently existing scripts). As you can see, if someone wants to add a new instance for a new city, all they need to do is to create a new subdirectory, say, Boston, and start it with the single-line start/stop script; individual instances can then be stopped or started manually if needed; new instances might require manual intervention anyway (putting those config.ini in place), so it's okay to expect that there are additional commands to set up a new instance — so long as each instance is mostly independent of the others, and does not interact directly with them, there is no need to signal anything to the existing instances when adding a new one (or removing an existing one). Again, in reality, there is a 'master' instance monitoring server, which does some housekeeping behind the scenes, but for the purpose of this question, let's ignore its existence for now. The above example is trivial to adapt to an init scenario. Now, how to do it under systemd? Well, the two-tiered approach seems to fit the usage of systemd targets like a glove, launching several instances from a template. Here is my naïve attempt, loosely based on this answer (point 3.), but also on others:

; myapp@.service
[Unit]
Description=Game city name %I
After=syslog.target
After=network.target
Requires=mariadb.service mysqld.service
PartOf=myapp.target

[Service]
Type=forking
User=myappuser
Group=myappgroup
WorkingDirectory=/full/path/to/game-root-folder/bin
ExecStart=myapp-single-instance.sh start "%I"
ExecStop=myapp-single-instance.sh stop "%I"
ExecStop=/bin/sleep 5
KillSignal=SIGCONT
Restart=always
RestartSec=30s
Environment=USER=myappuser HOME=/full/path/to/game-root-folder/bin
RemainAfterExit=false
SuccessExitStatus=1

[Install]
WantedBy=multi-user.target

So far, so good, I'm basically just encapsulating my existing scripts inside the systemd configuration. But now what to do with the target?

; myapp.target
[Unit]
Description=Launch all game instances
Wants=myapp@NewYork.service myapp@LA.service myapp@KansasCity.service

[Install]
WantedBy=multi-user.target

Whoops, I forgot Chicago! See, that's what happens when manually editing configurations... Basically, I would like to have a dynamic way to do that, just like I did with the amazing `ls /path/to/game-root-folder/config-folder` command — so easy, but so powerful! However, from what I read on the subject, it's not possible to pipe the output of a shell command to the Wants=... line — in fact, the only lines that can run shell commands are the Exec...= ones (for obvious reasons). Tough luck for me! Others have struggled with similar issues. The solution was to push the logic of selecting which instances to run into environment variables, which could then be 'massaged' and sent to the executable itself. I'm a bit baffled about how such a solution could even be implemented in my case, but it doesn't seem to fit my requirements.
In fact, I find it strange that such an 'obvious' use-case — dynamically calling service units from a target (where 'dynamic', in this context, means 'a set of service units for a single executable, of unknown size, where each element may have a different range of parameters, different from each other, and not known in advance') — is not a 'trivial' scenario; perhaps I'm not fully grasping how systemd addresses this case? I have seen two examples where the solution was to include different files (using the systemd override mechanism, i.e. putting files inside /etc/systemd/myapp.service.d/), and get systemd to search for those additional configurations; or, alternatively, list all the possible instances to be called from the .target script, store them either in a file or even into an environment variable. Every time someone adds an additional instance, a background-timed process would check the subdirectories for each instance and either return an environment variable with every bit of collected data, or write that to a special file (one would assume that this file would be shared on a well-known spot), and read from it in order to figure out how to start each instance at boot time. Such solutions imply running a background daemon that checks what files are available and refreshes such file(s) before calling the target unit with a reload. Nevertheless, these solutions seem not to work when trying to change the Wants=.. or Require=.. lines; it's mostly the Exec...= parameters that can be addressed this way (although I saw a few conflicting instructions, where allegedly some systemd versions have code that will have a 'dynamic' way of filling in fields...). Also, note that I would like to retain the ability to start/stop/reload instances individually, as well as get them all started at boot time (or gracefully stopped before a mandatory reboot, for instance). Taking all the above into account, what solution would you recommend? Thanks in advance and keep up the awesome work in this community! Cheers,Gwyn
How to dynamically create a list of units for a `systemd` target?
Check out the systemd.unit man page to see the full descriptions; I've included them below, but will do my best to explain.

Requires is a strong dependency. If my.service gets activated then anything listed after Requires= also gets activated. If one of the units listed after Requires= is explicitly stopped, then my.service is also stopped. If there are no Before= or After= used to set ordering between my.service and a unit listed after Requires=, then they will be started simultaneously.

Wants is a weaker dependency. Units listed after Wants= will be started if my.service is started. However, if a listed unit has issues starting, that does not stop my.service from starting.

BindsTo is an even stronger dependency than Requires. It is like Requires, but if a service that is listed after BindsTo= stops for any reason, then my.service will also be stopped.

After and Before are both used to specify an order. They are independent settings from Requires, Wants, and BindsTo, but can be used alongside them to specify the order in which the services should be started.

PropagatesReloadTo and ReloadPropagatedFrom are used to queue up reloads across multiple units. If my.service specified PropagatesReloadTo=docker.service then reloading my.service would also reload docker.service. If my.service specified ReloadPropagatedFrom=docker.service then reloading docker.service would also reload my.service.

It is more or less recommended to use Wants when possible over using Requires or BindsTo. Do not overlap services with Wants, Requires, and BindsTo. Decide which fits the need for your service for the unit files you want to specify and go with it.

Whenever docker.service and/or management.service are restarted, I'd like my.service to be restarted as well (after docker and management).

If you would like to restart a service if another service is restarted then you can use PartOf= instead of Requires.

PartOf= Configures dependencies similar to Requires=, but limited to stopping and restarting of units. When systemd stops or restarts the units listed here, the action is propagated to this unit. Note that this is a one-way dependency — changes to this unit do not affect the listed units.

my.service starts a docker container so I want to start the docker.service before starting my.service (it is possible that docker.service is disabled on my system).

Use After=docker.service management.service to set my.service to start after docker.service and management.service. Since you would like the restart behaviour described above, use PartOf=docker.service management.service. If you did not need the restart to be propagated, then you would likely decide between Wants, Requires, and BindsTo. Try not to create unnecessary strong dependencies though. Then, the relevant man page excerpts:

Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units fails to activate, and an ordering dependency After= on the failing unit is set, this unit will not be started. Besides, with or without specifying After=, this unit will be stopped if one of the other units is explicitly stopped. This option may be specified more than once or multiple space-separated units may be specified in one option in which case requirement dependencies for all listed names will be created. Note that requirement dependencies do not influence the order in which services are started or stopped. This has to be configured independently with the After= or Before= options. 
If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated. Often, it is a better choice to use Wants= instead of Requires= in order to achieve a system that is more robust when dealing with failing services. Note that this dependency type does not imply that the other unit always has to be in active state when this unit is running. Specifically: failing condition checks (such as ConditionPathExists=, ConditionPathIsSymbolicLink=, ... — see below) do not cause the start job of a unit with a Requires= dependency on it to fail. Also, some unit types may deactivate on their own (for example, a service process may decide to exit cleanly, or a device may be unplugged by the user), which is not propagated to units having a Requires= dependency. Use the BindsTo= dependency type together with After= to ensure that a unit may never be in active state without a specific other unit also in active state (see below).Wants= A weaker version of Requires=. Units listed in this option will be started if the configuring unit is. However, if the listed units fail to start or cannot be added to the transaction, this has no impact on the validity of the transaction as a whole. This is the recommended way to hook start-up of one unit to the start-up of another unit.Before=, After= These two settings expect a space-separated list of unit names. They configure ordering dependencies between units. If a unit foo.service contains a setting Before=bar.service and both units are being started, bar.service's start-up is delayed until foo.service has finished starting up. Note that this setting is independent of and orthogonal to the requirement dependencies as configured by Requires=, Wants= or BindsTo=. It is a common pattern to include a unit name in both the After= and Requires= options, in which case the unit listed will be started before the unit that is configured with these options. This option may be specified more than once, in which case ordering dependencies for all listed names are created. After= is the inverse of Before=, i.e. while After= ensures that the configured unit is started after the listed unit finished starting up, Before= ensures the opposite, that the configured unit is fully started up before the listed unit is started. Note that when two units with an ordering dependency between them are shut down, the inverse of the start-up order is applied. i.e. if a unit is configured with After= on another unit, the former is stopped before the latter if both are shut down. Given two units with any ordering dependency between them, if one unit is shut down and the other is started up, the shutdown is ordered before the start-up. It doesn't matter if the ordering dependency is After= or Before=, in this case. It also doesn't matter which of the two is shut down, as long as one is shut down and the other is started up. The shutdown is ordered before the start-up in all cases. If two units have no ordering dependencies between them, they are shut down or started up simultaneously, and no ordering takes place. It depends on the unit type when precisely a unit has finished starting up. 
Most importantly, for service units start-up is considered completed for the purpose of Before=/After= when all its configured start-up commands have been invoked and they either failed or reported start-up success.BindsTo= Configures requirement dependencies, very similar in style to Requires=. However, this dependency type is stronger: in addition to the effect of Requires= it declares that if the unit bound to is stopped, this unit will be stopped too. This means a unit bound to another unit that suddenly enters inactive state will be stopped too. Units can suddenly, unexpectedly enter inactive state for different reasons: the main process of a service unit might terminate on its own choice, the backing device of a device unit might be unplugged or the mount point of a mount unit might be unmounted without involvement of the system and service manager. When used in conjunction with After= on the same unit the behaviour of BindsTo= is even stronger. In this case, the unit bound to strictly has to be in active state for this unit to also be in active state. This not only means a unit bound to another unit that suddenly enters inactive state, but also one that is bound to another unit that gets skipped due to a failed condition check (such as ConditionPathExists=, ConditionPathIsSymbolicLink=, ... — see below) will be stopped, should it be running. Hence, in many cases it is best to combine BindsTo= with After=.PropagatesReloadTo=, ReloadPropagatedFrom= A space-separated list of one or more units where reload requests on this unit will be propagated to, or reload requests on the other unit will be propagated to this unit, respectively. Issuing a reload request on a unit will automatically also enqueue a reload request on all units that the reload request shall be propagated to via these two settings.
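Putting the recommendations above together, my.service could end up looking something like this. It is only a sketch: it keeps the paths from the question, assumes /usr/bin/start.sh is the long-running main process (the original unit only had ExecStartPre=, which never defines one), and uses the weak Wants= plus PartOf= for the restart propagation:

[Unit]
Description=test
Wants=docker.service management.service
After=docker.service management.service
PartOf=docker.service management.service

[Service]
ExecStart=/usr/bin/start.sh
ExecStop=/usr/bin/stop.sh
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target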
I can't wrap my head around systemd unit files. Here's my scenario: I have a service called my.service. my.service needs to start sometime after boot, whenever everything else is ready, no rush. my.service starts a docker container, so I want to start docker.service before starting my.service (it is possible that docker.service is disabled on my system). Whenever docker.service and/or management.service are restarted, I'd like my.service to be restarted as well (after docker and management). my.service needs to be started after management.service. Now I'm so confused between Requires=, After=, Wants=, BindsTo=, ReloadPropagatedFrom= etc... I've been using a combination of those but it doesn't seem to start docker.service nor my.service:

[Unit]
Description=test
Requires=management.service
After=multi-user.target
Wants=docker.service management.service multi-user.target
BindsTo=docker.service management.service
ReloadPropagatedFrom=docker.service

[Service]
ExecStartPre=/usr/bin/start.sh
ExecStop=/usr/bin/stop.sh
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

What am I doing wrong?
systemd nightmare - ordering my service so it starts at boot and restarts when needed
I'd suggest you run two actions: one upon login (for mounting), the other one upon logout (unmounting). A sample service file:

[Unit]
Description=mount with sshfs

[Service]
Type=simple
ExecStart=/path/to/login_script.sh
# This makes the service stay active while logged in and
# makes sure ExecStop is only executed at logout
RemainAfterExit=yes
ExecStop=/path/to/logout_script.sh

[Install]
# opens with first login and ends with last logout - multiple ssh sessions are OK
WantedBy=default.target

Then write the scripts for login and logout. I'll just make a check by SSID to see if we're at home or not. See what works out for you.

#!/bin/bash
# login script
if [[ "$( nmcli | grep 'home_network_name' )" != "" ]] ; then
    # we're at home
    sshfs <mount with home options>
else
    # we're out
    sshfs <mount with outside options>
fi

And after logging out, you want to unmount:

#!/bin/bash
# logout script
fusermount -u /path/to/mount/point

To activate the service put it in e.g. ~/.config/systemd/user/autosshfs.service and run systemctl --user enable autosshfs.service. It should work on next login.
I have the following problem: I have my network drive in my home network that I would like to mount via sshfs. Being in my local network I don't have to care that much about encryption and could use the arcfour cipher for example. My ssh port internally is A. For technical reasons I can connect from the outside network via ssh on the port B which is not the same as A. I am also not able to connect from the outside to that port while I am in my home network. Now, mounting from the outside, I would of course rely on other encryption. I would like to build a systemd-service which handles that mount situation reasonably well, using arcfour on port A iff I am in network with ID X and connecting via port B with another cipher in all other cases. I'm still somewhat new to writing my own services and could not find the right condition that would work here. Could someone help me out with this?
Define systemd service conditional to network ID
With your configuration, sshd.service will certainly start only after zerotier-one.service starts. But that is not enough. The sshd.service would need to wait until Zerotier has actually connected successfully, which can happen quite a bit later (in computer timescales, at least). And the current zerotier-one.service is not even trying to provide that information to systemd: [Unit] Description=ZeroTier One After=network-online.target network.target Wants=network-online.target[Service] ExecStart=/usr/sbin/zerotier-one Restart=always KillMode=process[Install] WantedBy=multi-user.targetYou would probably have to create a Type=oneshot service (it could be called zerotier-wait-online.service) that would run a script that includes a loop that calls e.g. zerotier-cli listnetworks or just ip addr show and looks for the IP address 192.168.10.10. If it is not available, the script would sleep a few seconds and try again. When the script would see the address has appeared, the script would exit - and that would tell systemd that any service configured to run After=zerotier-wait-online.service can now proceed. (Unlike the default Type=simple and several other service types, services of Type=oneshot are only considered "started" after their main ExecStart process has successfully exited - and that's exactly what you need. Once you have that service working, you can change your sshd.service override to After=zerotier-wait-online.service, and then it should work as you wanted. Note that you cannot simply require that zerotier-wait-online.service runs Before=network-online.target, because zerotier-one.service itself runs After=network-online.target. Trying to set up such a requirement would create an impossible situation.The root of the problem is that the use of ListenAddress brings with it the requirement that the specified address must already be up when sshd starts. If you need sshd to listen in the Zerotier IP address only, but don't specifically have to use ListenAddress to implement it, you could use alternative ways to implement the restriction. In /etc/ssh/sshd_config, you could add a Match block like this, to deny access on any local IP address except the Zerotier one: Match LocalAddress *,!192.168.10.10 DenyUsers *Or you could use iptables to drop/reject incoming connections if the destination address is anything except 192.168.10.10: iptables -I INPUT 1 -p tcp --dport 22 \! -d 192.168.10.10/32 -j DROPDROP makes blocked connection attempts hang until they time out; if you want the blocked connections to fail quickly, use a rule like this instead: iptables -I INPUT 1 -p tcp --dport 22 \! -d 192.168.10.10/32 -j REJECT --reject-with tcp-resetIf you use ufw or some other firewall management system, there is probably a way to configure an equivalent rule to it.
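As a rough illustration of that wait-online idea (the names and the timeout are made up; the polling simply looks for the address from the question), the script could be something like /usr/local/sbin/zerotier-wait-online.sh:

#!/bin/sh
# poll until the ZeroTier address exists, then exit successfully
while ! ip -o addr show | grep -q '192\.168\.10\.10'; do
    sleep 2
done

with a matching oneshot unit, zerotier-wait-online.service:

[Unit]
Description=Wait for the ZeroTier address to appear
After=zerotier-one.service
Requires=zerotier-one.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/zerotier-wait-online.sh
TimeoutStartSec=120

[Install]
WantedBy=multi-user.target

With that unit enabled (or pulled in via an extra Wants= line in the sshd override), After=zerotier-wait-online.service in sshd.service should give the ordering you were after.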
OS: Debian 11 Bullseye Context:The Zerotier application adds the zerotier-one.service system service and creates a virtual network interface (when it works). The sshd server default listens to all addresses 0.0.0.0Until then, everything is fine with me Now I am introducing custom config in /etc/ssh/sshd_config.d/my-sshd.conf add ListenAddress 192.168.10.10 that my sshd server accepts calls only at the Zerotier interface address. Now I suspect that sshd.service starts before zerotier-one.service because after restarting the computer: $ sudo systemctl status sshd.service ● ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Thu 2023-09-14 17:21:27 CEST; 28s ago Docs: man:sshd(8) man:sshd_config(5) Process: 524 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS) Process: 551 ExecStart=/usr/sbin/sshd -D $SSHD_OPTS (code=exited, status=255/EXCEPTION) Main PID: 551 (code=exited, status=255/EXCEPTION) CPU: 21mssystemd[1]: Starting OpenBSD Secure Shell server... sshd[551]: error: Bind to port 22 on 192.168.10.10 failed: Cannot assign requested address. sshd[551]: fatal: Cannot bind any address. systemd[1]: ssh.service: Main process exited, code=exited, status=255/EXCEPTION systemd[1]: ssh.service: Failed with result 'exit-code'. systemd[1]: Failed to start OpenBSD Secure Shell serverSo I added the After= option to /etc/systemd/system/ssh.service.d/override.conf changing using the command sudo systemctl edit sshd.service: [Unit] After=network.target auditd.serviceto: [Unit] After=network.target auditd.service network-online.target zerotier-one.serviceIt looks like this now: $ sudo systemctl cat sshd.service # /lib/systemd/system/ssh.service [Unit] Description=OpenBSD Secure Shell server Documentation=man:sshd(8) man:sshd_config(5) After=network.target auditd.service ConditionPathExists=!/etc/ssh/sshd_not_to_be_run[Service] EnvironmentFile=-/etc/default/ssh ExecStartPre=/usr/sbin/sshd -t ExecStart=/usr/sbin/sshd -D $SSHD_OPTS ExecReload=/usr/sbin/sshd -t ExecReload=/bin/kill -HUP $MAINPID KillMode=process Restart=on-failure RestartPreventExitStatus=255 Type=notify RuntimeDirectory=sshd RuntimeDirectoryMode=0755[Install] WantedBy=multi-user.target Alias=sshd.service# /etc/systemd/system/ssh.service.d/override.conf [Unit] After=network.target auditd.service network-online.target zerotier-one.serviceBut after restarting the computer, the error still occurs When I do a sudo systemctl restart sshd.service now I get: $ sudo systemctl status sshd.service ● ssh.service - OpenBSD Secure Shell server Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/ssh.service.d └─override.conf Active: active (running) since Thu 2023-09-14 17:40:43 CEST; 2s ago Docs: man:sshd(8) man:sshd_config(5) Process: 3065 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS) Main PID: 3066 (sshd) Tasks: 1 (limit: 9423) Memory: 1.0M CPU: 21ms CGroup: /system.slice/ssh.service └─3066 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startupssystemd[1]: Starting OpenBSD Secure Shell server... sshd[3066]: Server listening on 192.168.10.10 port 22. systemd[1]: Started OpenBSD Secure Shell server.I have the impression that the sshd.service is still starting before zerotier-one.service Is something missing or can it be checked differently? Should I do something else in addition to adding zerotier-one.service to After=? 
EDIT (Information for other users): In addition to the solution proposed by @telkoM (for which I thank you), another trick solved the problem in my case: Just add the directive ExecStartPost=sleep 10 to zerotier-one.service or ExecStartPre=sleep 10 to sshd.service
Reorder of launching Systemd services
You could use the Linux kernel support for miscellaneous binary formats (binfmt_misc). This allows you to register an interpreter (e.g. Java) to execute a file based on the first few bytes in the file (e.g. a jar file). See https://www.kernel.org/doc/Documentation/admin-guide/binfmt-misc.rst for more information.
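As a rough sketch of what that could look like for jar files (the rule name and wrapper path are made up; the registration line follows the format documented in the kernel's binfmt-misc/Java docs):

# register an extension-based rule (as root); matches files ending in .jar
echo ':jarexec:E::jar::/usr/local/bin/jarwrapper:' > /proc/sys/fs/binfmt_misc/register

where /usr/local/bin/jarwrapper is a tiny script along the lines of:

#!/bin/sh
# binfmt_misc invokes the interpreter with the jar's path as the first argument
exec /usr/bin/java -jar "$@"

After chmod +x on both the wrapper and the jar, an ExecStart= pointing straight at the .jar can work. Note that the echo above does not survive a reboot; systemd can re-register it at boot from a file dropped into /etc/binfmt.d/ (see binfmt.d(5)).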
I'm struggling to make a Spring Boot app run as a service at the moment. The biggest issue is that Devops doesn't allow us to make changes to the Ansible scripts that deploy the artifact and create the service (sample shown below).

[Unit]
Description=A Spring Boot application
After=syslog.target

[Service]
User=rating-gateway
ExecStart=/opt/rating-gateway/rating-gateway-0.0.1-SNAPSHOT.jar
SuccessExitStatus=143
Restart=always
RestartSec=5

Technically, if I were to add java -jar to the ExecStart it would run correctly, but as we cannot edit the Ansible scripts I need to find a workaround. I've read a few guides where the service does not use the java -jar instruction, but I'm not sure what would be missing for this to run correctly. I've added java to the PATH as I thought that would help me. But it didn't.

PATH=/home/rating-gateway/.local/bin:/home/rating-gateway/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/bin:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.275.b01-1.el8_3.x86_64/bin

Any other ideas to work around this issue? Thanks in advance.

PS. I'm a Dev, but got root access to the server, so any help is pretty welcome.

(edit: extra info) When I don't have java -jar on the ExecStart command, I get the following error in /var/log/messages:

Mar 30 08:44:44 systemd[134389]: rating-gateway.service: Failed at step EXEC spawning /opt/rating-gateway/rating-gateway-0.0.1-SNAPSHOT.jar: Exec format error

But I've already confirmed the architecture of both the platform where the jar is built and the server it is deployed to. Both are x86_64.
Java systemd service without specifying java -jar
systemctl enable <servicename>@<instancename>
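For the unit in the question that would look like the following (assuming the template file is installed as queue-consumer@.service; the name is a guess based on the script it runs, and any instance name will do):

systemctl enable queue-consumer@1.service        # start instance "1" at boot
systemctl enable --now queue-consumer@1.service  # same, but also start it immediately

Additional instances (queue-consumer@2.service, and so on) can still be started on demand with systemctl start as the queue grows.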
I have a service that processes items from a RabbitMQ queue, of which I spool additional instances as the queue grows in size. How can I enable systemd to start a single instance of the unit at startup? Here's my unit file:

[Unit]
Description=A service (%i) to consume items from a queue
After=network.target

[Service]
Type=simple
User=root
ExecStart=/usr/local/bin/queue-consumer.sh start %i

[Install]
WantedBy=multi-user.target
How to enable systemd unit to run at startup
I would agree with @larsks's comment in that DynamicUser=yes doesn't make much sense for a user unit. Obviously, you wouldn't be able to create and switch users. And if your unit needs to be a user unit, then you wouldn't want this anyway. So why do you want to add DynamicUser= to a --user unit? An obvious answer could be "Because I've heard it's good for security". In that case, consider what DynamicUser= does and choose the parts that make sense. Here are two resources to help you decide that stuff:

Feature explanation from the devs: http://0pointer.net/blog/dynamic-users-with-systemd.html
man systemd.exec: https://www.freedesktop.org/software/systemd/man/systemd.exec.html

DynamicUser= enables a lot of stuff that might make sense for the --user bus. You could consider turning these on instead (a sketch combining them follows below):

RemoveIPC=yes. Careful, because when the unit stops, all IPC belonging to that user/group will be destroyed. That might be a good thing, unless you have other services running on that --user bus.

NoNewPrivileges=yes and RestrictSUIDSGID=yes prevent any scripts from taking advantage of password-less sudo configurations or capabilities. Note that when running in user mode or in system mode without User=, setting RestrictSUIDSGID=yes will imply NoNewPrivileges=yes.

ProtectSystem=strict and ProtectHome=read-only will prevent the service from writing to arbitrary file system locations. If you want the service to be able to access something specific, specify those paths in ReadWritePaths=. Or create temporary paths for this sort of thing with the next few options. Note that ProtectHome= is only available to the user bus when unprivileged user namespaces are available.

PrivateTmp=yes: A private /tmp is created that other services cannot write to. The temporary files are also cleaned up automatically. This is only available to the user bus when unprivileged user namespaces are available.

RuntimeDirectory=: Creates a writable runtime directory which is owned by the user/group and removed automatically when the unit is terminated.

StateDirectory=, CacheDirectory= and LogsDirectory= assign writable directories for these specific purposes.
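As a sketch of how those could be combined in a user unit (the service name and script path are invented, and the sandboxing options assume unprivileged user namespaces are available, as noted above):

[Unit]
Description=Example hardened user service

[Service]
ExecStart=%h/bin/my-script.sh
NoNewPrivileges=yes
RestrictSUIDSGID=yes
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=yes
RuntimeDirectory=my-script
StateDirectory=my-script

[Install]
WantedBy=default.target

Drop whichever option your script genuinely needs to violate, e.g. relax ProtectHome= if it has to write into your home directory.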
For systemd's system units (the units you operate with systemctl --system (default)), it's possible to specify DynamicUser=yes to make systemd dynamically allocate a user and group for the service to achieve some sense of sandboxing. However while reading the manual I was not able to find any mention of if and how it works with user units (systemctl --user). So my questions are:Can DynamicUser be used in user units? If so, how exactly is it handled (i.e. what are the differences/quirks compared to using it in system units)?Documentation from official/reputable sources is desirable, but I can understand if there is none. Thanks in advance.
systemd - does the `DynamicUser` option work with user units and if so, how?
On Arch, at least, systemd mounts generated from /etc/fstab are deployed to /run/systemd/generator. For example on my system, with the listing below, I can add to my service file:

[Unit]
Description=backup logging to temp
After=mnt-ram.mount

ls -la /run/systemd/generator:

> ls -la
total 32
-rw-r--r--  1 root root 362 Jun 20 17:01 -.mount
drwxr-xr-x  5 root root 260 Jun 20 17:01 .
drwxr-xr-x 22 root root 580 Jun 21 04:40 ..
-rw-r--r--  1 root root 516 Jun 20 17:01 boot.mount
drwxr-xr-x  2 root root 120 Jun 20 17:01 local-fs.target.requires
drwxr-xr-x  2 root root  80 Jun 20 17:01 local-fs.target.wants
-rw-r--r--  1 root root 168 Jun 20 17:01 mnt-3T.automount
-rw-r--r--  1 root root 515 Jun 20 17:01 mnt-3T.mount
-rw-r--r--  1 root root 168 Jun 20 17:01 mnt-4T.automount
-rw-r--r--  1 root root 515 Jun 20 17:01 mnt-4T.mount
-rw-r--r--  1 root root 260 Jun 20 17:01 mnt-ram.mount
-rw-r--r--  1 root root 349 Jun 20 17:01 mnt-sda.mount
drwxr-xr-x  2 root root  80 Jun 20 17:01 remote-fs.target.requires
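Two other ways to find the unit name without browsing the generator directory (using the mount point from the question):

systemctl list-units --type=mount          # shows mnt-ram.mount among the active mount units
systemd-escape -p --suffix=mount /mnt/ram  # prints the unit name a mount point maps to: mnt-ram.mount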
I have an inotify-based service that backs up my LAN's git directory to the Dropbox. I tried keeping the git directory in the Dropbox but I have multiple git clients so often get error files there. In this early stage of development, this is a fairly busy and chatty system service that wants to log to a ram drive. I don't want to use /tmp because other applications depend on having space there. To create the ram drive, in my fstab I have this:

tmpfs /mnt/ram tmpfs nodev,nosuid,noexec,nodiratime,size=1024M 0 0

I need to be sure that the ram drive is mounted before the backup service starts. I want to put a condition on the service that delays its start. I see suggestions that people use the *.mount unit as a precondition, but I don't see any file in /lib/systemd/system that gives me the name of the unit I need. How can I identify this mount? Is there another approach?
systemd service to start after mount of ram drive
well, the error tells you what's wrong! Read man systemd.mount to learn about the unit file name requirements:Mount units must be named after the mount point directories they control. Example: the mount point /home/lennart must be configured in a unit file home-lennart.mount. For details about the escaping logic used to convert a file system path to a unit name, see systemd.unit(5). Note that mount units cannot be templated, nor is possible to add multiple names to a mount unit by creating additional symlinks to it.So your unit file must be named srv-smb.mount.
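If in doubt, systemd will compute the required name for you:

systemd-escape -p --suffix=mount /srv/smb
# prints: srv-smb.mount

so the file should live at /etc/systemd/system/srv-smb.mount.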
It is necessary to create a virtual file samba.img that will act as a device and automatically mount it when the system starts. Creating a virtual disk from a file:

fallocate -l 2G /root/img/samba.img
mkfs.ext4 /root/img/samba.img

Creating the mount point:

sudo mkdir /srv/smb

I create a mount file to run at system startup:

vim /etc/systemd/system/mnt-driveone.mount

mnt-driveone.mount contains:

[Unit]
Description=Additional drive

[Mount]
What=/root/img/samba.img
Where=/srv/smb
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target

Next, I add it to autostart:

systemctl enable mnt-driveone.mount

Run:

systemctl start mnt-driveone.mount

And I get an error:

mnt-driveone.mount: Where= setting doesn't match unit name. Refusing.

I looked it up; such an error occurs when there are incorrect paths or spaces in the paths, but my directory exists and there are no spaces.
Systemd mount unit configuration *.img file on centos
Does it mean I should not be adding .preset file in my rpm?

Yes, you shouldn't put the .preset file in your service's package RPM.

but it is not happening

It's not happening because the systemd package in RHEL ships with a default preset at /usr/lib/systemd/system-preset/90-systemd.preset. If you want to stick to distribution packaging guidelines you have two options, namely:

contact the operating system's packager about including your application's service in the default preset

more feasible, create your own package for shipping the preset. Typically you ship your preset in your own repository's "release" package. An example of this is epel-release, which installs /usr/lib/systemd/system-preset/90-epel.preset among other things.
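For the second option, the preset file your release-style package would carry is tiny; for example (file name and priority made up):

# /usr/lib/systemd/system-preset/80-myorg.preset
enable my_app.service

With that installed, the usual %systemd_post scriptlet in your application's RPM (which runs systemctl preset on first install) will enable my_app.service instead of leaving it disabled.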
OS - Red Hat Enterprise Linux 8 I've created a .spec file to build and package my application. My rpm also includes my_app.service file for systemd to start it. However, by default one has to enable this with systemctl enable my_app.service. I'd like to have it enabled after the rpm has been installed. I've googled and found that I can use systemd.preset, which says:It is not recommended to ship preset files within the respective software packages implementing the units, but rather centralize them in a distribution or spin default policy, which can be amended by administrator policy.Does it mean I should not be adding .preset file in my rpm? Also, later in the man page it says:If no preset files exist, systemctl preset will enable all units that are installed by default.If I read it correctly, then my application's service file should be automatically enabled, but it is not happening, or this implies the manual systemctl preset ?
automatically activate service after RPM was installed
ConditionPathIsEncrypted= only exists in versions v246-rc1 and newer. If you want to see which conditions the version you are using supports, I would suggest you take a look at the 'systemd.unit' manpage:

man systemd.unit

There is a section on 'Conditions and Asserts' - the systemd version shipping with Ubuntu 20.04, for example, is v245 and thus is missing the ConditionPathIsEncrypted= condition.
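To see which version you are actually running (and therefore which manpage to read):

systemctl --version
# the first line shows the running version, e.g. "systemd 245 (...)" on Ubuntu 20.04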
I've created a unit file to mount the /srv partition automatically. It will check first if /dev/mapper/srv exists and then start it. I'd like to take it one step further and only let it be able to start if /dev/mapper/srv is a LUKS encrypted block device, with the ConditionPathIsEncrypted option. But I get the warning: /etc/systemd/system/srv.mount:4: Unknown lvalue 'ConditionPathIsEncrypted' in section 'Unit' I tried giving it a boolean value, that also didn't work. Putting it in the [Mount] category also didn't solve it. [Unit] Description=srv mount ConditionPathExists=/dev/mapper/srv #ConditionPathIsEncrypted=/dev/mapper/srv[Mount] What=/dev/mapper/srv Where=/srv Type=ext4 Options=defaults[Install] WantedBy=multi-user.targetWhat am I doing wrong?
ConditionPathIsEncrypted not supported?
while it is running, the service remains "activating".That would indicate you're using the wrong Type= for your service file. See man systemd.unit and systemd.service for a pretty detailed discussion on when what service is called started. The text is too long to reasonably copy & paste in here, but from the Type= description in man systemd.service:Type= Configures the mechanism via which the service notifies the manager that the service start-up has finished. One of simple, exec, forking, oneshot, dbus, notify, notify-reload, or idle:If set to simple (the default if ExecStart= is specified but neither Type= nor BusName= are), the service manager will consider the unit started immediately after the main service process has been forked off (i.e. immediately after fork(), and before various process attributes have been configured and in particular before the new process has called execve() to invoke the actual service binary). Typically, Type=exec is the better choice, see below. The exec type is similar to simple, but the service manager will consider the unit started immediately after the main service binary has been executed.[…] If set to forking, the manager will consider the unit started immediately after the binary that forked off by the manager exits. The use of this type is discouraged, use notify, notify-reload, or dbus instead […] Behavior of oneshot is similar to simple; however, the service manager will consider the unit up after the main process exits. […][… and the bus-/notification-based schemes…]So, there's different points in time when your service goes from "activating" to "active", depending on how it's set up. "activating" however always starts at the same point: the moment systemd begins to do whatever is specified in service file.
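To make the "I'm up!" case concrete: with Type=notify the unit stays in "activating" until the service sends READY=1 on the notification socket. A minimal sketch follows (the script path is invented; real daemons would call sd_notify(3) directly, since running the systemd-notify helper from shell scripts can be racy):

[Service]
Type=notify
NotifyAccess=all
ExecStart=/usr/local/bin/slow-start.sh

where the script signals readiness only once its start-up work is done:

#!/bin/sh
# ... perform the slow initialisation here ...
systemd-notify --ready          # the unit flips from "activating" to "active" here
exec my-daemon --foreground     # placeholder for the real long-running process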
According to various clauses in the docs, the "activating" state is the transition between inactive states and an active state. So far so obvious. But how exactly is it defined? What determines whether a service is no longer inactive but activating? What determines whether a service is no longer activating but active? The only thing I know of that plays into the "activating" state is the ExecStartPre script; while it is running, the service remains "activating". Are there other means to cause a service to remain in an activating state? Could you have the service start the primary ExecStart but only consider the service active once the executable causes some specific "I'm up!" event for instance?
What exactly does it mean for a systemd service to be "activating"?
CacheDirectoryMode=644

This allows reading the directory listing, but not interacting with the files inside it: the eXecute bit is required to further traverse the path and access files within the directory. This also makes the write access for user monitor useless. Change this parameter into:

CacheDirectoryMode=755

which is the default (i.e. you can remove this parameter instead). This now allows accessing files within this directory, for reading or for writing, for user monitor. The behavior is linked from systemd's documentation (RuntimeDirectoryMode=, StateDirectoryMode=, CacheDirectoryMode=, LogsDirectoryMode=, ConfigurationDirectoryMode=) to the manual for path_resolution(7), which includes all the details about basic Unix access, especially in Step 2: walk along the path and in the Permission paragraphs:

If the process does not have search permission on the current lookup directory, an EACCES error is returned ("Permission denied").

Of the three bits used, the first bit determines read permission, the second write permission, and the last execute permission in case of ordinary files, or search permission in case of directories.
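The underlying rule is easy to reproduce by hand as an unprivileged user, with nothing systemd-specific involved (throwaway paths):

mkdir demo && echo data > demo/f
chmod 644 demo    # read/write on the directory, but no execute (search) bit
ls demo           # enumerating the names still works thanks to the read bit
cat demo/f        # fails with "Permission denied": no search bit to traverse demo/
chmod 755 demo    # restore the search bit
cat demo/f        # works again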
I have a Golang binary that runs every 5 mins. It is supposed to create & update a text file which needs to be write-restricted. To run this binary I created a systemd service and a systemd timer unit. The systemd service uses a DynamicUser. To achieve the access restriction I use the CacheDirectory directive in systemd so that only the DynamicUser can write that file and it only exists as long as the user exists. I also set CacheDirectoryMode=644 to allow only the owner to have write permissions. When the systemd service runs, it is failing with:

failed to read output file: lstat /var/cache/monitor/output_file.txt: permission denied

Question: Although the service unit will create a dynamic user & run an executable that creates/updates/reads the file, why does that executable itself get Permission Denied when trying to read the file when the systemd service runs?

file-monitor.go, compiled to produce the /usr/local/bin/file-monitor binary:

package main

import (
	"fmt"
	"os"
)

func foo() error {
	var outputFile = os.Getenv("CACHE_DIRECTORY") + "/output_file.txt"
	outputFileBytes, err := os.ReadFile(outputFile)
	if err != nil {
		return fmt.Errorf("failed to read output file %s: %v\n", outputFile, err)
	}
	_ = outputFileBytes // the contents would be used here
	return nil
}

func main() {
	if err := foo(); err != nil {
		fmt.Println(err)
	}
}

file-updater.service:

[Unit]
Description="description"
After=file-updater.service

[Service]
DynamicUser=yes
User=monitor
Group=monitor

CacheDirectory=monitor
CacheDirectoryMode=644

ExecStart=/usr/local/bin/file-monitor <arg1>

Type=oneshot

[Install]
WantedBy=multi-user.target
Systemd executable failed to read file from CacheDirectory with Permission Denied
Looking at the documentation, there appear to be several options available. The simplest may be the PropagatesStopTo= option:

PropagatesStopTo=, StopPropagatedFrom= A space-separated list of one or more units to which stop requests from this unit shall be propagated to, or units from which stop requests shall be propagated to this unit, respectively. Issuing a stop request on a unit will automatically also enqueue stop requests on all units that are linked to it using these two settings.

If I set up service1.service like this:

[Unit]
Wants=service2.service service3.service
After=service2.service service3.service
PropagatesStopTo=service2.service service3.service

[Service]
Type=exec
ExecStart=...

Then systemctl start service1 brings up service2 and service3, while systemctl stop service1 also brings down service2 and service3. You could also do something with PartOf or BindsTo but that would require changes in the dependent services (service2 and service3).
I have these systemd services:

service1.service
service2.service
service3.service

service1.service looks like:

[Unit]
Wants=service2.service service3.service
After=service2.service service3.service

[Service]
ExecStart=/var/scripts/script.sh

[Install]
WantedBy=multi-user.target

This service1.service does what it should - it brings up services 2 and 3 before ExecStart, which is 50% of what I need. The other 50% is to bring service2 and service3 down when service1 goes down, whether via systemctl stop service1 or any other termination, like SIGKILL. How should I configure service2/service3 to accomplish this task?
Bring systemd services up/down along with specific systemd service?
I may not have understood what you need, but perhaps you can do something simple like adding a failing command after the foo command. This second command would not be run by systemctl stop. For example, replace the ExecStart with

ExecStart=/bin/bash -c '/opt/foo/foo -stayresident && exit 7'

The choice of 7 is just so we can see it more clearly in the status. If foo is killed by a signal, or dies of its own accord, the shell exits with a status systemd treats as a failure (7 if foo exited cleanly, or foo's own failure status otherwise). If a systemctl stop is done, the shell is killed, and the status is inactive (dead). Making the exit 7 only occur if the original command is successful allows any failure signals foo naturally emits to be kept. Note, to avoid killall foo also matching the bash command, you can use a subterfuge like bash -c '/opt/"f"oo/"f"oo ...'.
I'm wrapping a 3rd party executable in a systemd service unit to manage it. I can't alter the behavior of this program and I don't really trust its exit codes. I would like to treat any exit that was not caused by systemd as a failure, that includes exit code 0 or an outside SIGTERM, so I can detect the difference through systemd's interfaces. Currently my unit looks something like:

[Unit]
Description=Foo service
Requires=bar.service
After=bar.service

[Service]
ExecStart=/opt/foo/foo -stayresident
KillMode=control-group
Restart=no

If I kill the service process manually, I get "inactive" when checking the state with systemctl:

killall foo && systemctl status foo.service

If I upgrade that kill to -9, I get "failed":

killall -9 foo && systemctl status foo.service

This is the behavior I'd like to expand. I know that the SuccessExitStatus= service unit setting can be used to count non-zero exits and other failure types to be considered as success, but I don't see anything that would do the opposite.
Make systemd treat unexpected exit as failure
Since the service has Type=forking, the ExecStart process had PID 4601 and the exit code you're asking about is listed with main PID 4747, we can conclude that systemd managed to fork() a child process which then successfully execve()'d the ExecStart process, and so the table of systemd-specific exit codes does not apply here. The systemd-specific table of exit codes would apply if the error was from the actual systemd child process after the fork() but before the execve(): specifically, error 202 would mean e.g. a problem in implementing the StandardInput=, StandardOutput= or StandardError= directives in the service definition. But since the ExecStart is specifically reported to have been PID 4601 and having exited with status=0/SUCCESS, that was not what happened here. The ExecStop was executed as PID 4758, so it's not from that one either. The status code 202 is from the "main process" of your application (the one that had PID 4747), and it means whatever the application developer wanted it to mean. The lingering TCP socket is not the cause: since your application process died, the kernel will have cleaned up any lingering sockets it may have had. Of course, if the application did not use the SO_REUSEADDR socket option, it might not be possible to immediately restart the application and have it use the same port number, until the lingering socket's TIME_WAIT has expired... but that's not systemd's problem; that's something the application will have to deal with on its own. The /FDS part comes from the exit_status_to_string() function in the shared/exit-status.c file in the systemd source code package. That function is supposed to add a brief hint to what the status code may mean, if the code has any standardized meaning. The function can take a parameter that determines which set(s) of status code hints to use, but when systemctl status uses the function (i.e. in file systemctl/systemctl-show.c), it (as of this writing) seems to always call it with that parameter set to EXIT_STATUS_LIBC | EXIT_STATUS_SYSTEMD, i.e. "show the status code hints according to the usage of libc and systemd itself" without checking if the status code in fact came from a process that was a member of the systemd software suite or not. The end result is that status 202 always gets /FDS appended to it, whether it's known to have the systemd-specific meaning "Failed to close unwanted file descriptors, or to adjust passed file descriptors" or not. It's just a simple table lookup: don't presume it has any more intelligence than that. (In Unix programming literature and programmer jargon, "fds" is a pretty universal shorthand for the words "file descriptors". The /FDS also suggests the symbolic name of status code 202 in systemd's code: EXIT_FDS - and since all systemd's status code symbols have the EXIT_ prefix, it's chopped off for brevity.)
I get this message from systemd status after I have stopped my service: Actice: failed (Result: exit-code) <...> Main PID: 4747 (code=exited, status=202/FDS)Status FDS is defined in the docs like this:202 EXIT_FDS Failed to close unwanted file descriptors, or to adjust passed file descriptors.Starting the service works fine, no errors are reported by systemd status QuestionsWhat does EXIT_FDS mean in more practical detail? Is the status code from my application, or from systemd itself? My application opens a TCP socket, which it doesn't close when stopped. Is that the reason? If so, can I make systemd ignore the lingering socket and not report it as an error?Details The full status message: tool-user@tool-box:~$ systemctl status tool.service ● tool.service - Tool application Loaded: loaded (/home/tool-user/tool.service; linked; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2022-02-07 14:14:46 CET; 3s ago Process: 4758 ExecStop=/bin/bash -c tool-stop && while ps -p $MAINPID >/dev/null Process: 4601 ExecStart=/bin/bash -c tool-start (code=exited, status=0/SUCCESS) Main PID: 4747 (code=exited, status=202/FDS)Feb 07 14:14:31 tool-box systemd[1]: Starting Tool application... Feb 07 14:14:32 tool-box bash[4601]: Server started on port 44680 Feb 07 14:14:32 tool-box systemd[1]: Started Tool application. Feb 07 14:14:44 tool-box systemd[1]: Stopping Tool application... Feb 07 14:14:45 tool-box systemd[1]: tool.service: Main process exited, code=exited, status=202/FDS Feb 07 14:14:46 tool-box systemd[1]: tool.service: Failed with result 'exit-code'. Feb 07 14:14:46 tool-box systemd[1]: Stopped Tool application.The service definition file looks like this: [Unit] Description=Tool application # Standard dependencies for web server After=network.target remote-fs.target nss-lookup.target httpd-init.service[Service] Type=forking Restart=on-failure RestartSec=10 ExecStart=/bin/bash -c 'toolStart' ExecStop=/bin/bash -c 'toolStop && while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done' User=tool-user StandardOutput=syslog StandardError=syslog TimeoutStopSec=60[Install] WantedBy=multi-user.targetOS: Ubuntu 18.04 Server, run in VirtualBox on Windows 10. tool-user@tool-box:~$ uname -a Linux tool-box 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
What does systemd exit code EXIT_FDS mean?
Besides adding the recipe's name to RM_WORK_EXCLUDE in local.conf (RM_WORK_EXCLUDE += "systemd"), one should clean the shared state using one of the cleaning options provided by Yocto, for example

$ bitbake -c cleansstate recipe

before bitbaking again; otherwise, with an unflushed shared-state cache, the build will start from the current state, not from the beginning. More information on cleaning state and much more is on the yoctoproject website.
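Concretely, for the systemd recipe that would be something like this (run from the build directory with the Yocto environment sourced). In conf/local.conf:

# keep systemd's work directory (including the unpacked sources) around after the build
RM_WORK_EXCLUDE += "systemd"

and on the command line:

bitbake -c cleansstate systemd   # flush systemd's shared-state objects
bitbake systemd                  # rebuild it from scratch with your changes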
How can I customize the systemd source code? When I do bitbake, systemd appears in my end project, but I cannot find any of its source files in my local directories. I want to track down the lifecycle of unit files; the log_debug(...) and log_info(...) calls are not showing up in journalctl in my embedded project (some of the messages appear, but they give me too little information). Does Yocto pull the systemd source files, compile and then delete them? If so, how can I prevent the deletion, customize the code and then recompile?
How Yocto embeds systemd in end project?
There is a mismatch between your understanding of "successfully running" and systemd's idea. For services such as these, the "Type" is "simple", which means:

If set to simple (the default if ExecStart= is specified but neither Type= nor BusName= are), the service manager will consider the unit started immediately after the main service process has been forked off.

(My emphasis) The subtle distinction here is one of timing: systemd considers the pywal unit started as soon as %h/.bin/pywal has been initiated, while you'd like the subsequent units to wait until it "has finished execution of the start up script". As a result, the dependencies are started as soon as they're able (according to systemd), which means i3 will start very shortly after pywal starts, and pywal can sleep as much as it wants -- it has no further effect on the start time of i3. I believe the proper answer here is to have pywal be of Type=notify and to have it notify systemd when it is ready. See these for more about systemd-notify:

How can a systemd service flag that it is ready, so that other services can wait for it to be ready before they start?
"Example 7. Services that notify systemd about their initialization"

A workaround could be to modify the i3 unit with an ExecStartPre:

# ...
[Service]
ExecStartPre = /bin/sleep 10
ExecStart = /usr/bin/i3-msg restart
# ...

... to force the actual i3 executable to wait 10 seconds before running.
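If you go the Type=notify route, a rough sketch for pywal.service could look like this (assuming %h/.bin/pywal is a script you control; systemd-notify from shell scripts can be racy, which is why real daemons normally call sd_notify(3) instead):

[Service]
Type=notify
NotifyAccess=all
ExecStart=%h/.bin/pywal
RemainAfterExit=yes   # keep the unit "active" after the script finishes

with the script calling systemd-notify --ready as its last step, so that i3.service (ordered After=pywal.service) is only started once the colours are actually in place.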
So, I started breaking up my system init and creating some service files for things I want to be loaded after I login. It's the usual stuff like polybar, dunst and the rest of the things. Things do work, but I have some issues with my pywal setup. I've separated the dependencies into a separate .target called theme.target [Unit] Description = Theme dependencies BindsTo = xsession.target Wants = pywal.service Wants = i3.service Wants = polybar.service Wants = dunst.serviceWhat I want to achieve is to have i3 (and the rest of the list for that matter) to load only after pywal.service is fully running and has finished execution of the start up script. In .bin/pywal I do have a sleep of 10s pywal.service [Unit] Description = Run pywal service responsible for color schemes PartOf = theme.target[Service] ExecStart = %h/.bin/pywal[Install] WantedBy = theme.targeti3.service [Unit] Description = A tiling window manager PartOf = theme.target After = pywal.service Requires = pywal.service[Service] ExecStart = /usr/bin/i3-msg restart[Install] WantedBy = theme.targetI might be confused here with what "successfully running" actually means, but i3 service I noticed restarts immediately once I run systemctl --user restart theme.targetI'm then just watching status on all 3 services/targets and see that the i3 has restarted at the same time as pywal.service. So basically what am I missing here and why does i3 not restart 10 seconds after pywal? Edit: fixed a sentence to make more sense in this context based on the comment
After= directive of systemd unit not working as expected
There are several possibilities, all depending on the exact parameters of your situation right now. I'm going to assume Linux in the following examples where applicable, but similar functionality exists on other platforms in most cases.You might be able to get the dynamic loader to run an executable for you. Assuming cat is dynamically-linked, your platform's equivalent of /lib/ld-linux.so.2 will likely also be in memory and thus usable to run a binary: $ /lib64/ld-linux-x86-64.so.2 ./chmod chmod: missing operandYou may have multiple of these (32- and 64-bit are likely) and there may be multiple copies available, or symlinks that need resolving. One of those may work. If you have a mounted vfat or NTFS filesystem, or another that treats all files as 777, you can create your executable on there. $ cat > /mnt/windows/chmod < /dev/tcp/localhost/9999If you have a mounted network filesystem, even if it's not locally writable, you can create files on the remote system and use those normally. If there's a mounted partition you don't care about the contents of, on a drive that is still mostly working, you can replace the contents with a new image of the same filesystem type containing executables you want - cat should be fine for this in the role people usually use dd for, and you can provide the image over the network. $ cat > /dev/sdb1 < ...This one is plausible, but has a lot of places not to work depending on what exactly is still in memory from that partition. If there is any accessible file that has execute permission on any writable filesystem, you can cat > into it to replace the contents with a binary of your choosing. $ cat > ~/test.py < ...Since Bash is still running, you could dynamically load a Bash plugin into the process that exposes chmod. In particular, you could install and load ctypes.sh, which provides a foreign function interface to Bash, and then dlcall chmod ./netcat 511. You could bring in a dynamic library file foo.so of your construction and then have cat load it on your behalf by way of LD_PRELOAD, allowing you to execute arbitrary code. $ LD_PRELOAD=./hack.so cat /dev/nullIf you intercept, for example, open: int open(const char *path, int flags, ...) { chmod(path, 0755); return -1; }then you can do whatever you need to do in there.My suggestion would be to bring in a statically-linked busybox executable as the first item (or really, only item) so that you've got the full range of commands available without reusing whatever hack got you to that point to exhaustion.
root@system:~# less myfile -bash: /bin/less: Input/output errorThe root filesystem is dead. But my cat is still alive (in my memory): root@system:~# cat > /tmp/somefile C^d root@system:~#He's kind of lonely though, all his friends are gone: root@system:~# mount -bash: /bin/mount: Input/output error root@system:~# dmesg -bash: /bin/dmesg: Input/output error root@system:~# less -bash: /bin/less: Input/output error root@system:~# chmod -bash: /bin/chmod: Input/output errorThe system is still running, and fulfilling its purpose. I know, I know, the only sane response to this is to get the system down and replace the root drive. Unfortunately that's not a option as it would cost a lot of time and money. Also, it would kill my cat, and that would make me sad. I've thought of bringing him his usual friends from a donor. I dare not try to scp them in, in case ssh tries to load it and cuts the line (the binary is gone anyway). This sounds like a job for my cat's cousin: root@system:~# netcat -l 1234 > /tmp/less -bash: netcat: command not foundUnfortunately he's long gone. Now, I can try to trick my cat to perform a ritual to resurrect him: cat > netcat < /dev/tcp/localhost/9999And that sort of worked. He's almost alive: root@system:/tmp# /tmp/netcat -bash: /tmp/netcat: Permission deniedHe just needs a tiny spark of life. That little +x magic incantation that I cannot recite at the moment. Can you assist me bringing my cat's friends back?
Change permissions of a file with my cat's help
Boot another clean OS, mount the file system and fix the permissions. As your broken file system lives in a VM, you should have your host system available and working. Mount your broken file system there and fix it. In the case of QEMU/KVM you can, for example, mount the file system using nbd.
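A rough sketch of the nbd route on the host, assuming a qcow2 image named vm.qcow2 whose root filesystem is the first partition (both the file name and the partition number are assumptions here):

sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 vm.qcow2
sudo mount /dev/nbd0p1 /mnt
sudo chmod -R 755 /mnt/bin     # setuid binaries such as su need their special bits restored separately
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0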
I accidentally ran chmod -R 000 /bin, and now I am unable to chmod it back or use any of my other system programs. Luckily this is on a VM I've been toying with, but is there any way to resolve this? The system is Ubuntu Server 12.10.

I have attempted to restart into recovery mode; unfortunately I am now unable to boot into the system at all, because the permissions prevent some programs from running after init-bottom - the system just hangs. This is what I see:

Begin: Running /scripts/init-bottom ... done
[ 37.062059] init: Failed to spawn friendly-recovery pre-start process: unable to execute: Permission denied
[ 37.084744] init: Failed to spawn friendly-recovery post-stop process: unable to execute: Permission denied
[ 37.101333] init: plymouth main process (220) killed by ABRT signal

After this the computer hangs.
How to recover from a chmod -R 000 /bin?
By default, BusyBox doesn't do anything special regarding the applets that it has built in (the commands listed with busybox --help). However, if the FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS options are enabled at compile time, then when BusyBox sh¹ executes a command which is a known applet name, it doesn't do the normal PATH lookup, but instead runs its built-in applets through a shortcut:

Applets that are declared as “noexec” in the source code are executed as function calls in a forked process. As of BusyBox 1.22, the following applets are noexec: chgrp, chmod, chown, cksum, cp, cut, dd, dos2unix, env, fold, hd, head, hexdump, ln, ls, md5sum, mkfifo, mknod, sha1sum, sha256sum, sha3sum, sha512sum, sort, tac, unix2dos.

Applets that are declared as “nofork” in the source code are executed as function calls in the same process. As of BusyBox 1.22, the following applets are nofork: [[, [, basename, cat, dirname, echo, false, fsync, length, logname, mkdir, printenv, printf, pwd, rm, rmdir, seq, sync, test, true, usleep, whoami, yes.

Other applets are really executed (with fork and execve), but instead of doing a PATH lookup, BusyBox executes /proc/self/exe, if available (which is normally the case on Linux), and a path defined at compile time otherwise.

This is documented in a bit more detail in docs/nofork_noexec.txt. The applet declarations are in include/applets.src.h in the source code.

Most default configurations turn these features off, so that BusyBox executes external commands like any other shell. Debian turns these features on in both its busybox and busybox-static packages. So if you have a BusyBox executable compiled with FEATURE_SH_STANDALONE and FEATURE_PREFER_APPLETS, then you can execute all BusyBox commands from a BusyBox shell even if the executable is deleted (except for the applets that are not listed above, if /proc/self/exe is not available).

¹ There are actually two implementations of "sh" in BusyBox — ash and hush — but they behave the same way in this respect.
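One low-risk way to see this behaviour for yourself, sketched under the assumption of a Debian-style busybox-static build (which enables the options above) and a throwaway directory, so nothing important is ever at risk:

mkdir /tmp/bbtest && cp /bin/busybox /tmp/bbtest/ && cd /tmp/bbtest
./busybox sh        # start a shell from the copy
# inside that shell:
rm busybox          # delete the very binary the shell was started from
ls                  # noexec applet: still works, run as a function call in a fork
cat /etc/hostname   # nofork applet: still works, run in the same process
mount               # other applets re-exec /proc/self/exe, which Linux keeps usable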
I was reading the famous Unix Recovery Legend, and it occurred to me to wonder: If I had a BusyBox shell open, and the BusyBox binary were itself deleted, would I still be able to use all the commands included in the BusyBox binary? Clearly I wouldn't be able to use the BB version of those commands from another running shell such as bash, since the BusyBox file itself would be unavailable for bash to open and run. But from within the running instance of BusyBox, it appears to me there could be two methods by which BB would run a command:It could fork and exec a new instance of BusyBox, calling it using the appropriate name—and reading the BusyBox file from disk to do so. It could fork and perform some internal logic to run the specified command (for example, by running it as a function call).If (1) is the way BusyBox works, I would expect that certain BusyBox-provided commands would become unavailable from within a running instance of BB after the BB binary were deleted. If (2) is how it works, BusyBox could be used even for recovery of a system where BB itself had been deleted—provided there were still a running instance of BusyBox accessible. Is this documented anywhere? If not, is there a way to safely test it?
Are BusyBox commands truly built in?
If you no longer have a shell running as root, you'll have to reboot into rescue media. Anything will do as long as it's capable of mounting the root filesystem read-write.

If you can still run commands as root, everything's copacetic. Set the environment variable LD_LIBRARY_PATH to point to the directories containing libraries used by the basic system tools. That's at least /usr/lib on a 32-bit Solaris, /usr/lib/64 on a 64-bit Solaris, possibly other directories (I don't have access to Solaris 10 now to check). To run an executable, prefix it with the runtime linker: /usr/lib/ld.so.1 (for a 32-bit executable) or /usr/lib/64/ld.so.1 (for a 64-bit executable); in your case these now live under /old. Thus you should be able to recover with something like:

LD_LIBRARY_PATH=/old/usr/lib
export LD_LIBRARY_PATH
/old/usr/lib/ld.so.1 /old/usr/bin/mv /old/* /
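It may be worth a dry run with something harmless before attempting the mass move back; a sketch reusing exactly the pattern above (that /old/usr/bin/ls survived the move is an assumption):

LD_LIBRARY_PATH=/old/usr/lib:/old/usr/lib/64
export LD_LIBRARY_PATH
/old/usr/lib/ld.so.1 /old/usr/bin/ls /old       # harmless check that the trick works
/old/usr/lib/ld.so.1 /old/usr/bin/mv /old/* /   # only then attempt the restore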
As the headline says, everything or almost everything important as root under root (/) was moved to /old on a Solaris 10 machine. So now the typical fault when running commands is Cannot find /usr/lib/ld.so.1 (changed $PATH and also tried changing $LD_LIBRARY_PATH, $LD_LIBRARY_PATH_64 and $LD_RUN_PATH and exporting them, but nothing of that seems to change the real library path). I spent pretty much all of yesterday trying to find something that might help, but found nothing that will actually change the library path for Solaris 10 other than maybe crle, and I can't run that since Cannot find /usr/lib/ld.so.1. I found a lot of root or /usr/bin recovery tips and so on for Linux, but that kind of information for Solaris 10/Unix is very sparse.

Can't run cp, ln, mkdir or mv since Cannot find /usr/lib/ld.so.1. I can't log in with new sessions to the machine either. Though one session is still up which can be used, and that window is being kept busy with while true; do date; echo hej 1234567; done.

We've discussed the solution to use a Solaris boot CD and also a Linux dist on a USB drive. We've discussed the solution to switch the hard drive disks to another rack. The /.../static/.../mv solution has been tested but it didn't work.

The commands that still can be used are (there might be more commands that can be used): echo, <, >, >>, |, pwd, cd.

Is there a way to create a directory or folder without mkdir? Is there any way to use echo and > or echo and >> to restore /usr/lib/ld.so.1? I know that more than /usr/lib/ld.so.1 will probably need to be restored in order for commands to work.

Thank you very much for reading and have a very nice day =)
unix - accidentally moved everything under root to /old - Solaris 10
Actually apt-get --reinstall install package should work, with files at least:

➜ ~ ls -l /usr/share/lintian/checks/version-substvars.desc
-rw-r--r-- 1 root root 2441 Jun 22 14:19 /usr/share/lintian/checks/version-substvars.desc
➜ ~ sudo chmod +x /usr/share/lintian/checks/version-substvars.desc
➜ ~ ls -l /usr/share/lintian/checks/version-substvars.desc
-rwxr-xr-x 1 root root 2441 Jun 22 14:19 /usr/share/lintian/checks/version-substvars.desc
➜ ~ sudo apt-get --reinstall install lintian
(Reading database ... 291736 files and directories currently installed.)
Preparing to unpack .../lintian_2.5.27_all.deb ...
Unpacking lintian (2.5.27) over (2.5.27) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up lintian (2.5.27) ...
➜ ~ ls -l /usr/share/lintian/checks/version-substvars.desc
-rw-r--r-- 1 root root 2441 Jun 22 14:19 /usr/share/lintian/checks/version-substvars.desc

Now, you probably didn't get all the packages that have files on your /var directory, so it's better to find them all:

➜ ~ find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | wc -l
460

In my case, that accounts for 460 paths that belong to some package. The number of packages is actually smaller, since the same package can own several paths; with some post-processing we can find out that there are ~122:

➜ ~ find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | cut -d : -f 1 | sort | uniq | wc -l
122

This of course counts several packages that own the same path, like wamerican, aspell-en, ispanish, wspanish, aspell-es, myspell-es. This is easily fixable:

➜ ~ find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | cut -d : -f 1 | sed 's/, /\n/g' | sort | uniq | wc -l
107

So, I have 107 packages that have some kind of file in /var or its subdirectories. You can reinstall them using:

sudo apt-get --reinstall install $(find /var -exec dpkg -S {} + 2> /dev/null | grep -v "no path found" | cut -d : -f 1 | sed 's/, /\n/g')

This should fix the permissions. Now, there's another option: find a good installation and copy the file permissions over your installation with:

chmod --recursive --reference good/var bad/var
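If a known-good installation is available (say, mounted read-only at /mnt/good, a hypothetical path), ownership and modes can also be copied per file rather than wholesale; a sketch that skips paths the good copy doesn't have and assumes no filenames with leading whitespace or embedded newlines:

cd /var
find . | while read -r p; do
    ref="/mnt/good/var/$p"
    [ -e "$ref" ] || continue
    chown --reference="$ref" "$p"   # copy owner and group from the good tree
    chmod --reference="$ref" "$p"   # copy the permission bits too
done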
Long story short, I destroyed /var and restored it from backup - but the backup didn't have correct permissions set, and now everything in /var is owned by root. This seems to make a few programs unhappy. I've since fixed apt failing fopen on /var/cache/man as advised here as well as apache2 failing to start (by giving ownership of /var/lib/apache2 to www-data). However, right now the only way to fix everything seems to be to manually muck around with permissions as problems arise - this seems very difficult as I would have to wait for a program to start giving problems, establish that the problem is related to permissions of some files in /var and then set them right myself. Is there an easy way to correct this? I already tried reinstalling (plain aptitude reinstall x) every package that was listed in dpkg -S /var, but that didn't work.
Fix broken permissions on /var (or any other system directory)
1 - Use a programming language that implements chmod

Ruby:

ruby -e 'require "fileutils"; FileUtils.chmod 0755, "chmod"'

Python:

python -c "import os; os.chmod('/bin/chmod', 0o755)"

Perl:

perl -e 'chmod 0755, "chmod"'

Node.js:

node -e 'require("fs").chmodSync("/bin/chmod", 0o755)'

C:

$ cat - > restore_chmod.c
#include <sys/types.h>
#include <sys/stat.h>

int main () {
    chmod( "/bin/chmod", 0000755 );
}
^D
$ cc restore_chmod.c
$ ./a.out

2 - Create another executable and fill it with chmod

By creating an executable:

$ cat - > chmod.c
int main () { }
^D
$ cc chmod.c
$ cat /bin/chmod > a.out

By copying an executable:

$ cp cat new_chmod
$ cat chmod > new_chmod

3 - Launch BusyBox (it has chmod inside)

4 - Using GNU tar

Create an archive with specific permissions and use it to restore chmod:

$ tar --mode 0755 -cf chmod.tar /bin/chmod
$ tar xvf chmod.tar

Do the same thing but on the fly, not even bothering to create the file:

tar --mode 755 -cvf - chmod | tar xvf -

Open a socket to another machine, create an archive and restore it locally:

$ tar --preserve-permissions -cf chmod.tar chmod
$ tar xvf chmod.tar

Another possibility would be to create the archive regularly and then edit it to alter the permissions.

5 - cpio

cpio allows you to manipulate archives; when you run cpio on a file, after the first 21 bytes there are three bytes that indicate the file permissions; if you edit those, you're good to go:

echo chmod | cpio -o | perl -pe 's/^(.{21}).../${1}755/' | cpio -i -u

6 - Dynamic loaders

/bin/ld.so chmod +x chmod

(actual paths may vary)

7 - /proc wizardry (untested)

Step by step:

Do something that forces the inode into cache (attrib, ls -@, etc.)
Check kcore for the VFS structures
Use sed or something similar to alter the execution bit without the kernel realising it
Run chmod +x chmod once

8 - Time Travel (git; yet untested)

First, let's make sure we don't get everything else in the way as well:

$ mkdir sandbox
$ mv chmod sandbox/
$ cd sandbox

Now let's create a repository and tag it to something we can go back to:

$ git init
$ git add chmod
$ git commit -m '1985'

And now for the time travel:

$ rm chmod
$ git-update-index --chmod=+x chmod
$ git checkout '1985'

There should be a bunch of git-based solutions, but I should warn you that you may hit a git script that actually tries to use the system's chmod.

9 - Fighting Fire with Fire

It would be great if we could fight an Operating System with another Operating System. Namely, if we were able to launch an Operating System inside the machine and have it have access to the outer file system. Unfortunately, pretty much every Operating System you launch is going to be in some kind of Docker, Container, Jail, etc. So, sadly, that is not possible. Or is it? Here is the Emacs solution:

Ctrl+x b > *scratch*
(set-file-modes "/bin/chmod" (string-to-number "0755" 8))
Ctrl+j

10 - Vim

The only problem with the Emacs solution is that I'm actually a Vim kind of guy. When I first delved into this topic Vim didn't have a way to do this, but in recent years someone made amends with the universe, which means we can now do this:

vim -c "call setfperm('chmod', 'rwxrwxrwx') | quit"
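A couple of further variants in the same spirit, not part of the list above, assuming the respective GNU tools are installed and still executable:

# install(1) copies a file and sets the mode of the copy in one step
install -m 0755 /bin/chmod /tmp/chmod && /tmp/chmod 755 /bin/chmod

# setfacl can rewrite the basic mode bits without ever calling chmod
setfacl --set u::rwx,g::rx,o::rx /bin/chmod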
Assuming that you can neither reach the internet nor reboot the machine, how can I recover from chmod -x chmod?
How can I recover from a `chmod -x chmod`? [duplicate]