Latest server edition not working

Try the following first. After that, reuse the info from my previous post:

# See current limits
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances fs.inotify.max_queued_events

# Temporarily raise them (takes effect immediately, lasts until reboot)
sudo sysctl -w fs.inotify.max_user_watches=1048576
sudo sysctl -w fs.inotify.max_user_instances=4096
sudo sysctl -w fs.inotify.max_queued_events=65536

# Try again
sudo systemctl daemon-reload
sudo systemctl restart manager-server
sudo systemctl status manager-server --no-pager

THEN:

sudo tee /etc/sysctl.d/99-inotify.conf >/dev/null <<'EOF'
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=4096
fs.inotify.max_queued_events=65536
EOF
sudo sysctl --system

(Optional) Find what’s hogging watches

GUI file indexers, IDEs, sync tools (e.g., Syncthing/Dropbox), container stacks, or dev servers can leak watches.

# Show processes with inotify fds
sudo lsof -n | grep inotify | awk '{print $1,$2}' | sort | uniq -c | sort -nr | head

# Or per-PID count
for p in /proc/[0-9]*; do c=$(ls -l $p/fd 2>/dev/null | grep -c inotify); [ "$c" -gt 0 ] && printf "%6d  %s\n" "$c" "$(ps -p ${p##*/} -o comm=)"; done | sort -nr | head


Why this affects your service

systemd (PID 1) runs as root and uses inotify to monitor cgroup control files. If root’s max_user_watches/max_user_instances are already used up (often by container engines, log shippers, or aggressive file watchers), starting any new service can trigger this error.
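
If you want to see how close you are to the ceiling before raising it, here is a rough way to total up the inotify watches currently registered (a sketch only; it sums across all users and assumes root can read every process's fdinfo):

# Sum inotify watches over every open inotify fd (compare against fs.inotify.max_user_watches)
sudo sh -c '
total=0
for fd in $(find /proc/[0-9]*/fd -lname "anon_inode:inotify" 2>/dev/null); do
  n=$(grep -c "^inotify" "${fd%/fd/*}/fdinfo/${fd##*/}" 2>/dev/null)
  total=$((total + n))
done
echo "inotify watches in use: $total"
'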

After raising limits

If you still see the error:

sudo systemctl stop manager-server
sudo systemctl daemon-reload
sudo systemctl start manager-server
journalctl -u manager-server -n 200 --no-pager

That’s it—raising the inotify limits and/or stopping the watch-heavy culprit resolves this.

  • Did the steps above but unfortunately still getting "Failed to start manager-server.service".

(Optional) Find what’s hogging watches

GUI file indexers, IDEs, sync tools (e.g., Syncthing/Dropbox), container stacks, or dev servers can leak watches.

root@manager02:~# sudo lsof -n | grep inotify | awk '{print $1,$2}' | sort | uniq -c | sort -nr | head
8 polkitd 1218
7 multipath 449
6 systemd 1
4 accounts- 1192
3 systemd 1980
2 systemd-t 1104
1 systemd-u 479
1 systemd-r 1090
1 systemd-l 1273
1 dbus-daem 1194
root@manager02:~#

  • Will rerun the update-manager.sh script that was created

Running kernel seems to be up-to-date.

Restarting services…
/etc/needrestart/restart.d/systemd-manager
systemctl restart mysql.service open-vm-tools.service packagekit.service ssh.service systemd-journald.service systemd-resolved.service systemd-timesyncd.service systemd-udevd.service vgauth.service

Service restarts being deferred:
systemctl restart systemd-logind.service
systemctl restart wpa_supplicant.service

No containers need to be restarted.

User sessions running outdated binaries:
administrator @ session #1: sshd[1956]
administrator @ session #13: sshd[5500]
administrator @ user manager service: systemd[1980]

No VM guests are running outdated hypervisor (qemu) binaries on this host.
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
==> Downloading latest ManagerServer…
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 53.3M  100 53.3M    0     0  20.1M      0  0:00:02  0:00:02 --:--:-- 31.9M
==> Installing ManagerServer…
==> Restarting manager-server service…
==> ManagerServer is running āœ…
root@manager02:~#

  • Check service - still failing, which is weird

root@manager02:~# systemctl status manager-server
Ɨ manager-server.service
Loaded: loaded (/etc/systemd/system/manager-server.service; enabled; preset: enabled)
Active: failed (Result: core-dump) since Wed 2025-10-01 10:01:24 NZDT; 1min 29s ago
Duration: 605ms
Process: 27480 ExecStart=/usr/share/manager-server/ManagerServer -port 8080 -path=/media/network/nas01/Manager/ (code=dumped, signal>
Main PID: 27480 (code=dumped, signal=ABRT)
CPU: 315ms

Oct 01 10:01:24 manager02 systemd[1]: manager-server.service: Scheduled restart job, restart counter is at 5.
Oct 01 10:01:24 manager02 systemd[1]: manager-server.service: Start request repeated too quickly.
Oct 01 10:01:24 manager02 systemd[1]: manager-server.service: Failed with result 'core-dump'.
Oct 01 10:01:24 manager02 systemd[1]: Failed to start manager-server.service.
lines 1-12/12 (END)

@compuit none of those messages are from Manager Server. It’s all from systemd.

Also, when launching ManagerServer, use:

-path /media/network/nas01/Manager/

instead of

-path=/media/network/nas01/Manager/
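
Since systemd launches the service, that argument comes from the unit's ExecStart line. A quick way to check what is currently being passed (a sketch; the unit path is the one shown in your status output):

systemctl cat manager-server | grep ExecStart

If it still shows the -path= form, edit /etc/systemd/system/manager-server.service, switch to the space form, then run sudo systemctl daemon-reload and restart the service.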

Where does Manager get the path that is noted here, or what specifies it? Process: 27480 ExecStart=/usr/share/manager-server/ManagerServer -port 8080 -path=/media/network/nas01/Manager/ (code=dumped, signal>

I am happy to rebuild because all company files are backed up.

Just for the record, Manager is installed at /usr/share/manager-server on the local host.

Data is located at /root/.local/share/Manager/ on the local host. That /media/network/nas01/Manager/ path was used years ago, so I am not sure why it reared its head with an upgrade.

@compuit you have two separate things here: ManagerServer and systemd.

First ensure ManagerServer is working by simply launching it manually in terminal:

cd /usr/share/manager-server
./ManagerServer -port 8080 -path /media/network/nas01/Manager/

If that works and you can connect to it, then ManagerServer is not an issue.
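
A quick way to check that it responds, assuming curl is installed and you test from the same host while it is running:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080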

By the way, the modern way to launch ManagerServer is to use --urls instead, like this:

./ManagerServer --urls http://*:8080 --path /media/network/nas01/Manager/

Sadly same issue

administrator@manager02:/usr/share/manager-server$ ./ManagerServer -port 8080 -path /media/network/nas01/Manager/
Manager Server [Version 25.9.30.2859]
Copyright (c) 2025 The Manager.io Trust. All rights reserved.

Syntax:

ManagerServer  [options]

Options:

--urls <binding>      Kestrel URL(s). Example: http://localhost:5000
                      Multiple values separated by ';'.
--path <directory>    Data directory for Manager.

Examples:

ManagerServer 
ManagerServer  --urls http://localhost:80
ManagerServer  --urls http://*:80 --path "/home/administrator/Documents/Manager.io"

[2025-10-01 11:20:12] info: Microsoft.Hosting.Lifetime[14] Now listening on: http://[::]:8080
[2025-10-01 11:20:12] info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down.
[2025-10-01 11:20:12] info: Microsoft.Hosting.Lifetime[0] Hosting environment: Production
[2025-10-01 11:20:12] info: Microsoft.Hosting.Lifetime[0] Content root path: /usr/share/manager-server

administrator@manager02:~$ sudo systemctl status manager-server
[sudo] password for administrator:
Ɨ manager-server.service
Loaded: loaded (/etc/systemd/system/manager-server.service; enabled; prese>
Active: failed (Result: core-dump) since Wed 2025-10-01 10:01:24 NZDT; 1h >
Duration: 605ms
Process: 27480 ExecStart=/usr/share/manager-server/ManagerServer -port 8080>
Main PID: 27480 (code=dumped, signal=ABRT)
CPU: 315ms

Oct 01 10:01:24 manager02 systemd[1]: manager-server.service: Scheduled restart>
Oct 01 10:01:24 manager02 systemd[1]: manager-server.service: Start request rep>
Oct 01 10:01:24 manager02 systemd[1]: manager-server.service: Failed with resul>
Oct 01 10:01:24 manager02 systemd[1]: Failed to start manager-server.service.
lines 1-12/12 (END)

Please can I be pointed to instructions for reinstalling from scratch?

What do you mean it’s the same issue? Your ManagerServer is clearly running on port 8080 once you launched it with:

./ManagerServer -port 8080 -path /media/network/nas01/Manager/

The issue I do not get is why Manager is looking at nas01 for data. The data under /media/network/nas01/Manager is obsolete and has not been used in years. What is the automated line telling Manager in this deployment to go there? I can see that running it manually will do that, but I am not sure why this changed to /media/network/nas01/Manager. I could move the current data to that location; maybe that is the key?

Manager is not being installed. It is simply launched.

What you have done with systemd, which automatically launches Manager on system restarts, is not part of Manager. systemd is just one of many init systems. Manager has no idea about systemd; it is not part of Manager.

I know older guides have been recommending systemd so that ManagerServer automatically launches after server restarts. I no longer give any guidance on how to launch ManagerServer automatically after a server restart because there are many options, so it should be up to the server administrator to choose a method they are familiar with.

If you are not familiar with how to use systemd, then don’t use it. All these posts are about systemd giving you errors.

Launch ManagerServer directly.

When you launch:

./ManagerServer -port 8080 -path /media/network/nas01/Manager/

Then Manager will use /media/network/nas01/Manager/ as your data folder. You can modify your path argument to something else if required.

Either way, ManagerServer is not installed. It is simply launched without any installation step.
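
For example, to point it at the local data directory mentioned earlier in the thread rather than the NAS path (a sketch; substitute wherever your current data actually lives):

cd /usr/share/manager-server
./ManagerServer --urls http://*:8080 --path /root/.local/share/Manager/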

Building a new server; hopefully the company .manager files load OK. Still trying to understand why the upgrade caused this. Open to input.

The step-by-step upgrade process I followed:

Step 1

wget https://github.com/Manager-io/Manager/releases/latest/download/ManagerServer-linux-x64.tar.gz -O /usr/share/manager-server/ManagerServer-linux-x64.tar.gz

Step 2

tar xvzf /usr/share/manager-server/ManagerServer-linux-x64.tar.gz -C /usr/share/manager-server

Step 3

Reboot

I am caught flat-footed here, as I have not even been successful at setting up a new host with a minimal install.

Quite unhappy with the team at manager.io. I did not think the dark theme update in recent releases would be so significant as to break an existing config.

Regards

Got Server version 25.9.30.2859 working - What changed??

Just for the record, in our case the solution was as follows: essentially, what the update process had done on the production system was change the data path to a location that was last used in 2021. I do not understand why or how that change came about, as I followed the 3 steps I listed in this case. It was most unexpected, as no updates have behaved this way in the past, so I initially overlooked the obvious. Once the path was correctly set back to the local device (in our case), the system was back to operational. Many thanks for eko's CREATE THE SCRIPT too. Great document. Have a good weekend.
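
For anyone hitting the same thing, one quick way to confirm which data path a running ManagerServer was actually started with (a sketch; it works whether the process was started by systemd or by hand):

ps -o args= -C ManagerServer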

Have you successfully updated the Manager?

Yes, it upgraded OK, but there is some regression; we are just waiting for some key bugs to be tidied up / fixed before proceeding to the latest version again.