I’ve put together an open-source CLI tool for Manager.io self-hosted edition that provides command-line access to the REST API v2. It also includes skill definitions for AI coding agents (Claude Code, OpenClaw, etc.).
It’s a single Bash script with no dependencies beyond curl and jq. Works on macOS, Linux, and WSL.
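For anyone curious what "just curl and jq" looks like in practice, here is a minimal sketch of querying the REST API v2 by hand. The endpoint path, header name, and response shape are assumptions for illustration, not the actual interface of the tool or the API:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical: pull a collection from a self-hosted instance and
# extract one field with jq. Adjust URL, auth header, and paths to
# match your server — these names are illustrative only.
#   curl -sk -H "X-API-KEY: $MANAGER_TOKEN" \
#     "$MANAGER_URL/api2/customers" | extract_names

# jq filter assuming a response shaped like {"customers":[{"name":...}]}
extract_names() {
  jq -r '.customers[].name'
}

# Demonstrated here with a canned response instead of a live server:
echo '{"customers":[{"name":"Acme Ltd"},{"name":"Globex"}]}' | extract_names
```

The same pattern (curl for transport, jq for extraction) covers most read-only use without any other dependencies.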
Good to see some work with AI integration happening here.
I’m currently working on an MCP server, and one of the big problems we have is that the data attributes for the parameters are not well defined, or even documented. My testing also suggests there isn’t much input validation going on: the test scripts I’ve been using to probe the API have been causing server crashes all over the place.
To help this work progress, I’ve put together an enhanced OpenAPI (Swagger) file that documents many of the parameters in the schema. It’s not complete and may be incorrect in places, so I’d welcome any comments or contributions.
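To give a sense of what "documenting the parameters in the schema" means, here is an illustrative fragment of the kind of enrichment an enhanced OpenAPI file can add. The path, field names, and types below are assumptions for the example, not taken from the actual Manager.io API:

```yaml
# Hypothetical fragment — endpoint and fields are illustrative only.
paths:
  /api2/sales-invoices:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [Date, Customer]
              properties:
                Date:
                  type: string
                  format: date
                Customer:
                  type: string
                  format: uuid
                  description: UUID of an existing customer record
```

Even partial typing like this lets an MCP server (or any client generator) reject malformed requests before they ever reach the server, which matters given the crashes mentioned above.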
@mprokopov clean, clear code and a good initiative. I’m replying because it’s worth highlighting that if you use a cloud-based AI (Claude, GPT, Gemini, etc.), the AI will see the output of manager commands in your chat.
That means sensitive accounting data (invoices, customer balances, bank info, etc.) temporarily enters the AI provider’s servers as part of the conversation.
This is not the repo stealing data — it’s the inherent nature of using any cloud LLM with tools/shell access.
Mitigations:
Use the CLI standalone (no AI).
Or pair it with a local LLM that supports tool calling/shell execution (e.g., via Ollama + some agent framework) — data never leaves your machine.