In fact, I use an extension to detect, via JavaScript, when a user adds an invoice (because there are no webhooks). Then I use the API to get the invoice data and make some calculations.
That's why I would like to get the FileID and Key from the URL as I used to. Otherwise, I would have to poll the API at regular intervals waiting for a new invoice, wasting a lot of resources.
I am using Manager Server v22.9.1.350 on Ubuntu 20.04, and my Sales Invoices links do contain Key and FileID, but in the following sequence (domain name removed; this is everything after the /).
I tried to reproduce your issue, and I can report that the behavior is a bit perplexing. I tried to decode the query string, and it appears to be encoded recursively.
It's true, the URLs changed yesterday. Not the API, though; just everything else outside of the API.
For the record, I do value clean, readable URLs. But they introduce an extra layer of complexity into the program. It got to the point where it's just easier for me to encode query parameters as binary data (which can be compressed if needed).
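To illustrate the idea (this is a hypothetical sketch, not Manager's actual encoding scheme), parameters can be serialized and base64url-encoded so they travel as one opaque token instead of readable key/value pairs:

```javascript
// Hypothetical sketch: pack query parameters into an opaque, URL-safe blob.
// This is NOT Manager's real scheme; it only illustrates the concept.

function encodeParams(params) {
  const json = JSON.stringify(params);
  return Buffer.from(json, "utf8").toString("base64url"); // URL-safe, no padding
}

function decodeParams(blob) {
  const json = Buffer.from(blob, "base64url").toString("utf8");
  return JSON.parse(json);
}

const blob = encodeParams({ FileID: "abc123", Key: "def456" });
console.log(blob);               // one opaque token, no readable parameters
console.log(decodeParams(blob)); // { FileID: 'abc123', Key: 'def456' }
```

Nothing is hidden here; anyone with the decoding step recovers the same parameters, which matches the point that there are no secrets involved, just a different representation.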
There are no secrets in the query parameters. They are just encoded differently.
I know this kind of creates a problem where URLs are no longer “guessable”. But the way I look at it, “guessable” URLs are still in the API and always will be.
What I really need is to detect the moment when the user creates a document (sales invoice or purchase invoice).
If there were webhooks, it would be perfect: I could receive the data (FileID and Key), then make an API call to get the rest of the info and make some calculations, like utility per invoice, commission, etc.
Meanwhile, as there are no webhooks, what I was doing was using a JavaScript extension to detect when the user makes a new invoice and send my own HTTP request (emulating a webhook). I used to take the FileID and Key from the URL.
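As a rough sketch of that extension approach (the endpoint URL and the exact parameter names are assumptions based on the old URL format, not code from the actual extension):

```javascript
// Hypothetical sketch of the old extension approach: pull FileID and Key
// out of the current page URL and forward them to my own server,
// emulating a webhook. The endpoint URL below is made up for illustration.

function parseInvoiceParams(href) {
  const url = new URL(href);
  return {
    fileId: url.searchParams.get("FileID"),
    key: url.searchParams.get("Key"),
  };
}

async function notifyServer(href) {
  const { fileId, key } = parseInvoiceParams(href);
  if (!fileId || !key) return; // new URL format no longer exposes them
  await fetch("https://my-server.example/webhook", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ fileId, key }),
  });
}
```

With the new encoded URLs, `parseInvoiceParams` returns nulls, which is exactly why this approach broke.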
On top of that, extensions are now an obsolete feature…
All of this, just to avoid polling the API every second looking for a new invoice. Webhooks are a better solution because they save a lot of CPU resources on both ends: the API call is made only when needed, not every second.
Besides, when the API is called to get a list, there is no limit, no pagination, no filter… so every call brings back a lot of unnecessary data.
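For comparison, the polling fallback would look roughly like this (the invoice-fetching call and the response shape are assumptions for illustration): fetch the full list on a timer and diff it against the invoices already seen.

```javascript
// Hypothetical polling fallback: the API returns the FULL invoice list
// every time (no filter, no pagination), so we must diff it ourselves.
// fetchInvoices() stands in for a real API call such as GET /sales-invoices.

const seen = new Set();

async function pollOnce(fetchInvoices) {
  const invoices = await fetchInvoices();                  // full list, every call
  const fresh = invoices.filter((inv) => !seen.has(inv.key));
  fresh.forEach((inv) => seen.add(inv.key));
  return fresh;                                            // only unseen invoices
}

// A real deployment would run this on a timer, e.g.:
// setInterval(() => pollOnce(callManagerApi).then(handleNewInvoices), 1000);
```

Every tick downloads the entire list just to find zero or one new entry, which is why a webhook, firing only when something actually changes, is so much cheaper for both sides.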
Is the API really the best way of doing this? Wouldn't an external program call, passed the appropriate data, be a cleaner solution? Similar calls are already required for tax authority reporting/integration elsewhere.
How do you think an external program can get the appropriate data? I think the best way to build an integration is using the API. What other solution do you propose?
In my case, that external call is made from another server where I have deployed my program. But this program needs to be notified when a new invoice is created. And yes, another requirement I have is to pass the invoice data to the local tax authority.
Yes, there are some government-approved providers, and the only thing needed is to send them a JSON or an XML file with the data. And I have to have this working before the year ends.
Yes, I would love to. In fact, I made some language contributions several years ago.