Initially, I expected to use the machine learning tools provided by Google. I tried several scenarios but was never fully satisfied, especially when it came to training a custom model.

Google's Document AI does provide that possibility, but from what I have seen it requires a fair amount of programming, and, as with Microsoft, the "standard model for receipts" only covers the US/UK. I live in the EU (Belgium/France area).

I therefore started investigating other options, and through a video I discovered the Form OCR testing tools.

So I decided to adapt my initial POC project to integrate Microsoft Form Recognizer instead of Google Document AI.

Indeed, the Form OCR testing tool is very convenient for modelling your receipts according to your location.

So go to the Azure portal, as some resources are required to use the tool.

Below is what I actually used.

You can easily create the Form Recognizer resource, but to use it you also need a storage account; the resource group is mostly about access rights.
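For reference, the same resources can be provisioned from the command line with the Azure CLI instead of clicking through the portal. This is only a sketch: the names (`receipts-rg`, `receiptsstore01`, `receipts-formrecognizer`) and the region are placeholders I made up, not values from this setup.

```shell
# Placeholder names -- substitute your own. Requires the Azure CLI and `az login`.
RG=receipts-rg
LOCATION=westeurope
STORAGE=receiptsstore01            # storage account names must be globally unique
FORMREC=receipts-formrecognizer

# Resource group (mainly scopes access rights, as noted above)
az group create --name "$RG" --location "$LOCATION"

# Storage account that will hold the training receipts
az storage account create --name "$STORAGE" --resource-group "$RG" \
    --location "$LOCATION" --sku Standard_LRS

# Form Recognizer (Cognitive Services) resource
az cognitiveservices account create --name "$FORMREC" --resource-group "$RG" \
    --kind FormRecognizer --sku S0 --location "$LOCATION"
```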

I am not going to describe every step, as it would take too long, and you will find plenty of resources on the net.

To use the Form OCR testing tool, you actually need to create a Blob container in Azure and share it with full access:

  • Create a Blob container
  • Get a Shared Access Signature
  • Do not forget to grant all permissions, as the tool needs them.
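The container and the Shared Access Signature can also be produced with the Azure CLI. A sketch, assuming the placeholder storage account and container names below and a 30-day expiry (pick whatever suits you):

```shell
# Placeholder names -- substitute your own. Requires the Azure CLI and `az login`.
STORAGE=receiptsstore01
CONTAINER=receipts

# Create the blob container that will hold the training receipts
az storage container create --name "$CONTAINER" --account-name "$STORAGE"

# Shared Access Signature with full permissions (read, write, delete, list),
# which the labeling tool needs
EXPIRY=$(date -u -d "+30 days" '+%Y-%m-%dT%H:%MZ')
SAS=$(az storage container generate-sas --name "$CONTAINER" \
      --account-name "$STORAGE" --permissions rwdl --expiry "$EXPIRY" -o tsv)

# The URI to paste into the tool is the container URL plus the SAS token
echo "https://$STORAGE.blob.core.windows.net/$CONTAINER?$SAS"
```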

This shared connection needs to be assigned as shown below; just copy and paste the URI.

Upload a set of receipts/transport tickets.
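If you prefer the command line to the portal's upload dialog, a batch upload can be sketched like this (placeholder names again; it assumes your scanned tickets sit as JPGs in a local `./tickets` folder):

```shell
# Placeholder names -- substitute your own. Requires the Azure CLI and `az login`.
STORAGE=receiptsstore01
CONTAINER=receipts

# Upload every JPG from the local ./tickets folder into the blob container
az storage blob upload-batch --destination "$CONTAINER" \
    --account-name "$STORAGE" --source ./tickets --pattern "*.jpg"
```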

Start with a new project.

API key: the key that you can retrieve from the Form Recognizer resource in Azure, see below.

You need to copy and paste the API key as well as the service URI.
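Both values can also be fetched from the command line instead of the portal blade. A sketch with the placeholder resource names used earlier:

```shell
# Placeholder names -- substitute your own. Requires the Azure CLI and `az login`.
FORMREC=receipts-formrecognizer
RG=receipts-rg

# Service URI (endpoint) to paste into the tool
az cognitiveservices account show --name "$FORMREC" \
    --resource-group "$RG" --query properties.endpoint -o tsv

# API key to paste into the tool
az cognitiveservices account keys list --name "$FORMREC" \
    --resource-group "$RG" --query key1 -o tsv
```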

Once you have established the connection to your storage location, you will see the images uploaded into the Form Recognizer tool.

Labelling: as a first step, you need to create TAGS that represent the fields or labels that you want to retrieve.

On the right panel, you can add new tags as shown below.

Once this is done and all the JPG transport tickets are uploaded, we can start attaching each tag to the related bounding boxes.

You need to go through your ten tickets and apply your labels accordingly.

These are the first steps in customizing your model.

Once you have mapped your tags with the related area in your tickets, the next step is to train your model.

Provide a name to your model.

Training is relatively fast, depending on the complexity of your documents. Once it is done, you can inspect the training result, which should be as close to 100% as possible.

Once trained, you can check that the model has been recorded in Azure.
I use a few REST endpoints that you can call via curl or simply with Postman, as shown below.
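For instance, listing all custom models registered under the resource can be done with a single curl call, assuming the v2.1 REST API and the placeholder endpoint/key below:

```shell
# Placeholders -- substitute your real endpoint and key.
FR_ENDPOINT="https://receipts-formrecognizer.cognitiveservices.azure.com"
FR_KEY="<your-api-key>"

# List all custom models registered under this Form Recognizer resource
curl -s "$FR_ENDPOINT/formrecognizer/v2.1/custom/models?op=full" \
     -H "Ocp-Apim-Subscription-Key: $FR_KEY"
```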

You can also test the model against a sample that you upload from your own system.
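The analysis is asynchronous in the v2.1 API: you POST the document, then poll the result URL returned in the `Operation-Location` response header. A sketch with placeholder values and a hypothetical local file `ticket-sample.jpg`:

```shell
# Placeholders -- substitute your real endpoint, key, model ID, and sample file.
FR_ENDPOINT="https://receipts-formrecognizer.cognitiveservices.azure.com"
FR_KEY="<your-api-key>"
MODEL_ID="<your-model-id>"

# Submit a local JPG for analysis; the 202 response carries an
# Operation-Location header pointing at the result
curl -s -i -X POST \
     "$FR_ENDPOINT/formrecognizer/v2.1/custom/models/$MODEL_ID/analyze" \
     -H "Ocp-Apim-Subscription-Key: $FR_KEY" \
     -H "Content-Type: image/jpeg" \
     --data-binary @ticket-sample.jpg

# Then poll the result with the resultId from Operation-Location:
# curl -s "$FR_ENDPOINT/formrecognizer/v2.1/custom/models/$MODEL_ID/analyzeResults/<resultId>" \
#      -H "Ocp-Apim-Subscription-Key: $FR_KEY"
```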

Now that you have a model for your transport tickets, you can compose (merge) it with models for other types of receipts or tickets.

In my case, these are the models I had already prepared.

You have to select the models that you want to merge and click the Compose button.
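The Compose step can also be driven through the REST API, assuming the v2.1 endpoint; the endpoint, key, model IDs, and composed-model name below are placeholders:

```shell
# Placeholders -- substitute your real endpoint, key, and model IDs.
FR_ENDPOINT="https://receipts-formrecognizer.cognitiveservices.azure.com"
FR_KEY="<your-api-key>"

# Compose two trained custom models into one; the response's Location
# header carries the new composed model's ID
curl -s -X POST "$FR_ENDPOINT/formrecognizer/v2.1/custom/models/compose" \
     -H "Ocp-Apim-Subscription-Key: $FR_KEY" \
     -H "Content-Type: application/json" \
     -d '{"modelIds": ["<model-id-1>", "<model-id-2>"], "modelName": "receipts-composed"}'
```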

You will end up with a new model.

As this is already a relatively long post, I will prepare a follow-up in which I manage the newly created model from Java and retrieve data from my receipts and tickets.