Latency test between Azure and On-Premises – Part One

First test – TeamViewer VPN


In this first test, we are connected via TeamViewer VPN (this technology is described in the previous post) to an Azure VM (Virtual Machine). The VM runs a retail version of Windows 10 with Visual Studio 2017 installed. Visual Studio is used to modify the requests made to the OnPremService, so we can get more precise results. We also use Microsoft's Edge browser to run the client web app.



This is a screenshot of the Azure Client App UI. On this screen, we see an “OK” button and three text fields. The first two fields will be populated with timestamps (in milliseconds), and the third with the total elapsed time, also in milliseconds. The first field is filled with a timestamp generated when you click the OK button. The second field is filled with a timestamp too, but only when the client receives a response from the OnPremService. The third is then populated with the difference between the two (second field minus first field). This gives us the elapsed time between pressing the button (making a request) and receiving a response (with data from the OnPremService).
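The timing logic described above can be sketched in a few lines of JavaScript. This is a minimal sketch, not the actual app code: the function names are assumptions, and in the real client the callback would be an HTTP request to the OnPremService rather than a dummy delay.

```javascript
// Sketch of the three-field timing logic: timestamp at click,
// timestamp at response, and their difference in milliseconds.
function timeRequest(doRequest) {
  const start = Date.now();      // first field: set when the OK button is clicked
  const response = doRequest();  // stand-in for the round trip to OnPremService
  const end = Date.now();        // second field: set when the response arrives
  return { response, elapsed: end - start }; // third field: end - start
}

// Usage with a dummy "request" that busy-waits for ~5ms:
const result = timeRequest(() => {
  const until = Date.now() + 5;
  while (Date.now() < until) { /* simulate the round trip */ }
  return "ok";
});
console.log(`elapsed: ${result.elapsed}ms`);
```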



Here we see the results. The “77” written on the button is the total number of requests made so far by the Azure Client App; this number is sent back by the OnPremService. We can see that this request took 42ms to execute.


Median values

Number of Requests Total elapsed time
1 177ms
5 ~45ms
10 ~44ms
20 ~42ms
50 ~41ms
75 ~47ms
100 ~45ms


Above are the median values of the total elapsed times. The first one is higher because the browser needs to render the page, the services inside IIS are being woken up, and the internet routes are still being established. After the first request, we see that total elapsed times are very consistent, with no peaks.
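For reference, the median of a batch of elapsed times can be computed with a small helper like this (my own helper for illustration, not code from the test app):

```javascript
// Median of an array of elapsed times (in ms): sort a copy,
// then take the middle element (or the mean of the two middle ones).
function median(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Example with five made-up elapsed times in the same range as the table above:
console.log(median([42, 44, 41, 47, 45])); // → 44
```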

This is only a simple request-response mechanism over the VPN. The data travelling through the VPN is no larger than 263 bits so far. This makes it an absolutely minimal “ping-like” test, showing us the true bare-minimum connection latency.


Test #1 – 10KB message

Now we are going to repeat all those steps, but instead of showing a table of results, we’ll summarize them, improving readability.
In this test, we’ll increase the response size to 10KB (kilobytes) of plain text (shown in the text area below the button), to see how the total elapsed time changes.
After increasing the data size, the total elapsed time stayed roughly at ~65ms, with random peaks (the worst results are roughly between 125 and 150ms). That is an increase of ~20ms.
Not too bad, considering that the data size is now ~300 times larger than in the base test!
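As a quick sanity check on that “~300 times” figure (simple arithmetic, assuming 1KB = 1024 bytes):

```javascript
// 10KB of plain text expressed in bits, compared with the 263-bit base test.
const baseBits = 263;
const payloadBits = 10 * 1024 * 8; // 81920 bits
const ratio = payloadBits / baseBits;
console.log(Math.round(ratio)); // ≈ 311
```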

Test #2 – 100KB message

With the same UI and the same functionality, let’s increase the data size even further, to 100KB of text (10 times the previous size).
The initial request took ~440ms; right after it, elapsed times stayed roughly at ~100ms, with peaks averaging 150ms. Again, not too bad!


Test #3 – 5MB message


With the same UI and the same functionality, let’s increase the data size even further, to 5120KB (roughly 50 times the previous size), i.e. 5MB of plain text. This is the last test of this scenario.
The initial request took ~1.8 seconds (1809ms) to execute. That is a considerable amount of time when execution time is critical.
Subsequent requests stayed roughly in line with the initial one, and all sorts of spikes were detected: we noticed some requests taking nearly one second to execute, and others up to two and a half (2.5) seconds to complete. The results are more inconsistent because the payload is huge for an HTTP GET. Although this might not be a VPN problem, HTTP becomes a very impractical way to transfer a message this big.

Final overview of this scenario

Summing up, this is a very consistent method. If you need to send data repeatedly, in small amounts, it is a great candidate. HTTP requests over the TeamViewer VPN proved to be low-latency and very easy to implement, giving fast responses up to a moderate data size.

Table of average execution times for this test scenario, by data size

Test Data size Average execution time
1 263 bits ~45ms
2 10KB ~65ms
3 100KB ~100ms
4 5MB ~1600ms




Graph of data size vs execution time

In this graph, we can see that average execution time grows sharply once the data size increases beyond 100KB. Going up to 1MB shouldn’t be too bad, but beyond that you start sacrificing execution time to transmit large data. It’s best to send the data in chunks and reassemble it in the client app. The maximum execution time is well above the average; this is because hibernated services take time to come back to life. Once everything is up and running, average times drop considerably!
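A chunked transfer could be sketched like this. It is hypothetical: the HTTP Range-request approach and the endpoint behaviour are my assumptions, not the actual OnPremService API, and the server would need to honour the `Range` header.

```javascript
// Split a payload of totalSize bytes into [start, end] byte ranges
// of at most chunkSize bytes each (end is inclusive, as in HTTP Range).
function chunkRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let offset = 0; offset < totalSize; offset += chunkSize) {
    ranges.push([offset, Math.min(offset + chunkSize, totalSize) - 1]);
  }
  return ranges;
}

// Fetch each range separately and reassemble client-side.
async function fetchInChunks(url, totalSize, chunkSize) {
  const parts = [];
  for (const [start, end] of chunkRanges(totalSize, chunkSize)) {
    const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
    parts.push(await res.text());
  }
  return parts.join("");
}

// Example: a 5MB payload in 1MB chunks yields five ranges.
console.log(chunkRanges(5 * 1024 * 1024, 1024 * 1024).length); // → 5
```

This trades one long request for several short ones, so no single request hits the multi-second spikes seen in the 5MB test.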


Make sure you don’t miss next part!
