<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Integration Archives - Blog IT</title>
	<atom:link href="https://blogit.create.pt/tag/integration/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogit.create.pt/tag/integration/</link>
	<description>Create IT blogger community</description>
	<lastBuildDate>Thu, 10 Jan 2019 12:46:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Latency test between Azure and On-Premises – Specifications</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 18:00:04 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=954</guid>

					<description><![CDATA[<p>Internet Connection Create IT has an Internet connection of 100 Mbps down / 20 Mbps up. Azure was capped at a 150 Mbps symmetrical connection. TeamViewer VPN, Azure Site-to-Site and Point-to-Site connections were capped at 10 Mbps. Azure plans In Azure, the most economical plans were chosen, considering our requirements. Some plans were free, some were cheaper than others, but [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/">Latency test between Azure and On-Premises – Specifications</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center"><strong><em>Internet Connection</em></strong></p>
<p>Create IT has an Internet connection of 100 Mbps down / 20 Mbps up. Azure was capped at a 150 Mbps symmetrical connection.</p>
<p>TeamViewer VPN, Azure Site-to-Site and Point-to-Site connections were capped at 10 Mbps.</p>
<p><span id="more-954"></span></p>
<p style="text-align: center"><strong><em>Azure plans</em></strong></p>
<p>In Azure, the most economical plans were chosen, considering our requirements. Some plans were free and some were cheaper than others; we chose the cheapest plans that offered VPN capabilities and supported all the configurations we needed.</p>
<p style="text-align: center"><strong><em>Browsers</em></strong></p>
<p>We used Chrome, Firefox and Edge, to rule out browser differences. As total execution times showed no differences between them, we kept Chrome as the default testing browser.</p>
<p style="text-align: center"><strong><em>LAN Connection</em></strong></p>
<p>LAN-wise, our internal network is based on a 1 Gbps Local Area Network.</p>
<p style="text-align: center"><strong><em>On-Premises Service Host</em></strong></p>
<p>The On-Premises Service was hosted on a machine running Windows 10 Retail, with the latest updates installed, and IIS Express. All code was written with Visual Studio 2017 Enterprise. The relevant host hardware specifications are:</p>
<ol>
<li><em>Intel® Core™ i7-6700HQ CPU @ 2.6GHz</em></li>
<li><em>32GB of DDR4 RAM @ 3400MHz</em></li>
<li><em>NVMe M.2 PCI-e 240GB SSD</em></li>
</ol>
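<p>To make the measurement method concrete, here is a minimal, hypothetical Python sketch of a latency test client (the actual client was written in C# with Visual Studio 2017): it times repeated HTTP requests to a service and averages them. The URL and run count are placeholders.</p>

```python
import time
import urllib.request

def measure_latency_ms(url, runs=10):
    """Time `runs` HTTP GET requests to `url` and return the average in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # download the full payload (10KB, 100KB or 5MB in these tests)
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)
```

<p>Calling something like <code>measure_latency_ms("http://onprem-host/service/10kb")</code> (a hypothetical endpoint) would yield the kind of average round-trip time reported in the test posts.</p>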
<p style="text-align: center"><strong><em>Testing Hours</em></strong></p>
<p>All tests were executed during working hours, 9am to 6pm, GMT.</p>
<p style="text-align: center"><strong><em>The Writer</em></strong></p>
<p>I’m a consultant @ Create It, a Portuguese company. If you read this, make sure I know about it! Hugs and kisses! This was an absolute pleasure to make.</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/">Latency test between Azure and On-Premises – Specifications</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Conclusions</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:50:52 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=884</guid>

					<description><![CDATA[<p>And when all testing’s complete… A final review is coming! So, brace yourselves and let’s start with a graph. Note: 5MB results must be multiplied by 10 (value x 10) Here are the results. All of them. Don’t forget to multiply 5MB results by 10! With the graph below, we can check how message [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/">Latency test between Azure and On-Premises – Conclusions</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>And when all testing’s complete… A final review is coming! So, brace yourselves and let’s start with a graph.</p>
<p><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-894" src="http://blogit-create.com/wp-content/uploads/2017/11/net-12.png" alt="" width="624" height="369" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-12.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/net-12-300x177.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p style="text-align: center"><strong>Note: 5MB results must be multiplied by 10 (value x 10)</strong></p>
<p>Here are the results. All of them. Don’t forget to multiply 5MB results by 10!</p>
<p><span id="more-884"></span></p>
<p>With the graph below, we can check how message sizes influence latency. After doing some math, these are the results:</p>
<p><img decoding="async" class="aligncenter size-full wp-image-904" src="http://blogit-create.com/wp-content/uploads/2017/11/net-13.png" alt="" width="576" height="334" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-13.png 576w, https://blogit.create.pt/wp-content/uploads/2017/11/net-13-300x174.png 300w" sizes="(max-width: 576px) 100vw, 576px" /></p>
<p>&nbsp;</p>
<p>As we can confirm, latency grows at an exponential rate: the bigger the message, the greater the latency. We can confirm that this applies to every scenario.</p>
<p><strong>But what does it tell me? </strong>It tells you that you need to be careful if you’re planning on sending big messages between two points. <strong>This exponential growth rate applies to all scenarios!</strong></p>
<p>With all values listed and noted, we gave each test a score. The score is calculated by adding the three results (10KB, 100KB and 5MB) and dividing the sum by three, giving the average execution time. For a better understanding, below is an exponential graph: it climbs ever faster towards infinity, with message size on the horizontal axis and latency on the vertical axis:</p>
<p><img decoding="async" class="aligncenter size-full wp-image-914" src="http://blogit-create.com/wp-content/uploads/2017/11/net-14.png" alt="" width="415" height="188" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-14.png 415w, https://blogit.create.pt/wp-content/uploads/2017/11/net-14-300x136.png 300w" sizes="(max-width: 415px) 100vw, 415px" /></p>
<p>&nbsp;</p>
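<p>The scoring rule above boils down to a simple average of the three execution times. A quick Python sketch, using illustrative numbers rather than the measured results:</p>

```python
def score_ms(t_10kb, t_100kb, t_5mb):
    """Score a test: sum the three execution times (in ms) and divide by three."""
    return (t_10kb + t_100kb + t_5mb) / 3

# Illustrative values only, not the article's measurements:
print(score_ms(59.0, 78.0, 3800.0))  # average execution time in ms
```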
<p style="text-align: center"><strong>Now for the results!</strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-924" src="http://blogit-create.com/wp-content/uploads/2017/11/net-15.png" alt="" width="576" height="336" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-15.png 576w, https://blogit.create.pt/wp-content/uploads/2017/11/net-15-300x175.png 300w" sizes="(max-width: 576px) 100vw, 576px" /></p>
<p>As we can check, <strong>Test #2 (Local On-Premises LAN) was the winner here, latency-wise.</strong> This was expected. Although it is the fastest, it is also the riskiest and <strong>NOT RECOMMENDED! It’s true that latency is reduced, but with only a ~53ms difference from Azure Site-to-Site on a 100KB message, </strong>is it worth having a full infrastructure depending on your maintenance? And the networking gear it requires? What about having someone responsible for the server room 24/7? What about the costs? <strong>Think about it!</strong> Think about the savings when migrating your business to the Cloud!</p>
<p><strong>Second place</strong> goes to HTTP without VPN (Exposed services). <strong>Not a Cloud solution, and a risky one too! </strong>If you like hackers messing with your vital business services and trying to break in from all over the world 24/7, go right ahead!</p>
<p>Regarding fast and safe Cloud solutions, which is the main reason for reading this document after all, <strong>the winner was Test #4 (Azure Site-to-Site VPN), taking third place on the board by a difference of 8 points! </strong>This proved to be the <strong>safest, most efficient and, with 99.9% uptime, most reliable</strong> method of expanding your office or network while keeping a low-latency data exchange. <strong>This is the optimal and safest solution. No hackers, no maintenance, no worries!</strong></p>
<blockquote>
<p style="text-align: center">Consult us to migrate your business! <strong>You’re at the doorstep to your future!</strong></p>
</blockquote>
<p><strong> <img decoding="async" class="aligncenter size-full wp-image-934" src="http://blogit-create.com/wp-content/uploads/2017/11/create.png" alt="" width="302" height="101" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/create.png 302w, https://blogit.create.pt/wp-content/uploads/2017/11/create-300x100.png 300w" sizes="(max-width: 302px) 100vw, 302px" /></strong></p>
<p>&nbsp;</p>
<p><strong>For full test specifications, <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications">read here</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/">Latency test between Azure and On-Premises – Conclusions</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part Seven</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-seven/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-seven/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:40:45 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=814</guid>

					<description><![CDATA[<p>In this scenario we would be using a Relay. This is yet another way of connecting your on-premises infrastructure to Azure, but not at all recommended in terms of latency. We’ll explain it, don’t worry. We would also be using a Hybrid Connection to establish a “link” between my local machine and Function Apps. First [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-seven/">Latency test between Azure and On-Premises – Part Seven</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In this scenario we would be using a <em>Relay</em>. This is <strong>yet another way of connecting your on-premises</strong> <strong>infrastructure to Azure</strong>, but <strong>not at all recommended in terms of latency</strong>. We’ll explain it, don’t worry. We would also be using a Hybrid Connection to <strong>establish a “link” between my local machine and Function Apps</strong>.</p>
<p>The first step is to configure a Hybrid Connection in the Function App’s network configuration. Then you can run the Hybrid Connection Manager (assuming you’ve downloaded it from Azure) to manage your connection.</p>
<p><span id="more-814"></span></p>
<p><img decoding="async" class="aligncenter size-full wp-image-834" src="http://blogit-create.com/wp-content/uploads/2017/11/net-8.png" alt="" width="583" height="326" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-8.png 583w, https://blogit.create.pt/wp-content/uploads/2017/11/net-8-300x168.png 300w" sizes="(max-width: 583px) 100vw, 583px" /></p>
<p>&nbsp;</p>
<p>After saving which connection you want to use, you can always confirm the connection status in this window:</p>
<p><img decoding="async" class="aligncenter size-full wp-image-844" src="http://blogit-create.com/wp-content/uploads/2017/11/net-9.png" alt="" width="590" height="321" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-9.png 590w, https://blogit.create.pt/wp-content/uploads/2017/11/net-9-300x163.png 300w" sizes="(max-width: 590px) 100vw, 590px" /></p>
<p>&nbsp;</p>
<p>But wait, I see <strong><em>Service Bus</em></strong> referenced in that screenshot! What’s that?</p>
<p><img decoding="async" class="aligncenter size-full wp-image-854" src="http://blogit-create.com/wp-content/uploads/2017/11/net-10.png" alt="" width="600" height="343" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-10.png 600w, https://blogit.create.pt/wp-content/uploads/2017/11/net-10-300x172.png 300w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<p>&nbsp;</p>
<p><strong><em>Service Bus</em></strong> is a technology for <strong>asynchronously</strong> sending and receiving messages from multiple publishers to multiple subscribers. This is, as you’ve probably guessed, a <strong>publish/subscribe</strong> mechanism.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-864" src="http://blogit-create.com/wp-content/uploads/2017/11/net-11.png" alt="" width="624" height="320" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-11.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/net-11-300x154.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>This is an example of a publish/subscribe architecture. The image should be self-explanatory. A sender sends a message, and all subscribers have filters to only process the messages that they subscribe to.</p>
<p style="text-align: center"><strong>Why didn’t you test this method?</strong></p>
<p>Well, before you call me anything, you’ve got to understand <strong>why</strong>.</p>
<p style="text-align: center"><strong><em>Architecture!</em></strong></p>
<p>This concept relies on checking the bus for new messages from time to time, with a 10-second interval, for instance. Every 10 seconds, your program or service (or whatever you want to call it) checks the bus for new messages and retrieves them. This creates a “<strong><em>waiting game</em></strong>”, which is why it’s called an asynchronous process. <strong>It’s not designed to be fast, but to be reliable </strong>and to connect multiple services into one giant message box.</p>
<p><strong>Example:</strong></p>
<p>Think of a highway. A highway in the USA connects multiple states together, and you can choose where to leave it: a city in a certain state, or a totally different city in another state; it just depends on where you’re going. The service bus is like that highway. A publisher publishes a car. Each message (car) has a topic (destination), and every subscriber (city/state) retrieves the messages (cars) matching its topic (destination).</p>
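<p>The publish/subscribe idea can be sketched in a few lines. This is a toy in-memory model for illustration only, not the Azure Service Bus API:</p>

```python
import queue

class ToyBus:
    """A toy publish/subscribe bus: each subscriber gets a queue filtered by topic."""
    def __init__(self):
        self._subscriptions = []  # list of (topic, queue) pairs

    def subscribe(self, topic):
        q = queue.Queue()
        self._subscriptions.append((topic, q))
        return q

    def publish(self, topic, message):
        # Deliver only to subscribers whose filter matches the message's topic.
        for sub_topic, q in self._subscriptions:
            if sub_topic == topic:
                q.put(message)

bus = ToyBus()
lisbon = bus.subscribe("lisbon")   # a subscriber: a "city" on the highway
porto = bus.subscribe("porto")
bus.publish("lisbon", "a car bound for Lisbon")  # the publisher's "car"
```

<p>A real subscriber would poll its queue at an interval (e.g. <code>lisbon.get(timeout=10)</code>), which is exactly the “waiting game” that makes the process asynchronous rather than fast.</p>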
<p>This is the main reason latency doesn’t apply here. It’s a very robust and clean way of integrating services with applications, <strong>but latency isn’t a concern</strong> here<strong>. It’s just not made to be fast. It’s made to be simple.</strong></p>
<p><strong>Conclusions and summary <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions">next post</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-seven/">Latency test between Azure and On-Premises – Part Seven</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-seven/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part Six</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-six/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-six/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:35:22 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=714</guid>

					<description><![CDATA[<p>In this test, we’ll be using Function Apps and Logic Apps. Function Apps are a serverless way of running custom code in Azure. Serverless is capable of scaling when needed. We deployed this C# function that represents the Client Application (but without a GUI). &#160; This function calls our REST Web Service (On-Premises) via Point-to-Site VPN [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-six/">Latency test between Azure and On-Premises – Part Six</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In this test, we’ll be using Function Apps and Logic Apps. Function Apps are a <strong>serverless</strong> way of running custom code in Azure; serverless is capable of <strong>scaling when needed</strong>. We deployed this <em>C# </em>function, which represents the Client Application (but without a GUI).</p>
<p><img decoding="async" class="aligncenter size-full wp-image-734" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/net-1.png" alt="" width="599" height="367" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-1.png 599w, https://blogit.create.pt/wp-content/uploads/2017/11/net-1-300x184.png 300w" sizes="(max-width: 599px) 100vw, 599px" /></p>
<p>&nbsp;</p>
<p>This function calls our REST Web Service (On-Premises) via a Point-to-Site VPN connection to Azure. It can be called on demand via URL, via the Run button shown in the image, or via Logic Apps.</p>
<p><strong>Important note:<br />
In this scenario, the client makes <u>one single HTTP request</u>, instead of 10 requests as in the previous scenarios.</strong></p>
<p><span id="more-714"></span></p>
<p><img decoding="async" class="aligncenter size-full wp-image-744" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/net-2.png" alt="" width="640" height="394" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-2.png 640w, https://blogit.create.pt/wp-content/uploads/2017/11/net-2-300x185.png 300w, https://blogit.create.pt/wp-content/uploads/2017/11/net-2-356x220.png 356w" sizes="(max-width: 640px) 100vw, 640px" /></p>
<p>&nbsp;</p>
<p>This is a Logic App (Designer view). It starts with a Recurrence action, a timer set to run every 30 seconds. When the Recurrence action fires, the Logic App calls “HttpTriggerCSharp1”, the Function App shown previously. When the Function App finishes, the Logic App sends an e-mail to my Create account reporting the total execution time.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-754" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/net-3.png" alt="" width="622" height="268" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-3.png 622w, https://blogit.create.pt/wp-content/uploads/2017/11/net-3-300x129.png 300w" sizes="(max-width: 622px) 100vw, 622px" /></p>
<p>Finally, this is the e-mail sent by our Logic App. We can see that the total execution time reported by the Function App is 63ms. Read on for the full results!</p>
<p style="text-align: center"><strong>Test #6 10kb message</strong></p>
<p>Called by a Logic App trigger, the client makes an HTTP request to our On-Prem Service, receiving a 10KB message, the same as before.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-764" src="http://blogit-create.com/wp-content/uploads/2017/11/net-4.png" alt="" width="584" height="295" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-4.png 584w, https://blogit.create.pt/wp-content/uploads/2017/11/net-4-300x152.png 300w" sizes="(max-width: 584px) 100vw, 584px" /></p>
<p>&nbsp;</p>
<p>We can see that the total execution time was 59ms. This is measured by calling the Function App via the Run button; we do it this way because it makes the results easier to obtain. The same result is reported by e-mail when the function is called by Logic Apps. Execution time oscillated between 47ms and 70ms, with a median value of ~59ms. Not bad: slightly faster than TeamViewer VPN, but slower than a direct HTTP request without VPN.</p>
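<p>The ~59ms figure is the median of the observed runs. With hypothetical samples in the observed 47–70ms range (the individual per-run values were not published), the summary is just:</p>

```python
import statistics

# Hypothetical samples within the observed 47-70ms range, not the real runs:
samples_ms = [47, 52, 55, 58, 59, 60, 61, 64, 68, 70]

print(statistics.median(samples_ms))  # midpoint of the sorted samples
```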
<p>Comparing this same result to Site-to-Site VPN, we can see that this method took ~15ms more to execute. This can be a very important factor if you are choosing between PTS and Site-to-Site for your business. Also, <strong>Site-to-Site has a more constant flow</strong>, while this test <strong>was more irregular</strong> (it had more peaks).</p>
<p><strong>Rest assured that this method works, if you can trade a slight latency increase for a lower cost.</strong></p>
<p style="text-align: center"><strong>Test #6 100kb message</strong></p>
<p>Now for the 100KB message, we’ll repeat the same steps to obtain the results.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-774" src="http://blogit-create.com/wp-content/uploads/2017/11/net-5.png" alt="" width="599" height="300" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-5.png 599w, https://blogit.create.pt/wp-content/uploads/2017/11/net-5-300x150.png 300w" sizes="(max-width: 599px) 100vw, 599px" /></p>
<p>&nbsp;</p>
<p>As we can verify, the test took 78ms to execute. It peaked at 97ms and dropped to 68ms. Overall, it took <strong>~8ms more to execute than Site-to-Site but was ~22ms faster than TeamViewer</strong>. Compared with a direct HTTP request without any VPN, <strong>this was ~17ms slower</strong>. That’s quite a punch there. Let’s try 5MB.</p>
<p style="text-align: center"><strong>Test #6 5MB message</strong></p>
<p>Now to the ultimate 5MB test. No more words needed here.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-784" src="http://blogit-create.com/wp-content/uploads/2017/11/net-6.png" alt="" width="603" height="296" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-6.png 603w, https://blogit.create.pt/wp-content/uploads/2017/11/net-6-300x147.png 300w, https://blogit.create.pt/wp-content/uploads/2017/11/net-6-324x160.png 324w, https://blogit.create.pt/wp-content/uploads/2017/11/net-6-533x261.png 533w" sizes="(max-width: 603px) 100vw, 603px" /></p>
<p>&nbsp;</p>
<p>We can see that this is a deal breaker. The latency here is way above any other test we did: <strong>3.8s</strong>! This may happen for several reasons, the main ones being:</p>
<ol>
<li>Function App performance not meeting our requirements,</li>
<li>The way Azure connects the PTS VPN to Function Apps,</li>
<li>Consuming large payloads might be causing memory swaps server-side.</li>
</ol>
<p style="text-align: center"><strong>Overall test discussion</strong></p>
<p>Well, this is very surprising. Function Apps are clearly <strong>not the way to go for large messages.</strong> The processing is inefficient and therefore takes far too much time. <strong>A PTS connection to a VM in Azure was faster by more than one second, and surprisingly cheaper</strong>!</p>
<p>This solution may still be in Preview, or may have bugs, or maybe it just <strong>wasn’t made for this type of usage</strong>. Regarding complexity, it is clearly <strong>simpler to use than a VM</strong>: <strong>it’s serverless and scalable</strong>. You can get rid of the VM and make your functions directly accessible via any web browser on any platform.</p>
<p>For exposing <strong>your logic in a cleaner, faster way, this is what you need</strong>. For data crunching or analytics, it’s also the cleanest way, but if <strong>you’re planning to use it to make HTTP requests to your on-prem services, you should consider other options, unless execution time isn’t part of the equation.</strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-794" src="http://blogit-create.com/wp-content/uploads/2017/11/net-7.png" alt="" width="385" height="387" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-7.png 385w, https://blogit.create.pt/wp-content/uploads/2017/11/net-7-150x150.png 150w, https://blogit.create.pt/wp-content/uploads/2017/11/net-7-298x300.png 298w" sizes="(max-width: 385px) 100vw, 385px" /></p>
<p><strong>Read the next post <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-seven">here</a> about relays and service bus!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-six/">Latency test between Azure and On-Premises – Part Six</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-six/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part Five</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-five/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-five/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:30:45 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=634</guid>

					<description><![CDATA[<p>Point-to-Site. PTS from now on. PTS is a way of connecting single clients (machines) to a gateway in Azure, without connecting the entire infrastructure to it. In this case, you only need a client computer, instead of an enterprise router. The client machine connects directly to Azure via VPN. &#160; Will this be faster than [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-five/">Latency test between Azure and On-Premises – Part Five</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Point-to-Site. PTS from now on. PTS is a way of connecting <strong>single</strong> clients (machines) to a gateway in Azure, <strong>without</strong> connecting the entire infrastructure to it. In this case, you only need a client computer, instead of an enterprise router. The client machine connects directly to Azure via <strong>VPN</strong>.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-654" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen7.png" alt="" width="624" height="392" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen7.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/screen7-300x188.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>&nbsp;</p>
<p>Will this be faster than Azure Site-to-Site?</p>
<p><span id="more-634"></span></p>
<p style="text-align: center"><strong>Test #5 10kb message</strong></p>
<p>10Kb is our base message. PTS didn’t show any issues regarding this one.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-664" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen8.png" alt="" width="624" height="350" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen8.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/screen8-300x168.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>Execution time is practically the same as before (in the Site-to-Site scenario) with the same message size. <strong>1%</strong> is not noticeable in a large batch execution. Let’s see 100KB: if the results change more drastically, we can analyze them and find a reason.</p>
<p style="text-align: center"><strong>Test #5 100kb message</strong></p>
<p>We didn’t see a big difference before. Will 100kb be the game changer?</p>
<p><img decoding="async" class="aligncenter size-full wp-image-674" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen8-1.png" alt="" width="624" height="339" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen8-1.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/screen8-1-300x163.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>Actually, it is! It’s ~10ms faster than before! Site-to-Site is beginning to lose to PTS.</p>
<p><strong>Probable causes are:</strong></p>
<ol>
<li>Router delays due to packet queues (Site-to-Site)</li>
<li>Router processing power limitations of encrypting packets (and routing them)</li>
<li>General hardware limitations</li>
</ol>
<p>We start to see some differences now. ~10ms is a noticeable improvement. How fast will 5MB be?</p>
<p style="text-align: center"><strong>Test #5 5mb message</strong></p>
<p>The ultimate test on PTS.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-684" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen8-2.png" alt="" width="599" height="321" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen8-2.png 599w, https://blogit.create.pt/wp-content/uploads/2017/11/screen8-2-300x161.png 300w" sizes="(max-width: 599px) 100vw, 599px" /></p>
<p>Well, PTS <strong>is not</strong> for large messages. It’s inefficient and takes a lot of time! <strong>You should only use this for a one-time-only scenario</strong>, like accessing a VM or downloading some files from Azure. Processing large messages with PTS is <strong>not</strong> recommended, as execution times increased by ~1 second on average.</p>
<p style="text-align: center"><strong>Overall test discussion</strong></p>
<p>This is an awkward situation. At 100KB it’s faster than Site-to-Site, but at 5MB it’s not. While processing small to medium messages you should be fine. <strong>It is not recommended for production, as it showed some unusual peaks, so latency may be compromised.</strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-694" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/net.png" alt="" width="624" height="142" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/net-300x68.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>&nbsp;</p>
<p><strong>Go to <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-six">next test</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-five/">Latency test between Azure and On-Premises – Part Five</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-five/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part Four</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-four/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-four/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:25:30 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=554</guid>

					<description><![CDATA[<p>Starting with a real-world application of Azure (it’s used here on Create), this scenario is a direct 24/7 VPN link to a gateway in Azure. This is a business-oriented solution. The whole on-premises network is connected to a whole network of devices in Azure (only the ones associated to this VPN gateway obviously). Consider it [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-four/">Latency test between Azure and On-Premises – Part Four</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Starting with a real-world application of Azure (it’s used here at Create), this scenario is a direct 24/7 VPN link to a gateway in Azure. This is a <strong>business-oriented</strong> solution. The whole on-premises network is connected to a network of devices in Azure (only the ones associated with this VPN gateway, obviously).</p>
<p>Consider it an <strong>extended</strong> office, with VMs and Azure Functions running outside your premises, as if they were right next to you! <strong>It’s the future.</strong></p>
<p>We’ll be using the same messages, as well as the same service-client logic (via HTTP GET). Instead of using TeamViewer VPN or exposing our service, <strong>we’ll use a secure VPN connection to a gateway that has one VM, running the client app, associated with it.</strong></p>
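<p>To make the methodology concrete, here is a minimal, self-contained sketch of the kind of timing harness these tests rely on. A local HTTP server stands in for the on-premises REST service (the payload size, request count, and names are illustrative, not the actual test setup):</p>

```python
import http.server
import statistics
import threading
import time
import urllib.request

# Stand-in for the on-premises REST service: a local server that
# returns a fixed 10 KB payload, so the sketch is self-contained.
PAYLOAD = b"x" * 10 * 1024

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def timed_get(url: str) -> float:
    """Elapsed milliseconds for one GET request-response round trip."""
    start = time.perf_counter()
    body = urllib.request.urlopen(url).read()
    assert len(body) == len(PAYLOAD)
    return (time.perf_counter() - start) * 1000.0

samples = [timed_get(url) for _ in range(10)]
median_ms = statistics.median(samples)
print(f"median over {len(samples)} requests: {median_ms:.2f} ms")
server.shutdown()
```

<p>Pointed at the real service URL (routed over the VPN), the same loop would reproduce the per-message elapsed times reported in this series.</p>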
<p><span id="more-554"></span></p>
<p><img decoding="async" class="aligncenter size-full wp-image-574" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/cloud-1.png" alt="" width="540" height="275" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/cloud-1.png 540w, https://blogit.create.pt/wp-content/uploads/2017/11/cloud-1-300x153.png 300w" sizes="(max-width: 540px) 100vw, 540px" /></p>
<p>&nbsp;</p>
<p>We’ll start with 10KB, followed by 100KB and 5MB. Again, this is the most common scenario nowadays!</p>
<p style="text-align: center"><strong>Test #4 10kb message</strong></p>
<p>Running 10kb of data through Azure VPN Site-to-Site is surely a piece of cake.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-584" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/cloud-2.png" alt="" width="624" height="328" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/cloud-2.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/cloud-2-300x158.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>It took ~44ms to execute! That’s roughly the same as <strong>263 bits through TeamViewer VPN</strong>! Using the same 10KB, this test scenario has a <strong>reduced</strong> execution time of ~20ms! Now that’s an improvement. A secure, tight site-to-site VPN to Azure, with reduced latency! That’s quite a hit against P2P (<em>Peer-to-Peer</em>) VPN connections!</p>
<p style="text-align: center"><strong>Test #4 100kb message</strong></p>
<p>Well, after seeing a ~20ms drop with a 10kb message, will execution times drop with a message that’s <strong>10 times bigger?</strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-594" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/cloud-3.png" alt="" width="608" height="335" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/cloud-3.png 608w, https://blogit.create.pt/wp-content/uploads/2017/11/cloud-3-300x165.png 300w" sizes="(max-width: 608px) 100vw, 608px" /></p>
<p>&nbsp;</p>
<p>Remember how TeamViewer VPN managed ~100ms on this test? If that’s quick, this is <strong>blazing fast!</strong> ~30ms less than the P2P VPN solution. The Azure Site-to-Site connection is proving to be better than TeamViewer, and only ~9ms slower than HTTP without VPN. Let’s go for 5MB.</p>
<p style="text-align: center"><strong>Test #4 5MB message</strong></p>
<p>Let’s go through the ultimate test. Can we saturate the Azure VPN connection with this?</p>
<p><img decoding="async" class="aligncenter size-full wp-image-604" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/cloud-4.png" alt="" width="630" height="335" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/cloud-4.png 630w, https://blogit.create.pt/wp-content/uploads/2017/11/cloud-4-300x160.png 300w" sizes="(max-width: 630px) 100vw, 630px" /></p>
<p>&nbsp;</p>
<p>Well, that’s astonishing. For a message this size, execution time is only ~150ms greater! Guess that’s the price to pay for a secure tunnel. Direct HTTP execution times are smaller, but weighing risk against latency, this is a great candidate.</p>
<p style="text-align: center"><strong>Overall test discussion</strong></p>
<p>Well, this did prove me wrong. Initially, I had the idea that TeamViewer’s P2P VPN connection was bound to be faster, but it’s not! Azure Site-to-Site is <strong>faster</strong>, <strong>safer</strong>, <strong>scalable</strong> and <strong>available to multiple devices</strong>, both on Azure and on-premises. Personally, I think this solution is optimal for enterprise integration, whatever your business logic needs. For a <strong>fast and secure</strong> connection to cloud computing, this is an excellent candidate. It has <strong>99.9% uptime</strong>, which is great for 24/7 intensive data crunching or message exchanging.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-614" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/cloud-5.png" alt="" width="214" height="135" /></p>
<p>&nbsp;</p>
<p><strong>Proceed to test #5 <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-five">here</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-four/">Latency test between Azure and On-Premises – Part Four</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-four/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part Three</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-three/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-three/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:20:12 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=474</guid>

					<description><![CDATA[<p>This scenario is based by direct HTTP connection via exposed web service, accessible without VPN. This can be a very common situation nowadays too, as it’s just like a website or a web API, and it’s cheap! VPN connections can be the main cause of delays between two points. A secure tunnel brings delay into [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-three/">Latency test between Azure and On-Premises – Part Three</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>This scenario is based on a direct HTTP connection via an exposed web service, accessible without VPN. This is a very common situation nowadays too, as it’s just like a website or a web API, and it’s cheap! VPN connections can be a main cause of delays between two points: a secure tunnel brings delay into the equation by itself.</p>
<p>We’ll be exposing the REST Service (on-premises), and connecting the client application via HTTP (Azure), transmitting the same messages as previous tests.</p>
<p><span id="more-474"></span></p>
<p>Starting at 10KB, then 100KB and 5MB. 10KB is only about a fifth (1/5) of an average HTML file. For instance, Google’s main page is 392.8 kB in total.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-494" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen6.png" alt="" width="623" height="300" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen6.png 623w, https://blogit.create.pt/wp-content/uploads/2017/11/screen6-300x144.png 300w" sizes="(max-width: 623px) 100vw, 623px" /></p>
<p>&nbsp;</p>
<p><strong>Exposed services</strong>. The hackers’ paradise and the snoopers’ ocean. A security risk for sure. VPN protects you from all that: it’s a secure tunnel to anywhere.</p>
<p>&nbsp;</p>
<p style="text-align: center"><strong>Test #3 10kb message</strong></p>
<p>After changing the port-forwarding configuration on our on-prem router, we can expose our REST web service, listening on port 65443 (a randomly chosen port). This is for testing purposes only, and the rule will be removed immediately after all testing is done (for security reasons).</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-504" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen6-1.png" alt="" width="606" height="327" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen6-1.png 606w, https://blogit.create.pt/wp-content/uploads/2017/11/screen6-1-300x162.png 300w" sizes="(max-width: 606px) 100vw, 606px" /></p>
<p>&nbsp;</p>
<p>The results are in! It took almost the same time as before (with TeamViewer VPN)! This is amazing, because the VPN added practically no delay, at least for this message size! Execution times peaked at ~55ms.</p>
<p style="text-align: center"><strong>Test #3 100kb message</strong></p>
<p>Now for 100KB of text. Will plain HTTP be faster than HTTP over VPN?</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-514" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen6-2.png" alt="" width="647" height="334" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen6-2.png 647w, https://blogit.create.pt/wp-content/uploads/2017/11/screen6-2-300x155.png 300w" sizes="(max-width: 647px) 100vw, 647px" /></p>
<p>&nbsp;</p>
<p>WOW! ~40ms less each! TeamViewer VPN is losing ground now! This took ~60ms per message for 100KB of data, almost half the time the VPN took! How about 5MB of data?</p>
<p style="text-align: center"><strong>Test #3 5MB message</strong></p>
<p>The ultimate 5MB of text over HTTP. No explaining needed here… the image speaks for itself.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-524" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen6-3.png" alt="" width="638" height="330" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen6-3.png 638w, https://blogit.create.pt/wp-content/uploads/2017/11/screen6-3-300x155.png 300w" sizes="(max-width: 638px) 100vw, 638px" /></p>
<p>&nbsp;</p>
<p>Wait, didn’t TeamViewer VPN take ~1600ms per message to execute? This is very surprising, as we start to notice serious delay caused by the VPN! ~1.2 seconds each is what plain HTTP takes to execute a request and receive a 5MB response. If you do the math, for every 10 messages you save ~4 seconds! That’s a lot!</p>
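<p>That savings arithmetic, as a quick check (the two round-trip figures are the approximate measurements quoted in this series):</p>

```python
# Approximate per-message round trips for a 5 MB response, from the tests above.
vpn_ms = 1600     # ~TeamViewer VPN
direct_ms = 1200  # ~direct HTTP, no VPN

saving_per_message_ms = vpn_ms - direct_ms            # ~400 ms per message
saving_per_10_messages_s = saving_per_message_ms * 10 / 1000
print(f"saved per 10 messages: ~{saving_per_10_messages_s:.0f} s")  # ~4 s
```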
<p style="text-align: center"><strong>Overall test discussion</strong></p>
<p>Well, this is a no-brainer. TeamViewer VPN didn’t stand a chance here. For 10KB of data, the results are nearly the same: neither VPN nor HTTP (no VPN) struggled with 10KB, but 100KB and 5MB showed big differences. No VPN proved to be faster for larger messages, but VPN provides a safer, encrypted transmission. This one is your call: speed over security. Exposed services are a serious risk, but VPN is a bit slower. In execution time, HTTP without VPN is clearly the winner here.</p>
<p><img decoding="async" class="size-full wp-image-534 aligncenter" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/cloud.png" alt="" width="162" height="123" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/cloud.png 162w, https://blogit.create.pt/wp-content/uploads/2017/11/cloud-80x60.png 80w" sizes="(max-width: 162px) 100vw, 162px" /></p>
<p><strong> Test#4 on the <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-four">next post</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-three/">Latency test between Azure and On-Premises – Part Three</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-three/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part Two</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-two/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-two/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:15:56 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=384</guid>

					<description><![CDATA[<p>In this scenario, we’ll keep it simple. This is the full on-premises scenario. You must worry about maintenance and power bills, and I’m not even counting on infrastructure costs (high-end routers, switches, cabinets, etc.). The machine with the service is running via Ethernet cable, while the client machine is running via Wi-Fi. This is the [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-two/">Latency test between Azure and On-Premises – Part Two</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In this scenario, we’ll keep it simple. This is the full on-premises scenario. You <strong>must</strong> worry about maintenance and power bills, and I’m <strong>not</strong> even counting infrastructure costs (high-end routers, switches, cabinets, etc.).</p>
<p>The machine hosting the service is connected via Ethernet cable, while the client machine is on Wi-Fi. This is the most common connection scenario nowadays.</p>
<p>We’ll do a 10KB test for initial values and escalate to 100KB and 5MB. These messages are the same as in the previous test scenario (test #1, TeamViewer VPN).</p>
<p>Let’s start!</p>
<p style="text-align: center"><strong>Test #2 10kb message</strong></p>
<p>With a 10KB message, it took ~6ms to run per message. Even with peaks of ~10ms, it’s much faster than the previous scenario with the same message. This is due to proximity and the advantages of a LAN connection. It’s a short trip between the two machines; no need to route packets for miles. Bandwidth is a crucial factor here: to saturate a 1Gbps LAN connection you need a large file travelling through it, and that’s not the case here.</p>
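<p>A back-of-the-envelope sketch of why these message sizes barely register on a LAN: the serialization delay of a payload is its size divided by the link speed (theoretical line rate, ignoring protocol overhead):</p>

```python
def transfer_ms(size_bytes: int, link_bps: float) -> float:
    """Theoretical time to push a payload onto a link, in milliseconds."""
    return size_bytes * 8 / link_bps * 1000

GBPS = 1_000_000_000

# A 10 KB message occupies a 1 Gbps LAN for under a tenth of a millisecond...
print(f"10 KB @ 1 Gbps: {transfer_ms(10 * 1024, GBPS):.3f} ms")
# ...and even 5 MB needs only a few tens of milliseconds of wire time.
print(f" 5 MB @ 1 Gbps: {transfer_ms(5 * 1024 * 1024, GBPS):.1f} ms")
```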
<p><span id="more-384"></span></p>
<p><img decoding="async" class="aligncenter size-full wp-image-394" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen4.png" alt="" width="574" height="302" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen4.png 574w, https://blogit.create.pt/wp-content/uploads/2017/11/screen4-300x158.png 300w" sizes="(max-width: 574px) 100vw, 574px" /></p>
<p>&nbsp;</p>
<p>In case you’re wondering, we’ve added a Reset index button and batch execution. <strong>Why?</strong></p>
<h1 style="text-align: center"><strong>New test functionalities for real-world applications</strong></h1>
<p>This is the main reason: batch sending and receiving of messages, calculating median execution times, and getting more real-world results. The Reset index button resets the OK button count, to control the number of requests made. It isn’t rocket science. Let’s move on to 100KB.</p>
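<p>Why report medians rather than means? A toy batch shows it (the numbers below are illustrative, echoing the first-request spike seen in the earlier table):</p>

```python
import statistics

# Hypothetical elapsed times (ms) for one batch; the 177 ms entry mimics
# the first-request spike caused by page rendering and IIS warm-up.
elapsed_ms = [44, 42, 45, 41, 177, 43, 44, 42, 45, 43]

print("median:", statistics.median(elapsed_ms), "ms")          # robust to the spike
print("mean:  ", round(statistics.mean(elapsed_ms), 1), "ms")  # dragged up by it
```

<p>The single cold-start outlier barely moves the median but inflates the mean well above any typical request, which is why the batch results here quote medians.</p>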
<p style="text-align: center"><strong>Test #2 100kb message</strong></p>
<p>Now for a 100KB message. This one took, on average, 17ms per message, peaking at ~25ms.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-404" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen4-1.png" alt="" width="611" height="330" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen4-1.png 611w, https://blogit.create.pt/wp-content/uploads/2017/11/screen4-1-300x162.png 300w" sizes="(max-width: 611px) 100vw, 611px" /></p>
<p>&nbsp;</p>
<p>The same message travelled much faster than before, and response times fell. Ten times the size, with only ~10ms added to the execution time. Not bad at all!</p>
<p style="text-align: center"><strong>Test #2 5MB message</strong></p>
<p>This is the ultimate test. The big data travelling. The worst-case scenario. Whatever you want to call it, it’s the most intensive in terms of bandwidth.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-414" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen4-2.png" alt="" width="625" height="330" /></p>
<p>&nbsp;</p>
<p><strong>What!?</strong> 10 messages of 5MB each, in only ~563ms apiece? That’s much faster than Azure!</p>
<p><strong>Why you ask?</strong></p>
<p>This happens a lot. Local Area Networks (LANs) are usually faster. The connection to the outside world is flooded with data, with millions of routers in between. No wonder it takes longer. Which would be faster: copying a 1GB file from your colleague’s computer, or downloading the same file from the Internet? <strong>Yep, your colleague’s computer.</strong> A LAN connection is much faster (via Ethernet; Wi-Fi may not apply, depending on the hardware you have available).</p>
<p style="text-align: center"><strong>Final overview of this scenario</strong></p>
<p>Well, here we are. At this point, you must be thinking: “<strong>Oh, why the hell should I migrate my solution to Azure?</strong>”</p>
<p>Let me explain:</p>
<h4><strong>1 – Dust! </strong></h4>
<p>Your <strong>worst</strong> enemy inside servers that are powered on 24/7. If you are not careful, dust may cause overheating and damage components, and all your data, or your business, may stop! You have to do maintenance frequently, or hire someone to do it for you!</p>
<p><img decoding="async" class="aligncenter size-full wp-image-424" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen5.png" alt="" width="318" height="240" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen5.png 318w, https://blogit.create.pt/wp-content/uploads/2017/11/screen5-300x226.png 300w, https://blogit.create.pt/wp-content/uploads/2017/11/screen5-80x60.png 80w" sizes="(max-width: 318px) 100vw, 318px" /></p>
<p><strong><em>2 – Hardware!</em></strong></p>
<p>Your <strong>main</strong> hardware concerns. <strong>RAID</strong> (<em>Redundant Array of Independent Disks</em>) is used worldwide for data mirroring and data safety, but drives still fail (how badly depends on your array configuration), and you can lose data for good. In short, you have hard disk drives, RAID controllers, memory chips, motherboards, NICs, power supplies… Everything is critical inside a server! You don’t want to be this guy when something fails.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-434" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen5-1.png" alt="" width="321" height="224" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen5-1.png 321w, https://blogit.create.pt/wp-content/uploads/2017/11/screen5-1-300x209.png 300w, https://blogit.create.pt/wp-content/uploads/2017/11/screen5-1-100x70.png 100w" sizes="(max-width: 321px) 100vw, 321px" /></p>
<p>&nbsp;</p>
<p><strong><em>3 – Infrastructure</em></strong></p>
<p><strong>Cost.</strong> It’s written all over it. Power-hungry servers are heat sources, so you must have air conditioning to cool them down. The usual monthly cost of a medium-sized server room is around $1,400, not counting indirect costs! Everything in this room needs maintenance and, again, can break down. This generates downtime and can be very harmful to your business. You must have people who oversee this room (an employee or an IT company, for instance).</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-444" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen5-2.png" alt="" width="351" height="274" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen5-2.png 351w, https://blogit.create.pt/wp-content/uploads/2017/11/screen5-2-300x234.png 300w" sizes="(max-width: 351px) 100vw, 351px" /></p>
<p>&nbsp;</p>
<p>Despite these disadvantages, <strong>this is the fastest way of exchanging data, the fastest way of sending messages</strong>. But… but what?</p>
<h3 style="text-align: center"><strong>But it’s not scalable.</strong></h3>
<p><strong>Scalability</strong>. If your business services suddenly need more CPU processing power, or more RAM, your server might be incapable of handling it! Physical hardware limitations exist. In Azure, with a few clicks, you have all the performance boost you need, <strong>with little to no downtime at all!</strong></p>
<p><strong>Replication</strong>. Server replication is a clever way of avoiding downtime. On-premises it comes with bigger costs, while the cloud manages it for you out of the box, and more cheaply!</p>
<blockquote>
<p style="text-align: center"><strong>&#8220;Day-to-day usage of on-premises server rooms is a thing of the past!&#8221;</strong></p>
</blockquote>
<p><strong>Read on to the <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-three">next part</a>, Via HTTP GET (without VPN)!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-two/">Latency test between Azure and On-Premises – Part Two</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-two/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Part One</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-one/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-one/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:01:08 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=294</guid>

					<description><![CDATA[<p>First test &#8211; TeamViewer VPN &#160; In this first test, we are connected via TeamViewer VPN (specification of this technology at the last post) to an Azure VM (Virtual Machine). This VM is running Windows 10 retail version, with Visual Studio 2017 installed. Visual Studio is used to change the requests made to the OnPremService, [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-one/">Latency test between Azure and On-Premises – Part One</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center"><strong>First test &#8211; TeamViewer VPN</strong></p>
<p>&nbsp;</p>
<p>In this first test, we are connected via TeamViewer VPN (this technology is specified in the last post) to an Azure VM (Virtual Machine). The VM is running a retail version of Windows 10, with Visual Studio 2017 installed. Visual Studio is used to change the requests made to the OnPremService, to get more precise results. We are also using Microsoft’s Edge browser to run the client web app.</p>
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter wp-image-304" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen1.png" alt="" width="658" height="109" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen1.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/screen1-300x50.png 300w" sizes="(max-width: 658px) 100vw, 658px" /></p>
<p><span id="more-294"></span></p>
<p>&nbsp;</p>
<p>This is a screenshot of the Azure Client App UI. On this screen, we see an “OK” button and three text fields. The first two fields are populated with the current time (in milliseconds), and the third one with the total elapsed time, also in milliseconds. The first field is filled with a timestamp generated when you click the OK button. The second field is filled with a timestamp too, but only when the client receives a response from the OnPremService. The third is then populated with the difference between the two timestamps (second field – first field). This gives us the elapsed time between pressing the button (making a request) and receiving a response (with data from the OnPremService).</p>
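<p>The three-field logic boils down to a few lines. A minimal sketch, using a monotonic clock and a sleep as a stand-in for the request-response round trip (the names and the 40 ms figure are illustrative):</p>

```python
import time

def timed_request(simulated_round_trip_s: float = 0.04) -> float:
    """Return elapsed milliseconds, mirroring the three UI fields."""
    t_click = time.perf_counter()        # field 1: timestamp at OK click
    time.sleep(simulated_round_trip_s)   # stand-in for request + response
    t_response = time.perf_counter()     # field 2: timestamp at response
    return (t_response - t_click) * 1000.0  # field 3: the difference

elapsed_ms = timed_request()
print(f"elapsed: {elapsed_ms:.1f} ms")
```

<p>Note the monotonic clock: subtracting wall-clock timestamps can misbehave if the system clock adjusts mid-request, while <code>perf_counter</code> only ever moves forward.</p>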
<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter size-full wp-image-312" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen2.png" alt="" width="624" height="178" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen2.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/screen2-300x86.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p>&nbsp;</p>
<p>Here we see the results. The “77” written on the button is the total number of requests made by the Azure Client App. This number is sent by the OnPremService. We can see that this operation took 42ms to execute.</p>
<p>&nbsp;</p>
<p><em><strong>Median values</strong></em></p>
<table>
<tbody>
<tr>
<td width="312">Number of Requests</td>
<td width="312">Total elapsed time</td>
</tr>
<tr>
<td width="312">1</td>
<td width="312">177ms</td>
</tr>
<tr>
<td width="312">5</td>
<td width="312">~45ms</td>
</tr>
<tr>
<td width="312">10</td>
<td width="312">~44ms</td>
</tr>
<tr>
<td width="312">20</td>
<td width="312">~42ms</td>
</tr>
<tr>
<td width="312">50</td>
<td width="312">~41ms</td>
</tr>
<tr>
<td width="312">75</td>
<td width="312">~47ms</td>
</tr>
<tr>
<td width="312">100</td>
<td width="312">~45ms</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>Above are the median values of the total elapsed times. The first one is higher because the browser still needs to render the page, the services are being woken up inside IIS, and the internet routes are still being established. After the first request, the total elapsed times are very consistent, with no peaks.</p>
<p>This is only a simple request-response mechanism via VPN. So far, the data travelling inside the VPN is no larger than 263 bits. This makes it an absolutely minimal “ping-like” test, showing us the real bare-minimum connection latency.</p>
<p>&nbsp;</p>
<p style="text-align: center"><strong>Test #1 10KB MESSAGE</strong></p>
<p>Now we are going to repeat all those steps, but instead of showing a table with results, we’ll summarize them, to improve readability.<br />
In this test, we’ll increase the response to 10KB (kilobytes) of plain text (shown in the text area below the button), to see how the total elapsed time changes.<br />
After increasing the data size, the total elapsed time stayed roughly at ~65ms, with random peaks (the worst results fall roughly between 125 and 150ms). We see an increase of ~20ms.<br />
Not too bad, considering that the data size is now ~<strong>300</strong> times larger than in the base test!</p>
<p><img decoding="async" class="size-full wp-image-314 aligncenter" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen3.png" alt="" width="325" height="207" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen3.png 325w, https://blogit.create.pt/wp-content/uploads/2017/11/screen3-300x191.png 300w, https://blogit.create.pt/wp-content/uploads/2017/11/screen3-324x207.png 324w" sizes="(max-width: 325px) 100vw, 325px" /></p>
<p style="text-align: center"><strong>Test #1 100KB MESSAGE</strong></p>
<p>With the same UI and the same functionality, let’s increase the data size even more, to 100KB of text (10 times the previous size).<br />
The initial request’s elapsed time was ~440ms, and right after it, times stayed roughly at ~100ms, with peaks averaging 150ms. Again, not too bad!</p>
<p><img decoding="async" class="aligncenter size-full wp-image-324" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen3-1.png" alt="" width="325" height="207" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen3-1.png 325w, https://blogit.create.pt/wp-content/uploads/2017/11/screen3-1-300x191.png 300w, https://blogit.create.pt/wp-content/uploads/2017/11/screen3-1-324x207.png 324w" sizes="(max-width: 325px) 100vw, 325px" /></p>
<p>&nbsp;</p>
<p style="text-align: center"><strong>Test #1 5MB MESSAGE</strong></p>
<p>&nbsp;</p>
<p>With the same UI and the same functionality, let’s increase the data size even more, to 5120KB (50 times the previous size), i.e. 5MB of plain text. This is the last test of this scenario.<br />
The initial request took ~1.8 seconds (1809ms) to execute. This is quite a considerable amount of time when execution time is critical.<br />
Subsequent requests stayed roughly in line with the initial one, and all sorts of spikes were detected: some requests took nearly one second to execute, others up to two and a half (2.5) seconds to complete. The results are more inconsistent because this payload is huge for an HTTP GET. Although this may not be a VPN problem, HTTP becomes a very impractical way to transfer a message this large.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-334" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen3-2.png" alt="" width="330" height="207" /></p>
<p style="text-align: center"><strong>Final overview of this scenario</strong></p>
<p>Summing up, this is a very consistent method. If you need to send data repeatedly, in small amounts, it is a great candidate. HTTP methods via TeamViewer VPN proved to be low-latency and very easy to implement, with fast responses as long as data sizes stay small.</p>
<p><strong>Table of average execution times regarding this test scenario and data sizes</strong></p>
<table>
<tbody>
<tr>
<td width="208"><strong>#TEST</strong></td>
<td width="208">
<p style="text-align: left"><strong>DATA SIZE</strong></p>
</td>
<td width="208"><strong>AVG ELAPSED TIME</strong></td>
</tr>
<tr>
<td width="208">1</td>
<td width="208">263 bits</td>
<td width="208">~45ms</td>
</tr>
<tr>
<td width="208">2</td>
<td width="208">10kb</td>
<td width="208">~65ms</td>
</tr>
<tr>
<td width="208">3</td>
<td width="208">100kb</td>
<td width="208">~100ms</td>
</tr>
<tr>
<td width="208">4</td>
<td width="208">5MB</td>
<td width="208">~1600ms</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p style="text-align: center"><img decoding="async" class="aligncenter size-full wp-image-344" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/screen3-3.png" alt="" width="577" height="337" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/screen3-3.png 577w, https://blogit.create.pt/wp-content/uploads/2017/11/screen3-3-300x175.png 300w" sizes="(max-width: 577px) 100vw, 577px" /></p>
<p>&nbsp;</p>
<p style="text-align: center"><strong>Graph of data size vs execution time</strong></p>
<p>In this graph, we can see that average execution time grows steeply once the data size goes above 100KB. Going to 1MB shouldn’t be too bad, but you start to sacrifice execution time for transmitting large data. It’s best to send the data in chunks and reassemble it in the client app. The maximum execution time is well above the average; this is because hibernated services take time to come back to life. When everything is up and running, average times go way down!</p>
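<p>The chunking idea can be sketched like this. This is not part of the test setup, just a hypothetical Python illustration, where <code>fetch_range</code> stands in for a ranged GET against the service:</p>

```python
def fetch_in_chunks(fetch_range, total_size, chunk_size=100 * 1024):
    """Request a large payload in fixed-size pieces and reassemble it
    client-side, keeping each individual request in the cheap range."""
    parts, offset = [], 0
    while offset < total_size:
        length = min(chunk_size, total_size - offset)
        parts.append(fetch_range(offset, length))  # one small request
        offset += length
    return b"".join(parts)                         # reassembled payload
```

<p>With a 100KB chunk size, each request stays near the ~100ms sweet spot measured above, instead of one multi-second 5MB transfer.</p>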
<p>&nbsp;</p>
<p><strong>Make sure you don&#8217;t miss <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-two">next part</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-one/">Latency test between Azure and On-Premises – Part One</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-one/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises &#8211; Intro</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/#comments</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:00:48 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=264</guid>

					<description><![CDATA[<p>&#160; In these series of posts, we’re going to compare different ways of connecting to Azure. We’ll setup a web service and a client. We’re going to see which architecture is optimal in terms of latency, cost and effort. Despite results being very similar, the scenarios are very different between them. The Cloud has multiple [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/">Latency test between Azure and On-Premises &#8211; Intro</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>&nbsp;</p>
<p><img decoding="async" class="aligncenter wp-image-274" src="http://blogit.create.pt/gustavobrito/wp-content/uploads/sites/274/2017/11/azure.png" alt="" width="705" height="275" /></p>
<p>In this series of posts, we’re going to compare different ways of connecting to Azure. We’ll set up a web service and a client, and see which architecture is optimal in terms of latency, cost and effort. Despite the results being very similar, the scenarios differ greatly from one another. The Cloud has multiple ways of integrating your business: some are cheaper, some are more expensive. We’re here to break down the barrier between cost, effort and the actual benefits of each scenario. This article was made by analyzing live results, repeating each test more than once, to ensure that no other variables interfered with our results.</p>
<blockquote>
<p style="text-align: center">“With Azure, we can rely on our own core competencies, and not have to build the underlying infrastructure.”</p>
<p style="text-align: center"><em>Nik Shroff, Director of Microsoft Solutions, Adobe</em></p>
<p>&nbsp;</p></blockquote>
<p><span id="more-264"></span></p>
<p style="text-align: center"><strong><em>INTRODUCING…</em></strong></p>
<p>We live in a world that is managed by computers, and IT is improving every day. Humans are creating clever, autonomous ways to manage businesses and citizens’ private lives. Social networks and Cloud storage (<em>i.e.</em> OneDrive) are examples of the huge power that’s available in the Cloud nowadays.</p>
<p>Today, people rely on the internet for their day-to-day lives, from their personal agendas to a “<em>like</em>” on Facebook or a post on Instagram. Businesses rely on computing power that’s available externally, for a fee. This can be cheap if you are running a small business with few transactions. Consider this: imagine the power requirements and bandwidth you would need to run Facebook’s servers at home. Could you host billions of people viewing photos and reacting to posts every minute, or even every second? Imagine the huge processing power and data storage requirements&#8230; that’s right, you couldn’t!</p>
<p style="text-align: center"><strong><em>&#8230;CLOUD</em></strong></p>
<p>Cloud. The <em>thing</em> that’s running behind the biggest companies and technologies. People sometimes think that the cloud is one enormous computer with tons of processors, billions of TB of RAM and millions of hard drives, all connected to the internet. <strong>Wrong</strong>. The cloud is a set of multiple computers, replicated all over the world. When you connect to a social network, one second you could be connected to a server in Spain, and the next second to a server in Austria!</p>
<p>The cloud is a clever way to have your processing needs satisfied without worrying about maintenance or millions of dollars in power bills. Everything inside a cloud is scalable: if your business grows, the cloud plan grows according to your needs, without much effort or services going offline.</p>
<p>In this article, we’re going to explain the technical details and real-world results. How long does it take to process information when all operations are hosted in the Cloud? How long does it take to process the same information when your operations are all on-premises? How can you have the best of both worlds (Hybrid Cloud)?</p>
<p style="text-align: center"><strong>WAIT, WHAT?? Hybrid Cloud?</strong></p>
<p>Hybrid Cloud? What’s that?</p>
<p>I’ll give you an example. Imagine that you run a wallet-manufacturing start-up. You have a few wallets and you’re ready to open a store.</p>
<p>Now you have an open store and wallets to sell. You have a small database and a computer to register clients, print receipts and manage sales. In this situation, you have an <em>on-premises </em>solution: everything is stored inside your facilities.</p>
<p>Now imagine that you get a boost in sales and are becoming huge in the wallet business. You have a batch of millions in production plus millions more in stock. You also have wallets all over the world, in stores, at online retailers like <em>Amazon</em>, etc. You are the Nº 1 in wallets worldwide. Would you still keep all your information on the computer inside your store? Seriously?</p>
<p><strong>NO</strong>! Now that your processing and data storage needs are huge and increasing every day, you can’t keep all of this <em>on-premises</em>; <strong>it’s just too expensive and irrational.</strong> You need a Cloud solution that is available to all your employees and stores all over the world. Here’s what you have now:</p>
<ul>
<li>You have software on a computer (per store) that manages the employees and stock of that same store;</li>
<li>You have a huge database stored in the cloud;</li>
<li>You have all your client information in the cloud.</li>
</ul>
<p>A full Cloud solution is when you have all of the above stored in the cloud. A hybrid-cloud solution is when you divide and share technologies between the Cloud and on-premises. That’s it. Quite simple, right?</p>
<p style="text-align: center"><strong>ACTUAL NUMBERS</strong></p>
<p>Now that all the concepts are known, we ran some tests to get baseline numbers and find out whether, in a given scenario, it’s better to move your on-premises solution to a Hybrid Cloud setup.</p>
<p style="text-align: center"><strong><em>Base test</em></strong></p>
<p>Test Scenario</p>
<p>We have a REST service, written in <em>C#</em>, that is hosted <em>on-prem</em>. This service returns a simple string (for now) with the number of clicks that were submitted by the client web app. This service is called the “<strong>OnPremService</strong>” (for further reference).</p>
<p>We also have a Web App (web application), also written in <em>C#</em>. This Web App is the client app, the one that calls the OnPremService with a request. It is called the “<strong>Azure Client App</strong>” (for further reference). This web app runs on Azure (West Europe) and is available via any browser.</p>
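<p>To make the contract concrete: each GET simply bumps a counter and returns it as a plain string. The actual service is written in C#; this Python stub is only a hypothetical illustration of the behaviour, not the real code:</p>

```python
class OnPremServiceStub:
    """In-memory stand-in for the OnPremService REST endpoint: every GET
    returns the running total of requests received, as a plain string."""
    def __init__(self):
        self.clicks = 0

    def get(self):              # one "OK"-button press == one GET
        self.clicks += 1
        return str(self.clicks)
```

<p>Keeping the payload this tiny is what makes the base test a bare-minimum measurement of connection latency rather than of data transfer.</p>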
<p>The Azure Client App will be connected to the OnPremService in multiple ways:</p>
<ul>
<li><strong>Test #1</strong> &#8211; Via TeamViewer VPN (with HTTP GET);</li>
<li><strong>Test #2</strong> – Local On-Premises (LAN without VPN, using HTTP GET);</li>
<li><strong>Test #3</strong> &#8211; Via HTTP request (without VPN);</li>
<li><strong>Test #4</strong> &#8211; Via Azure Site-to-Site VPN (with HTTP GET);</li>
<li><strong>Test #5</strong> &#8211; Via Azure Point-to-Site VPN (with HTTP GET);</li>
<li><strong>Test #6</strong> &#8211; Via Azure Function Apps &amp; Logic Apps (with HTTP GET);</li>
</ul>
<p>Also, we’ll talk about another way of doing the migration, but it is not recommended if latency is an important factor.</p>
<ul>
<li><strong>Discussion #1</strong> – Via Relay using Hybrid Connection (Topic) (with HTTP GET);</li>
</ul>
<p><strong>Read on to <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-part-one">part one</a> and start finding out which one’s faster for your needs!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/">Latency test between Azure and On-Premises &#8211; Intro</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
	</channel>
</rss>
