<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Cloud Archives - Blog IT</title>
	<atom:link href="https://blogit.create.pt/category/cloud/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogit.create.pt/category/cloud/</link>
	<description>Create IT blogger community</description>
	<lastBuildDate>Tue, 30 Jan 2024 18:30:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Provision a database programmatically in Azure SQL database with a failover group</title>
		<link>https://blogit.create.pt/miguelisidoro/2024/01/24/provision-a-database-programmatically-in-azure-sql-database-with-a-failover-group/</link>
					<comments>https://blogit.create.pt/miguelisidoro/2024/01/24/provision-a-database-programmatically-in-azure-sql-database-with-a-failover-group/#respond</comments>
		
		<dc:creator><![CDATA[Miguel Isidoro]]></dc:creator>
		<pubDate>Wed, 24 Jan 2024 14:32:24 +0000</pubDate>
				<category><![CDATA[Sql Server]]></category>
		<category><![CDATA[C#]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[SQL Server]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12852</guid>

					<description><![CDATA[<p>This post will explain how to provision a database programmatically in an Azure SQL database and add it to an Azure SQL failover group. Introduction An Azure SQL Server failover group is a group of databases that can be automatically or manually failed over from a primary server to a secondary server in a different [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/miguelisidoro/2024/01/24/provision-a-database-programmatically-in-azure-sql-database-with-a-failover-group/">Provision a database programmatically in Azure SQL database with a failover group</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This post will explain how to provision a database programmatically in an Azure SQL database and add it to an Azure SQL failover group.</p>



<h2 class="wp-block-heading">Introduction</h2>



<p>An Azure SQL Server failover group is a group of databases that can be automatically or manually failed over from a primary server to a secondary server in a different Azure region in case of a disaster in the primary server. Failover groups provide high availability and disaster recovery for Azure SQL Server databases.</p>



<p>The process described in this post consists of two main steps:</p>



<ul class="wp-block-list">
<li>Provisioning the database</li>



<li>Adding the database to the failover group </li>
</ul>



<h2 class="wp-block-heading">Provisioning the database</h2>



<p>The first step is to create the database. It consists of the following actions:</p>



<ul class="wp-block-list">
<li>Get the tenant information</li>



<li>Create the database</li>
</ul>



<p>Tenant information class:</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="791" height="381" src="https://blogit.create.pt/wp-content/uploads/2024/01/Regions.jpg" alt="" class="wp-image-12869" srcset="https://blogit.create.pt/wp-content/uploads/2024/01/Regions.jpg 791w, https://blogit.create.pt/wp-content/uploads/2024/01/Regions-300x145.jpg 300w, https://blogit.create.pt/wp-content/uploads/2024/01/Regions-768x370.jpg 768w, https://blogit.create.pt/wp-content/uploads/2024/01/Regions-696x335.jpg 696w" sizes="(max-width: 791px) 100vw, 791px" /></figure>



<p>Code to create the database programmatically:</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="722" src="https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-1024x722.jpg" alt="" class="wp-image-12866" srcset="https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-1024x722.jpg 1024w, https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-300x212.jpg 300w, https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-768x542.jpg 768w, https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-696x491.jpg 696w, https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-596x420.jpg 596w, https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase-100x70.jpg 100w, https://blogit.create.pt/wp-content/uploads/2024/01/CreateDatabase.jpg 1065w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
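<p>Since the creation code above is shown only as an image, here is a rough, hedged sketch of the request body involved. The shape follows the Azure SQL &#8220;Databases &#8211; Create Or Update&#8221; REST contract; the function name and the S0 tier default are illustrative assumptions, not the post&#8217;s actual code:</p>

```python
def database_create_params(location: str, sku_name: str = "S0") -> dict:
    """Build the request body for creating an Azure SQL database.

    Shape based on the Azure SQL 'Databases - Create Or Update' REST
    operation: a location plus an optional sku. S0 is just an example tier.
    """
    params = {"location": location}
    if sku_name:  # omit the sku to let the service pick its default
        params["sku"] = {"name": sku_name}
    return params
```

In the post&#8217;s C# code, an equivalent object is passed to the SDK call that creates the database on the primary server.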



<h2 class="wp-block-heading">Adding the database to the failover group</h2>



<p>The second step adds the newly created database to the failover group. It consists of the following actions:</p>



<ul class="wp-block-list">
<li>Get the failover group where we want to add the database</li>



<li>Re-add the existing databases to the failover group &#8211; necessary because the failover group is retrieved with an empty database list; without this step, the updated failover group would contain only the new database</li>



<li>Add the new database to the failover group</li>
</ul>



<figure class="wp-block-image size-full"><img decoding="async" width="790" height="344" src="https://blogit.create.pt/wp-content/uploads/2024/01/Failovergroup.jpg" alt="" class="wp-image-12876" srcset="https://blogit.create.pt/wp-content/uploads/2024/01/Failovergroup.jpg 790w, https://blogit.create.pt/wp-content/uploads/2024/01/Failovergroup-300x131.jpg 300w, https://blogit.create.pt/wp-content/uploads/2024/01/Failovergroup-768x334.jpg 768w, https://blogit.create.pt/wp-content/uploads/2024/01/Failovergroup-696x303.jpg 696w" sizes="(max-width: 790px) 100vw, 790px" /></figure>
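<p>The key gotcha in the code above &#8211; re-adding the existing databases before adding the new one &#8211; can be sketched as pure logic (illustrative Python, not the post&#8217;s actual C#; the database IDs are placeholders):</p>

```python
def failover_group_databases(existing_db_ids: list, new_db_id: str) -> list:
    """Rebuild the database list for a failover group update.

    The failover group is retrieved with an empty database list, so the
    update must re-send every existing database ID plus the new one;
    sending only the new ID would leave the group with a single database.
    """
    databases = list(existing_db_ids)   # keep the databases already in the group
    if new_db_id not in databases:      # don't duplicate an already-added database
        databases.append(new_db_id)
    return databases
```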



<h2 class="wp-block-heading">Other Articles</h2>



<p>To learn why your business should migrate to SharePoint Online and Office 365, click <a href="https://blogit.create.pt////miguelisidoro/2019/07/29/why-your-business-should-migrate-to-sharepoint-online-and-office-365-the-value-offer-part-1/" target="_blank" rel="noreferrer noopener">here</a> and <a href="https://blogit.create.pt////miguelisidoro/2019/07/29/why-your-business-should-migrate-to-sharepoint-online-and-office-365-the-value-offer-part-2/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>If you want to learn how to develop SPFx solutions, click <a href="https://blogit.create.pt/miguelisidoro/2022/05/09/sharepoint-framework-spfx-learning-guide/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>If you want to learn how you can rename a modern SharePoint site, click <a href="https://blogit.create.pt////miguelisidoro/2019/09/23/how-to-rename-a-modern-sharepoint-site-url-in-office-365/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>If you want to learn how to save time scheduling your meetings, click&nbsp;<a href="https://blogit.create.pt////miguelisidoro/2020/04/12/save-time-scheduling-microsoft-teams-meetings-using-findtime/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>If you want to learn how to enable Microsoft Teams Attendance List Download, click&nbsp;<a href="https://blogit.create.pt////miguelisidoro/2020/09/20/how-to-enable-teams-meeting-attendance-list-download-in-microsoft-365/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>If you want to learn how to create a dynamic org-wide team in Microsoft Teams with all active employees, click&nbsp;<a href="https://blogit.create.pt/miguelisidoro/2020/09/21/how-to-create-a-dynamic-team-in-microsoft-teams-with-all-active-employees-in-microsoft-365/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>If you want to modernize your SharePoint classic root site to a modern SharePoint site, click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/08/27/how-to-modernize-your-tenant-root-site-collection-in-office-365-using-invoke-spositeswap/" target="_blank">here</a>.</p>



<p>If you are a SharePoint administrator or a SharePoint developer who wants to learn more about how to install a SharePoint 2019 farm in an automated way using PowerShell, I invite you to click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2018/12/09/how-to-install-a-sharepoint-2019-farm-using-powershell-and-autospinstaller-part-1/" target="_blank">here</a>&nbsp;and&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2018/12/09/how-to-install-a-sharepoint-2019-farm-using-powershell-and-autospinstaller-part-2/" target="_blank">here</a>.</p>



<p>If you want to learn how to greatly speed up your SharePoint farm update process &#8211; keeping your farm updated and staying one step closer to starting your move to the cloud &#8211; click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/05/02/how-to-speed-up-the-installation-of-sharepoint-cumulative-updates-using-powershell-step-by-step/" target="_blank">here</a>.</p>



<p>If you prefer to use the traditional method to update your farm and want to learn all the steps and precautions necessary to successfully keep your SharePoint farm updated, click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/04/08/how-to-install-sharepoint-cumulative-updates-in-a-sharepoint-farm-step-by-step/" target="_blank">here</a>.</p>



<p>If you want to learn how to upgrade a SharePoint 2013 farm to SharePoint 2019, click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/03/06/how-to-upgrade-from-sharepoint-2013-to-sharepoint-2019-step-by-step-part-1/" target="_blank">here&nbsp;</a>and&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/03/06/how-to-upgrade-from-sharepoint-2013-to-sharepoint-2019-step-by-step-part-2/" target="_blank">here</a>.</p>



<p>If SharePoint 2019 is still not an option and you want to learn how to install a SharePoint 2016 farm in an automated way using PowerShell, click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2018/07/28/how-to-install-a-sharepoint-2016-farm-using-powershell-and-autospinstaller-part-1/" target="_blank">here</a>&nbsp;and&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2018/07/28/how-to-install-a-sharepoint-2016-farm-using-powershell-and-autospinstaller-part-2/" target="_blank">here</a>.</p>



<p>If you want to learn how to upgrade a SharePoint 2010 farm to SharePoint 2016, click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/02/04/sharepoint-upgrade-upgrading-a-sharepoint-2010-farm-to-sharepoint-2016-step-by-step-part-1/" target="_blank">here&nbsp;</a>and&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/02/04/sharepoint-upgrade-upgrading-a-sharepoint-2010-farm-to-sharepoint-2016-step-by-step-part-2/" target="_blank">here</a>.</p>



<p>If you are new to SharePoint and Office 365 and want to learn all about it, take a look at these&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2018/10/17/sharepoint-and-office-365-learning-resources/" target="_blank">learning resources</a>.</p>



<p>If you work in a large organization that is using Office 365 or thinking of moving to Office 365, and you are weighing a single Office 365 tenant against multiple tenants, I invite you to read&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2019/01/07/pros-and-cons-of-single-tenant-vs-multiple-tenants-in-office-365/" target="_blank">this article</a>.</p>



<p>If you want to know all about the latest SharePoint and Office 365 announcements from Ignite and some more recent announcements, including Microsoft Search, What’s New to Build a Modern Intranet with SharePoint in Office 365, Deeper Integration between Microsoft Teams and SharePoint and the latest news on SharePoint development, click&nbsp;<a rel="noreferrer noopener" href="https://blogit.create.pt////miguelisidoro/2018/11/21/whats-new-for-sharepoint-and-office-365-after-microsoft-ignite-2018/" target="_blank">here</a>.</p>



<p>If your organization is still not ready to go all in on SharePoint Online and Office 365, a hybrid scenario may be the best choice. SharePoint 2019 RTM was recently announced, and if you want to learn all about SharePoint 2019 and its features, click <a href="https://blogit.create.pt////miguelisidoro/2018/11/01/meet-the-new-modern-sharepoint-server-sharepoint-2019-rtm-is-here/" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>Happy Coding!</p>
<p>The post <a href="https://blogit.create.pt/miguelisidoro/2024/01/24/provision-a-database-programmatically-in-azure-sql-database-with-a-failover-group/">Provision a database programmatically in Azure SQL database with a failover group</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/miguelisidoro/2024/01/24/provision-a-database-programmatically-in-azure-sql-database-with-a-failover-group/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Performing broker migrations on your Kafka consumers</title>
		<link>https://blogit.create.pt/joaorafael/2023/03/15/kafka-consumer-broker-migration-aws-msk/</link>
					<comments>https://blogit.create.pt/joaorafael/2023/03/15/kafka-consumer-broker-migration-aws-msk/#respond</comments>
		
		<dc:creator><![CDATA[João Rafael]]></dc:creator>
		<pubDate>Wed, 15 Mar 2023 12:44:45 +0000</pubDate>
				<category><![CDATA[Deployment]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Tips and Tricks]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Kafka]]></category>
		<category><![CDATA[migrations]]></category>
		<category><![CDATA[MSK]]></category>
		<category><![CDATA[Streaming]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12805</guid>

					<description><![CDATA[<p>When using Kafka for communication between systems, you may find yourself having to change your consumer configuration. This may happen due to the publisher changing topic, or a new message broker being created and a company-wide mandate that everyone switch by a set date. Doing this sort of migration isn&#8217;t very complicated, but any errors [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/joaorafael/2023/03/15/kafka-consumer-broker-migration-aws-msk/">Performing broker migrations on your Kafka consumers</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When using Kafka for communication between systems, you may find yourself having to change your consumer configuration. This may happen because the publisher changed topics, or because a new message broker was created and a company-wide mandate requires everyone to switch by a set date. This sort of migration isn&#8217;t very complicated, but any errors may prove costly: if we re-consume a new topic from the beginning, for example, we may accidentally fill up our database. We want to start consuming the new topic at around the same time we stopped consuming the old one (preferably with a buffer of a few minutes, just to be safe).</p>



<p>The first step here is to ensure that your consumer code is idempotent &#8211; but that&#8217;s something you should always ensure anyway: if you ever have to re-consume a couple of messages, a lack of idempotency may leave your system with strange data and bugs.</p>
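<p>As a minimal sketch of what idempotent consumption means (illustrative Python, not tied to any particular Kafka client): keying writes by a stable message identifier makes a replay a harmless overwrite rather than a duplicate insert.</p>

```python
def handle_message(store: dict, message: dict) -> None:
    """Idempotent handler: processing the same message twice leaves the
    store in the same state, because writes are keyed by the message id
    (an upsert) instead of being appended blindly."""
    store[message["id"]] = message["payload"]
```

The same principle applies whatever the storage is: a replayed message must map onto the same row, document, or key as its first delivery.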



<h2 class="wp-block-heading">Setting up Kafka on your machine</h2>



<p><em>(Feel free to skip this section if you have everything set up.)</em></p>



<p>Kafka releases aren&#8217;t included in most package managers, Homebrew on macOS being the main exception. They are available, however, on the <a href="https://kafka.apache.org/downloads" target="_blank" rel="noreferrer noopener">official download page</a>. For Unix systems (macOS, Linux, WSL, *BSD), simply get one of the binary downloads and extract it wherever you prefer. Then add the bin folder to your $PATH, so that you can use these scripts in any folder. In my case, I put the extracted archive in $HOME/.local/share/kafka, so I edit my .zshrc (or .bashrc on a plainer Linux/WSL system) and add:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
export PATH=$HOME/.local/share/kafka/bin:$PATH
</pre></div>


<p>Now we can use the official Kafka scripts in any directory &#8211; that is, unless we have some form of authentication. In that case, we can add the additional configs to a file and include that file in our commands. Let&#8217;s put a file called <code>dev.cfg</code> in <code>$HOME/.config/kafka/</code> (the exact directory doesn&#8217;t matter, but .config is the appropriate directory for configs) and write the following in it:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username=&quot;me&quot; \
    password=&quot;noneofyourbusiness&quot;;
</pre></div>


<p>This is just an example, your broker&#8217;s security config may vary. After doing this, just add to any of the Kafka commands you use:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
--command-config $HOME/.config/kafka/dev.cfg
</pre></div>


<h2 class="wp-block-heading">Checking consumer offsets</h2>



<p>Before migrating topics, it&#8217;s best to make sure that the old topic&#8217;s consumer is fully up to date. We can use this command to check the consumption lag:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
kafka-consumer-groups.sh \
	--bootstrap-server=&#039;&lt;your broker endpoints, separated by commas&gt;&#039; \
	--describe \
	--group=&#039;&lt;your consumer group&gt;&#039;

</pre></div>


<p>This should yield something like the output below:</p>



<pre class="wp-block-code"><code>TOPIC      PARTITION   NEWEST_OFFSET   OLDEST_OFFSET     CONSUMER_OFFSET     LEAD         LAG
your-topic 0           30000000        0                 30000000            30000000     0
your-topic 1           23900000        0                 23900000            23900000     0

CLIENT_HOST      CLIENT_ID         TOPIC          ASSIGNED_PARTITIONS
/128.0.0.0       your-consumer     your-topic     0
/128.0.0.1       your-consumer     your-topic     1</code></pre>



<p>From this we can see that there is zero lag, since the newest offset is equal to the consumer offset on both partitions. We can also see that our consumers are still running (they&#8217;re the clients with ID &#8220;your-consumer&#8221;).</p>
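<p>With many partitions, eyeballing the table gets tedious; a small script can assert zero lag before you proceed. This sketch assumes the exact column layout shown above, which may vary between tool versions:</p>

```python
def max_lag(describe_output: str) -> int:
    """Return the largest LAG value from the offsets table printed by
    kafka-consumer-groups.sh --describe (column layout as shown above)."""
    lags = []
    for line in describe_output.splitlines():
        parts = line.split()
        # Offset rows have 7 columns ending in a numeric LAG value; the
        # header row and the client/assignment rows don't match this.
        if len(parts) == 7 and parts[-1].isdigit():
            lags.append(int(parts[-1]))
    return max(lags, default=0)
```

Pipe the describe output into this (or an equivalent awk one-liner) and refuse to continue the migration unless it returns 0.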



<h2 class="wp-block-heading">Performing the Kafka consumer migration</h2>



<p>Before switching topics or brokers, we should first create a consumer group for the new configuration. This will allow us to change the offsets <em>before</em> beginning the message consumption proper. The easiest way to do this is to use <a href="https://github.com/edenhill/kcat"><code>kcat</code></a>, a simple command-line tool that allows us to consume and produce Kafka messages. To create a consumer group, you must consume messages, so it&#8217;s a matter of setting up <code>kcat</code> (check the install guide on the kcat repo) and consuming a few messages from the new topic:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
kcat \
	-C \
	-b &#039;&lt;your brokers&gt;&#039; \
	-t new-topic \
	-G new-consumer-group

# If you have authentication, just add: 

	-X security.protocol=SASL_SSL \
	-X sasl.mechanisms=SCRAM-SHA-512 \
	-X sasl.username=&quot;me&quot; \
	-X sasl.password=&quot;noneofyourbusiness&quot;

</pre></div>


<p>Just run this for a second, check that messages come out, and then run the describe command for the new group:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
kafka-consumer-groups.sh \
	--bootstrap-server=&#039;new-broker&#039; \
	--describe \
	--group=&#039;new-group&#039;


TOPIC     PARTITION   NEWEST_OFFSET   OLDEST_OFFSET     CONSUMER_OFFSET     LEAD         LAG
new-topic 0           12387000        0                 12                  12           12386988
new-topic 1           12360000        0                 38                  38           12359962

CLIENT_HOST      CLIENT_ID         TOPIC          ASSIGNED_PARTITIONS
</pre></div>


<p>We have huge lag here, but that&#8217;s fine, as we&#8217;re going to reset the offsets anyway. Stop the workers consuming from the old topic by whatever means you have, and note down the time at which they stopped. We will now reset the consumer group offsets for this new group. One note: in this case we don&#8217;t have any consumers connected to the new group, but if we did, we&#8217;d also have to turn them off &#8211; you <strong>cannot</strong> reset a consumer group&#8217;s offsets while clients are connected to it.</p>



<p>To reset the offsets, just run the following command (here we&#8217;ll assume that we stopped the previous consumers at 10:00 on March 15th, 2023, and subtract ten minutes just to be safe):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
kafka-consumer-groups.sh \
	--bootstrap-server=&#039;new-brokers&#039; \
	--group=&#039;new-group&#039; \
	--topic=&#039;new-topic&#039; \
	--reset-offsets \
	--to-datetime 2023-03-15T09:50:00.000 \
	--dry-run
</pre></div>
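<p>The timestamp passed to <code>--to-datetime</code> can be computed rather than worked out by hand; a tiny helper (illustrative, not part of the Kafka tooling) subtracts the safety buffer:</p>

```python
from datetime import datetime, timedelta

def reset_datetime(stopped_at: str, buffer_minutes: int = 10) -> str:
    """Given the time the old consumers stopped (ISO 8601, e.g.
    '2023-03-15T10:00:00'), subtract a safety buffer and format the
    result the way --to-datetime expects it."""
    stopped = datetime.fromisoformat(stopped_at)
    target = stopped - timedelta(minutes=buffer_minutes)
    return target.strftime("%Y-%m-%dT%H:%M:%S.000")
```

Note that the tool interprets the datetime in a fixed timezone context, so make sure the time you note down and the time you pass in agree on that.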


<p>Running this dry run will not make any changes; it will only show you the new offsets for the consumer group, e.g.:</p>



<pre class="wp-block-code"><code>TOPIC     PARTITION   NEWEST_OFFSET   OLDEST_OFFSET     CONSUMER_OFFSET     LEAD         LAG
new-topic 0           12387000        0                 12386998            12386998     2
new-topic 1           12360000        0                 12359951            12359951     49

CLIENT_HOST      CLIENT_ID         TOPIC          ASSIGNED_PARTITIONS</code></pre>



<p>If nothing looks strange to you, go ahead: substitute <code>--dry-run</code> with <code>--execute</code> and run the command again. Afterwards, deploy the new consumer configuration, turn the worker on, and let it run.</p>
<p>The post <a href="https://blogit.create.pt/joaorafael/2023/03/15/kafka-consumer-broker-migration-aws-msk/">Performing broker migrations on your Kafka consumers</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/joaorafael/2023/03/15/kafka-consumer-broker-migration-aws-msk/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Entity Framework &#8211; Update Database using Azure Pipelines</title>
		<link>https://blogit.create.pt/vini/2022/11/07/entity-framework-update-database-using-azure-pipelines/</link>
					<comments>https://blogit.create.pt/vini/2022/11/07/entity-framework-update-database-using-azure-pipelines/#respond</comments>
		
		<dc:creator><![CDATA[Vinícius Biavatti]]></dc:creator>
		<pubDate>Mon, 07 Nov 2022 11:18:06 +0000</pubDate>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Deployment]]></category>
		<category><![CDATA[Databases]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[entity framework]]></category>
		<category><![CDATA[migrations]]></category>
		<category><![CDATA[pipelines]]></category>
		<category><![CDATA[sql database]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12770</guid>

					<description><![CDATA[<p>Introduction Pipelines give us the opportunity to create automated tasks for development operations, like deploys, migrations, tests, etc. Focusing on the deploy routine, some applications need other operations to ensure consistency between the application and other resources, like the database. During the development process, the developers usually customize and improve the [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/vini/2022/11/07/entity-framework-update-database-using-azure-pipelines/">Entity Framework &#8211; Update Database using Azure Pipelines</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>Pipelines give us the opportunity to create automated tasks for development operations, like deploys, migrations, and tests. Focusing on the deploy routine, some applications need additional operations to keep the application consistent with other resources, like the database.</p>



<p>During the development process, developers usually customize and improve the application database within the development environment. This can be done easily using frameworks that provide routines to create source files which keep the history of changes and are used to bring the database to a given version. These files are known as migrations.</p>



<p>In this post, we will see how to apply migrations to databases located in different environments (staging, production, etc.), using the Code-First approach and Entity Framework Core.</p>



<h2 class="wp-block-heading">Migrations</h2>



<p>To generate database migrations, we can execute the Entity Framework &#8220;Add-Migration &lt;name&gt;&#8221; command. It generates source files containing the code to apply the migration to the database. To execute these migration files, you just need to run the &#8220;Update-Database&#8221; command. However,&nbsp;<a href="https://learn.microsoft.com/en-us/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#sql-scripts" target="_blank" rel="noreferrer noopener">Microsoft recommends</a>&nbsp;generating a SQL script instead and executing it directly against the database, without going through intermediary software (C# + EF). To do this, we will follow the steps below:</p>



<ol class="wp-block-list">
<li>Create a pipeline in Azure DevOps;</li>



<li>Install the dotnet-ef tool;</li>



<li>Generate the SQL script;</li>



<li>Execute the SQL script against the database;</li>



<li>Publish the script file as a pipeline artifact.</li>
</ol>



<p><em><strong>NOTE</strong>: We don&#8217;t need to worry about which migrations need to be applied to the database; the dotnet-ef tool generates a script containing the needed logic. Check the third step for more details.</em></p>



<h2 class="wp-block-heading">1. Create a Pipeline in Azure DevOps</h2>



<p>We will create a common deployment pipeline in Azure DevOps for ASP.NET Core backend applications. Check the end of the post for the full pipeline.</p>



<h2 class="wp-block-heading">2. Install the dotnet-ef tool</h2>



<p>To execute Entity Framework tool commands, we have to install a package named dotnet-ef. It needs to be available in the pipeline&#8217;s context to be used for the SQL script generation. To install the tool from a pipeline task, we can use the task below:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
- task: DotNetCoreCLI@2
  displayName: &#039;Install EF tool&#039;
  inputs:
    command: &#039;custom&#039;
    custom: &#039;tool&#039;
    arguments: &#039;install --global dotnet-ef&#039;
</pre></div>


<h2 class="wp-block-heading">3. Generate SQL script</h2>



<p>To generate the SQL script containing the logic to apply migrations, we will use the dotnet-ef tool. By default, the command does not generate the logic to check which migrations should be applied, so we have to enable this behavior using the <code>--idempotent</code> parameter. With it, we don&#8217;t need to worry about which migrations will be applied to the database, since the generated script already handles that. Below, you can see the task that could be used to generate the script file:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
- task: DotNetCoreCLI@2
  displayName: &#039;Generate SQL Script&#039;
  inputs:
    command: &#039;custom&#039;
    custom: &#039;ef&#039;
    arguments: &#039;migrations script --idempotent --project $(Build.SourcesDirectory)\Data.csproj --output $(System.ArtifactsDirectory)/script.sql&#039;
</pre></div>


<p><em><strong>NOTE</strong>: We don&#8217;t need to establish any database connection for the script generation: since we follow the Code-First approach for entity creation, the command only looks at the project&#8217;s entity source code to generate the SQL file. If we don&#8217;t use the <code>--idempotent</code> argument, the generated SQL will lack the conditional logic that determines which migrations should be applied, causing an error if the database already exists.</em></p>
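<p>Conceptually, the idempotent script just guards each migration with a check against the <code>__EFMigrationsHistory</code> table. The selection logic it encodes boils down to something like this (illustrative Python, not the generated T-SQL):</p>

```python
def pending_migrations(all_migrations: list, applied: set) -> list:
    """Keep only the migrations whose IDs are not yet recorded in the
    __EFMigrationsHistory table, preserving their original order."""
    return [m for m in all_migrations if m not in applied]
```

This is why the same script can be run safely against an empty database, a partially migrated one, or one that is already up to date.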



<h2 class="wp-block-heading">4. Execute SQL script to the database</h2>



<p>Now, with the SQL script available, we just need to execute it against the database. In this post, we will use a common Azure SQL Database as an example, so we can use the following task:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
- task: SqlAzureDacpacDeployment@1
  displayName: &#039;Update Database&#039;
  inputs:
    azureSubscription: &#039;&lt;Service Connection Identifier&gt;&#039;
    AuthenticationType: &#039;connectionString&#039; # You can use another method to authenticate to the database if you want
    ConnectionString: &#039;&lt;Database Connection String&gt;&#039;
    deployType: &#039;SqlTask&#039;
    SqlFile: &#039;$(System.ArtifactsDirectory)/script.sql&#039;
</pre></div>


<h2 class="wp-block-heading">5. Script File as a Pipeline Artifact</h2>



<p>The last task makes the generated script.sql file available to the pipeline user by adding it to the file bundle (artifacts). To check the artifacts, just open the artifacts option in the pipeline details after the job&#8217;s execution.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
- task: PublishBuildArtifacts@1
  displayName: &#039;Publish Artifacts&#039;
  inputs:
    PathtoPublish: &#039;$(System.ArtifactsDirectory)/script.sql&#039;
    ArtifactName: &#039;$(artifactName)&#039;
    publishLocation: &#039;Container&#039;
</pre></div>


<h2 class="wp-block-heading">Conclusion</h2>



<p>In this post, we saw an easy way to apply database migrations to a remote environment using pipeline automation. This method is simple and does not perform any database validation before script execution, so if you have an active and sensitive environment, make sure the needed validations are in place before the database update, to ensure the database will not be updated incorrectly.</p>



<h2 class="wp-block-heading">Pipeline Result</h2>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
trigger:
- none

pool:
  vmImage: windows-latest

variables:
  solution: &#039;**/*.sln&#039;
  buildPlatform: &#039;AnyCPU&#039;
  buildConfiguration: &#039;Release&#039;
  artifactName: &#039;artifacts&#039;
  environment: &#039;Development&#039;

steps:
- task: NuGetToolInstaller@1
  displayName: &#039;NuGet Installation&#039;

- task: NuGetCommand@2
  displayName: &#039;NuGet Restore&#039;
  inputs:
    restoreSolution: &#039;$(solution)&#039;

- task: VSBuild@1
  displayName: &#039;Build Project&#039;
  inputs:
    solution: &#039;backend/WebAPI&#039;
    msbuildArgs: &#039;/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation=&quot;$(build.artifactStagingDirectory)&quot;&#039;
    platform: &#039;$(buildPlatform)&#039;
    configuration: &#039;$(buildConfiguration)&#039;

- task: PublishSymbols@2
  displayName: &#039;Publish Symbols&#039;
  inputs:
    SearchPattern: &#039;**/bin/**/*.pdb&#039;
    SymbolServerType: &#039;FileShare&#039;
    CompressSymbols: false

- task: PublishBuildArtifacts@1
  displayName: &#039;Publish Artifacts&#039;
  inputs:
    PathtoPublish: &#039;$(Build.ArtifactStagingDirectory)&#039;
    ArtifactName: &#039;artifacts&#039;
    publishLocation: &#039;Container&#039;

- task: DownloadBuildArtifacts@0
  displayName: &#039;Download Artifacts&#039;
  inputs:
    buildType: &#039;current&#039;
    downloadType: &#039;single&#039;
    artifactName: &#039;$(artifactName)&#039;
    downloadPath: &#039;$(System.ArtifactsDirectory)&#039;

- task: DotNetCoreCLI@2
  displayName: &#039;Install EF tool&#039;
  inputs:
    command: &#039;custom&#039;
    custom: &#039;tool&#039;
    arguments: &#039;install --global dotnet-ef&#039;

- task: DotNetCoreCLI@2
  displayName: &#039;Generate SQL Script&#039;
  inputs:
    command: &#039;custom&#039;
    custom: &#039;ef&#039;
    arguments: &#039;migrations script --idempotent --project $(Build.SourcesDirectory)\Data.csproj --output $(System.ArtifactsDirectory)/script.sql&#039;

- task: SqlAzureDacpacDeployment@1
  displayName: &#039;Update Database&#039;
  inputs:
    azureSubscription: &#039;&lt;Service Connection Identifier&gt;&#039;
    AuthenticationType: &#039;connectionString&#039;
    ConnectionString: &#039;&lt;Connection String&gt;&#039;
    deployType: &#039;SqlTask&#039;
    SqlFile: &#039;$(System.ArtifactsDirectory)/script.sql&#039;

- task: AzureRmWebAppDeployment@4
  displayName: &#039;Deploy Application&#039;
  inputs:
    ConnectionType: &#039;AzureRM&#039;
    azureSubscription: &#039;&lt;Service Connection Identifier&gt;&#039;
    appType: &#039;webApp&#039;
    WebAppName: &#039;armazemlegalapi&#039;
    deployToSlotOrASE: true
    ResourceGroupName: &#039;&lt;Resource Group Identifier&gt;&#039;
    SlotName: &#039;production&#039;
    packageForLinux: &#039;$(System.ArtifactsDirectory)/Project.zip&#039;

- task: PublishBuildArtifacts@1
  displayName: &#039;Publish Artifacts&#039;
  inputs:
    PathtoPublish: &#039;$(System.ArtifactsDirectory)/script.sql&#039;
    ArtifactName: &#039;$(artifactName)&#039;
    publishLocation: &#039;Container&#039;
</pre></div>


<h2 class="wp-block-heading">References</h2>



<ul class="wp-block-list">
<li><a href="https://learn.microsoft.com/en-us/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#sql-scripts">Microsoft Managing Schemas Article</a></li>
</ul>
<p>The post <a href="https://blogit.create.pt/vini/2022/11/07/entity-framework-update-database-using-azure-pipelines/">Entity Framework &#8211; Update Database using Azure Pipelines</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/vini/2022/11/07/entity-framework-update-database-using-azure-pipelines/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Smaller .NET 6 docker images</title>
		<link>https://blogit.create.pt/telmorodrigues/2022/03/08/smaller-net-6-docker-images/</link>
					<comments>https://blogit.create.pt/telmorodrigues/2022/03/08/smaller-net-6-docker-images/#respond</comments>
		
		<dc:creator><![CDATA[Telmo Rodrigues]]></dc:creator>
		<pubDate>Tue, 08 Mar 2022 12:07:30 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[docker]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12634</guid>

					<description><![CDATA[<p>Introduction This post compares different strategies to dockerize a .NET 6 application and how to create a &#60; 100mb docker image to host a .NET 6 asp.net web application. Using the docker multi stage builds feature and a self-contained .NET with some build options the final docker image can be reduced from 760mb to 83mb, [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/telmorodrigues/2022/03/08/smaller-net-6-docker-images/">Smaller .NET 6 docker images</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>This post compares different strategies to dockerize a .NET 6 application and shows how to create a &lt;100 MB Docker image to host a .NET 6 ASP.NET web application. Using the Docker multi-stage builds feature and a self-contained .NET build with some extra options, the final Docker image can be reduced from 761 MB to 83 MB, almost 10x smaller!</p>



<p>All the images used were taken from the official Microsoft .NET docker <a href="https://hub.docker.com/_/microsoft-dotnet">repository</a>.</p>



<p></p>



<h2 class="wp-block-heading">1. .NET 6 SDK image</h2>



<p>Let&#8217;s start with a simple Dockerfile based on the official .NET 6 SDK build image.</p>



<pre class="wp-block-code"><code>FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /app

COPY . ./

RUN dotnet publish "WebApi.csproj" -c Release -o /app/publish

ENTRYPOINT &#091;"dotnet", "/app/publish/WebApi.dll"]</code></pre>



<pre class="wp-block-code"><code>$ docker build -t api-aspnet .
$ docker images api-aspnet
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
api-aspnet   latest    700870327c08   1 weeks ago   761MB
</code></pre>



<p>We are using a single-stage Docker build based on the .NET SDK image. After building, we can see that the final image size is 761 MB. Let&#8217;s check where the space is being used: <code>docker history</code> shows the layers created for this image.</p>



<pre class="wp-block-code"><code>$ docker history api-aspnet
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
700870327c08   1 weeks ago    ENTRYPOINT &#091;"dotnet" "/app/publish/WebApi.dl…   0B        buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    RUN /bin/sh -c dotnet publish "WebApi.csproj…   4.18MB    buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    RUN /bin/sh -c dotnet build -c Release # bui…   12.2MB    buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    COPY . ./ # buildkit                            721kB     buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    RUN /bin/sh -c dotnet restore # buildkit        28.2MB    buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    COPY WebApi.csproj ./ # buildkit                327B      buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    WORKDIR /app                                    0B        buildkit.dockerfile.v0
&lt;missing&gt;      2 months ago   /bin/sh -c powershell_version=7.2.1     &amp;&amp; c…   40.6MB
&lt;missing&gt;      2 months ago   /bin/sh -c curl -fSL --output dotnet.tar.gz …   392MB
&lt;missing&gt;      2 months ago   /bin/sh -c apt-get update     &amp;&amp; apt-get ins…   74.8MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV ASPNETCORE_URLS= DOTN…   0B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop) COPY dir:a54b266469a09b122…   20.3MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV ASPNET_VERSION=6.0.1 …   0B
&lt;missing&gt;      2 months ago   /bin/sh -c ln -s /usr/share/dotnet/dotnet /u…   24B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop) COPY dir:6c537cc098876a5f6…   70.6MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV DOTNET_VERSION=6.0.1     0B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV ASPNETCORE_URLS=http:…   0B
&lt;missing&gt;      2 months ago   /bin/sh -c apt-get update     &amp;&amp; apt-get ins…   37MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  CMD &#091;"bash"]                 0B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop) ADD file:09675d11695f65c55…   80.4MB
</code></pre>



<figure class="wp-block-table is-style-regular"><table><tbody><tr><td>base image</td><td>15%</td></tr><tr><td>.NET SDK + runtime</td><td>60%</td></tr><tr><td>Linux libs</td><td>15%</td></tr><tr><td>Web App source + compiled code</td><td>10%</td></tr></tbody></table></figure>



<p>The .NET SDK + runtime takes most of the space, around 60%. So let&#8217;s try to shrink it.</p>



<p></p>



<h2 class="wp-block-heading">2. .NET 6 SDK image + docker multi stage build</h2>



<p>The previous section showed us that the .NET SDK + runtime takes most of the space of the final image. This is because we use the same Docker base image to build and to run the app, so the final image contains everything needed to compile the app as well as to run it. Using a Docker multi-stage build, we can cut the .NET SDK from the final image: the SDK is used only to build our app, and the final image includes only the .NET runtime needed to run the application.</p>



<pre class="wp-block-code"><code>FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /app

COPY . ./

RUN dotnet publish "WebApi.csproj" -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime

WORKDIR /app
COPY --from=build /app/publish .

ENTRYPOINT &#091;"dotnet", "WebApi.dll"]</code></pre>



<p>Here we define two stages: the <code>build</code> stage, which includes everything needed to compile our .NET 6 application, and the final <code>runtime</code> stage, which only includes the .NET runtime needed to run our app. With multi-stage builds, you use multiple&nbsp;<code>FROM</code>&nbsp;statements in your Dockerfile. Each&nbsp;<code>FROM</code>&nbsp;instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don&#8217;t want in the final image.</p>



<pre class="wp-block-code"><code>$ docker build -t api-aspnet-multistage .
$ docker images api-aspnet-multistage
REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
api-aspnet-multistage   latest    1021a94feca4   1 weeks ago   212MB
</code></pre>



<p>Bang, we reduced our image to 212 MB, because the complete SDK was removed from the final image.<br>Let&#8217;s look again at the layers and the space used by each one:</p>



<pre class="wp-block-code"><code>$ docker history api-aspnet-multistage
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
1021a94feca4   1 weeks ago    ENTRYPOINT &#091;"dotnet" "WebApi.dll"]              0B        buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    COPY /app/publish . # buildkit                  4.18MB    buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    WORKDIR /app                                    0B        buildkit.dockerfile.v0
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop) COPY dir:a54b266469a09b122…   20.3MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV ASPNET_VERSION=6.0.1 …   0B
&lt;missing&gt;      2 months ago   /bin/sh -c ln -s /usr/share/dotnet/dotnet /u…   24B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop) COPY dir:6c537cc098876a5f6…   70.6MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV DOTNET_VERSION=6.0.1     0B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV ASPNETCORE_URLS=http:…   0B
&lt;missing&gt;      2 months ago   /bin/sh -c apt-get update     &amp;&amp; apt-get ins…   37MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  CMD &#091;"bash"]                 0B
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop) ADD file:09675d11695f65c55…   80.4MB
</code></pre>



<figure class="wp-block-table is-style-regular"><table><tbody><tr><td>base image</td><td>50%</td></tr><tr><td>.NET runtime</td><td>40%</td></tr><tr><td>Web App source + compiled code</td><td>10%</td></tr></tbody></table></figure>



<p>Let&#8217;s try to reduce the biggest part, the base image.</p>



<p></p>



<h2 class="wp-block-heading">3. Multi stage build with Alpine linux Microsoft official image</h2>



<p>We can change our base image to use Alpine Linux instead of the default Debian bullseye, which is the base image of most of the official Microsoft .NET Docker images.</p>



<p></p>



<pre class="wp-block-code"><code>FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /app

COPY . ./
RUN dotnet publish "WebApi.csproj" -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine

WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT &#091;"dotnet", "WebApi.dll"]</code></pre>



<pre class="wp-block-code"><code>docker build -t api-multi-alpine .
docker images api-multi-alpine
REPOSITORY         TAG       IMAGE ID       CREATED       SIZE
api-multi-alpine   latest    0c7f1c3b0bfa   1 weeks ago   104MB
</code></pre>



<pre class="wp-block-code"><code>$ docker history api-multi-alpine
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
0c7f1c3b0bfa   1 weeks ago    ENTRYPOINT &#091;"dotnet" "WebApi.dll"]              0B        buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    COPY /app/publish . # buildkit                  4.12MB    buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    WORKDIR /app                                    0B        buildkit.dockerfile.v0
&lt;missing&gt;      2 months ago   /bin/sh -c wget -O aspnetcore.tar.gz https:/…   20.3MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV ASPNET_VERSION=6.0.1 …   0B
&lt;missing&gt;      2 months ago   /bin/sh -c wget -O dotnet.tar.gz https://dot…   69.8MB
&lt;missing&gt;      2 months ago   /bin/sh -c #(nop)  ENV DOTNET_VERSION=6.0.1     0B
&lt;missing&gt;      3 months ago   /bin/sh -c #(nop)  ENV ASPNETCORE_URLS=http:…   0B
&lt;missing&gt;      3 months ago   /bin/sh -c apk add --no-cache         ca-cer…   4.35MB
&lt;missing&gt;      3 months ago   /bin/sh -c #(nop)  CMD &#091;"/bin/sh"]              0B
&lt;missing&gt;      3 months ago   /bin/sh -c #(nop) ADD file:762c899ec0505d1a3…   5.61MB
</code></pre>



<figure class="wp-block-table is-style-regular"><table><tbody><tr><td>base image</td><td>10%</td></tr><tr><td>.NET runtime</td><td>80%</td></tr><tr><td>Web App source + compiled code</td><td>10%</td></tr></tbody></table></figure>



<p>Using the Alpine base image instead of the Debian one, we were able to reduce the base image to 10% of our final image size.<br>Now the .NET runtime is the part using most of the space, so let&#8217;s try to reduce that next.</p>



<p></p>



<h2 class="wp-block-heading">4. Raw Alpine with a self-contained and trimmed .NET build</h2>



<p>Instead of using the official Microsoft Alpine image, which comes with the full .NET runtime, we can use the vanilla Alpine base image and build our app as a self-contained, single-file deployment using the <code>--self-contained</code> and <code>PublishSingleFile</code> options.<br>Publishing our app as self-contained produces an application that bundles the .NET runtime and libraries into a single binary, so we don&#8217;t have to rely on a .NET runtime installed on the base system.</p>



<p><br>We can also use the <code>PublishTrimmed</code> option while building and publishing our app. With this option the final binary includes only the subset of .NET framework assemblies that are referenced by our app; the others are removed from the final produced binary. However, there is a risk that the build-time analysis of the application can cause failures at run time, because various problematic code patterns (largely centered on reflection use) cannot be reliably analyzed. To mitigate these problems, warnings are produced whenever the trimmer cannot fully analyze a code pattern. For information on what the trim warnings mean and how to resolve them, see Microsoft&#8217;s Introduction to trim warnings.</p>



<p><br>With a self-contained build we also need to specify the target runtime for which the final binary will be compiled. In this case we use <code>alpine-x64</code>, because our base image is Alpine Linux.</p>



<pre class="wp-block-code"><code>FROM mcr.microsoft.com/dotnet/sdk:6.0-alpine AS build
WORKDIR /app

COPY . ./

RUN dotnet publish "WebApi.csproj" -c Release -o /app/publish \
    --runtime alpine-x64 \
    --self-contained true \
    /p:PublishTrimmed=true \
    /p:TrimMode=Link \
    /p:PublishSingleFile=true

FROM alpine:3.15

# native libraries required by the self-contained .NET binary
RUN apk add --no-cache libstdc++ libgcc icu-libs

WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT &#091;"./WebApi"]</code></pre>



<pre class="wp-block-code"><code>$ docker build -t api-multi-alpine-raw .
$ docker images api-multi-alpine-raw</code></pre>



<pre class="wp-block-code"><code>$ docker history api-multi-alpine-raw
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
ac78719fc09a   1 weeks ago    ENTRYPOINT &#091;"./WebApi"]                         0B        buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    COPY /app/publish . # buildkit                  42.2MB    buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    WORKDIR /app                                    0B        buildkit.dockerfile.v0
&lt;missing&gt;      1 weeks ago    RUN /bin/sh -c apk add --no-cache libstdc++ …   35.5MB    buildkit.dockerfile.v0
&lt;missing&gt;      3 months ago   /bin/sh -c #(nop)  CMD &#091;"/bin/sh"]              0B
&lt;missing&gt;      3 months ago   /bin/sh -c #(nop) ADD file:8f5bc5ce64ef781ad…   5.59MB
</code></pre>



<p>The final image was reduced to 83 MB, because it only contains the Alpine base image, the native libraries installed on top of it, and a single binary bundling our web app&#8217;s compiled code together with the framework assemblies it references!</p>
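Summing up the experiment, a quick sketch using the sizes reported by <code>docker images</code> in the sections above shows the overall reduction:

```shell
#!/bin/sh
# Final image sizes in MB, as reported by `docker images` in each section
sdk=761        # 1. single-stage SDK image
multistage=212 # 2. multi-stage, Debian-based runtime
alpine=104     # 3. multi-stage, Alpine runtime
raw=83         # 4. raw Alpine + self-contained trimmed build

# Integer reduction factor from the first to the last approach
echo "$(( sdk / raw ))x smaller"
```

Each step roughly halved the image, and the combination is close to an order of magnitude.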



<p></p>



<h2 class="wp-block-heading">References</h2>



<p><a href="https://docs.docker.com/develop/develop-images/multistage-build/">https://docs.docker.com/develop/develop-images/multistage-build/</a><br><a href="https://docs.microsoft.com/en-us/dotnet/core/deploying/">https://docs.microsoft.com/en-us/dotnet/core/deploying/</a><br><a href="https://docs.microsoft.com/en-us/dotnet/core/deploying/trimming/trim-self-contained">https://docs.microsoft.com/en-us/dotnet/core/deploying/trimming/trim-self-contained</a></p>



<p><a href="https://hub.docker.com/_/microsoft-dotnet">https://hub.docker.com/_/microsoft-dotnet</a></p>
<p>The post <a href="https://blogit.create.pt/telmorodrigues/2022/03/08/smaller-net-6-docker-images/">Smaller .NET 6 docker images</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/telmorodrigues/2022/03/08/smaller-net-6-docker-images/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Case Study: Azure Service Bus and Event-Driven Architectures</title>
		<link>https://blogit.create.pt/davidpereira/2021/06/02/case-study-azure-service-bus-and-event-driven-architectures/</link>
					<comments>https://blogit.create.pt/davidpereira/2021/06/02/case-study-azure-service-bus-and-event-driven-architectures/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira&nbsp;and&nbsp;Francisco Grilo]]></dc:creator>
		<pubDate>Wed, 02 Jun 2021 11:02:12 +0000</pubDate>
				<category><![CDATA[Azure Service Bus]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[EDA]]></category>
		<category><![CDATA[event-driven architecture]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12291</guid>

<description><![CDATA[<p>Introduction In this article we will talk about Event-Driven Architectures. We choose to use the Azure Cloud Infrastructure. Service Bus provides reliable, secure asynchronous messaging at scale. This article is written by the engineering team at CreateIT and it is intended to show you a case study in one of our projects for a client. We&#8217;ll take a deeper dive into the Service Bus technology, architecture, and design choices. The post will cover both conceptual material as well as implementation details. Most importantly, we will discuss design and implementation of some of the features that provide secure and reliable messaging at scale, while minimizing operational cost. Service Bus Entities When we are working with Azure Service Bus, we can choose two Entities: Topics or Queues. You can have multiple Topics or Queues per Service Bus Namespace, but firstly you need to differ one from another. If you want a FIFO queue and only have one Consumer, then Queues are the way to go. If you need multiple Consumers, then the Topic is the better option. In this specific case we will create a Subscription per Consumer (Topics are only available from the Standard Pricing Tier). Event-driven architectures Benefits with event-driven architectures What are the benefits of using a queue in the middle of these systems? We can decide to load balance the input from Customer Services. 
Let&#8217;s say there are a lot of updates being made to a customer, meaning a [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2021/06/02/case-study-azure-service-bus-and-event-driven-architectures/">Case Study: Azure Service Bus and Event-Driven Architectures</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="Introduction">Introduction</h2>



<p>In this article we will talk about Event-Driven Architectures, built on the Azure cloud infrastructure.<br>Service Bus provides reliable, secure asynchronous messaging at scale. This article is written by the engineering team at CreateIT and is intended to show you a case study from one of our client projects.</p>



<p>We&#8217;ll take a deeper dive into the Service Bus technology, architecture, and design choices. The post will cover both conceptual material as well as implementation details. Most importantly, we will discuss the design and implementation of some of the features that provide secure and reliable messaging at scale, while minimizing operational cost.</p>



<h4 class="wp-block-heading" id="Service-Bus-Entities">Service&nbsp;Bus&nbsp;Entities</h4>



<p>When we are working with Azure Service Bus, we can choose between two Entities: <strong>Topics</strong> or <strong>Queues</strong>. You can have multiple Topics or Queues per Service Bus Namespace, but first you need to understand how they differ. If you want a FIFO queue and only have one Consumer, then Queues are the way to go. If you need multiple Consumers, then a Topic is the better option. In that case we create a Subscription per Consumer (Topics are only available from the Standard pricing tier upwards).</p>



<h2 class="wp-block-heading" id="Event-Driven-Architectures">Event-driven architectures</h2>



<h3 class="wp-block-heading" id="Benefits-with-event-driven-architectures">Benefits&nbsp;with&nbsp;event-driven&nbsp;architectures</h3>



<p>What are the benefits of using a queue in the middle of these systems?</p>



<ul class="wp-block-list" style="max-width:991px;margin-top:0px;margin-bottom:0px"><li>We can decide to <strong>load balance the input</strong> from Customer Services. Let&#8217;s say there are a lot of updates being made to a customer, meaning a lot of events being published. We scale the number of consumers and use the <strong>competing consumers pattern</strong>.</li><li>We can <strong>throttle the input</strong>. If on Black Friday there are a ton of events and our Audit Log system is down, we simply store these events on the queue and consume them when the service is back online. Of course we&#8217;d need to implement some logic for this behavior, but adding this &#8220;middleware&#8221; buys us options.</li></ul>



<p>In our use case, we wanted to move to an implementation where the Web API doesn&#8217;t get affected by any changes on these external systems. But in order to change the implementation, we must first figure out what are the challenges associated with this change.</p>



<h3 class="wp-block-heading" id="Challenges-with-event-driven-architectures"><strong>Challenges&nbsp;with&nbsp;event-driven&nbsp;architectures</strong></h3>



<h4 class="wp-block-heading" id="Message/Event-order"><strong>Message/Event&nbsp;order</strong></h4>



<p>Azure Service Bus has a feature called <strong>sessions</strong>. A session provides a context to send and retrieve messages that will preserve ordered delivery. However, in our use case we chose not to use it.</p>



<h4 class="wp-block-heading" id="Message-Lock-Duration"><strong>Message&nbsp;Lock&nbsp;Duration</strong></h4>



<p>When we are using Queues, every message has a Lock Duration; during this time the consumer needs to process it. But if the consumer needs to contact multiple external systems, processing time may grow and our messages could end up in the dead-letter queue. So the best practice is to adjust this duration according to your needs. We recommend setting it very high in the beginning, then running some tests to calculate the average processing time.</p>



<p>After that, add a 30% margin to its value to account for lengthy requests (outliers may still end up in the dead-letter queue). If you are using a Topic, you will have a Lock Duration per Subscription, so make sure to adjust each one according to its consumer&#8217;s workload.</p>
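As a rough sketch of that rule of thumb (the 90-second average is an assumed measurement, and the resource names in the commented command are placeholders, not the project's), the calculation could look like this:

```shell
#!/bin/sh
# Assumed measured average processing time, in seconds
avg=90

# Add a 30% safety margin on top of the average
lock=$(( avg + avg * 30 / 100 ))
echo "Suggested lock duration: ${lock}s"

# Applying it with the Azure CLI (ISO 8601 duration; names are placeholders):
# az servicebus queue update \
#   --resource-group my-rg --namespace-name my-namespace \
#   --name customer-events --lock-duration "PT${lock}S"
```

Keep in mind that Service Bus caps the lock duration at 5 minutes; consumers that need longer than that must renew the lock instead.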



<h2 class="wp-block-heading" id="Implementation"><strong>Implementation</strong></h2>



<p>In Figure 1 you can see the initial architecture of the Customer Management system, which was responsible for making the requests to the other systems.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="612" src="https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10-1024x612.png" alt="" class="wp-image-12299" srcset="https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10-1024x612.png 1024w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10-300x179.png 300w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10-768x459.png 768w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10-696x416.png 696w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10-703x420.png 703w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-10.png 1063w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>Figure 1 &#8211; Initial architecture diagram </figcaption></figure>



<p>With the new implementation, a message broker was introduced and we used the <a href="https://martinfowler.com/articles/201701-event-driven.html#Event-carriedStateTransfer" target="_blank" rel="noreferrer noopener">event-carried state transfer pattern</a>, meaning our events had all the information the consumer needed in order to do its job. We also considered the <strong>event notification pattern</strong>, where the consumer would have to make a request to the API that originated the event in order to get more information. But this brings new problems to the table. What if, when the consumer code runs, the information for that customer ID has changed? What if the event was <code>CUSTOMER_CREATED</code> but in the meantime the customer was deleted?</p>
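To make the difference concrete, here is a hypothetical <code>CUSTOMER_CREATED</code> event following the event-carried state transfer pattern (all field names are illustrative, not the actual project's schema). The consumer can do its job without calling back to the originating API:

```json
{
  "eventType": "CUSTOMER_CREATED",
  "eventId": "7f1c2a9e-0000-0000-0000-000000000000",
  "occurredAt": "2021-06-02T11:02:12Z",
  "data": {
    "customerId": "42",
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "status": "Active"
  }
}
```

With the event notification pattern, the payload would carry little more than <code>eventType</code> and <code>customerId</code>, forcing the consumer to query the API for the rest.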



<h3 class="wp-block-heading" id="Retries-with-Polly"><strong>Retries&nbsp;with&nbsp;Polly</strong></h3>



<p>In a distributed system, many things can go wrong. The network can fail or have additional latency, systems may be temporarily down, etc. We use the <code>Azure.Messaging.ServiceBus</code> NuGet package, so we are able to check whether an exception is a transient fault or not (more information in <a href="https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.ServiceBus_7.1.1/sdk/servicebus/Azure.Messaging.ServiceBus/README.md#exception-handling" target="_blank" rel="noreferrer noopener">these docs</a>), and then use <a href="https://github.com/App-vNext/Polly" target="_blank" rel="noreferrer noopener">Polly</a> to set up retry logic and fallbacks. There are other options to implement retry policies; for example, we considered the <a href="https://docs.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific#service-bus" target="_blank" rel="noreferrer noopener">Retry guidance for Azure Services</a> documentation from Microsoft. Since we use the latest Azure SDK, the appropriate class would be <a href="https://docs.microsoft.com/en-us/dotnet/api/azure.messaging.servicebus.servicebusretrypolicy?view=azure-dotnet" target="_blank" rel="noreferrer noopener">ServiceBusRetryPolicy</a>.<br>We configured Polly to retry publishing a message up to three times (this configuration lives in <code>appsettings.json</code>), with exponential backoff between attempts.<br>If after the third retry we still can&#8217;t publish the message, we need to save it, because it contains crucial information. To solve this, we created a Fallback Gateway that writes these messages to a container inside an Azure Storage Account.</p>
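For illustration, the retry settings in <code>appsettings.json</code> might look like the following sketch (the section and property names are hypothetical, not the project's actual configuration):

```json
{
  "ServiceBus": {
    "Publisher": {
      "MaxRetryAttempts": 3,
      "BaseDelaySeconds": 2,
      "FallbackContainerName": "unpublished-messages"
    }
  }
}
```

With exponential backoff and a base delay of 2 seconds, the attempts would then wait roughly 2, 4, and 8 seconds before giving up and handing the message to the Fallback Gateway.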



<h3 class="wp-block-heading" id="Filters-for-message-routing"><strong>Filters&nbsp;for&nbsp;message&nbsp;routing</strong></h3>



<p>This section only applies to Topic entities on the Azure Service Bus.<br>We can add <a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/topic-filters" target="_blank" rel="noreferrer noopener">Filters</a> to our Subscriptions to help route each message to its specific Consumer. We considered two filter types:</p>



<ul class="wp-block-list" style="max-width:1003px;margin-top:-19px;margin-bottom:49px"><li>SQL Filter</li><li>Correlation Filter</li></ul>



<p style="margin-bottom:12px">Using a Correlation Filter you can configure custom properties and create filters for your needs. Just make sure the producer includes the property you are filtering on in every message it sends.</p>



<p>With SQL Filters you can create conditional expressions to evaluate the current message. Just make sure that all system properties are prefixed with <em>sys.</em> in the expression. Both filter types work; choose the one that suits you best!</p>
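<p>For illustration only (the property names and values below are made up, not from our system), a SQL filter on a subscription might look like this; the equivalent correlation filter would simply match the same values exactly:</p>

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: sql; title: ; notranslate">
-- SQL filter: system properties take the sys. prefix,
-- custom (application) properties do not
sys.Label = 'OrderCreated' AND Region = 'EU'
</pre></div>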



<h3 class="wp-block-heading" id="Dead-Letter-Queue"><strong>Dead-Letter Queue</strong></h3>



<p class="has-text-align-left">If the consumer application can&#8217;t process a message after the <strong>Max Delivery Count</strong> attempts, instead of being returned to the queue it is automatically sent to the dead-letter queue. If you are using a Topic, each subscription has its own dead-letter queue, and you can configure a different Max Delivery Count for each one.</p>



<p>All messages published to the Service Bus have a TTL (Time-To-Live). When this time expires, the message is automatically moved to the dead-letter queue (provided dead-lettering on message expiration is enabled for the entity), so make sure you adjust this time to your needs.</p>



<p class="has-text-align-left">With&nbsp;this&nbsp;we&nbsp;are&nbsp;able&nbsp;to&nbsp;save&nbsp;messages&nbsp;that&nbsp;weren&#8217;t&nbsp;processed&nbsp;by&nbsp;the&nbsp;consumer&nbsp;application,&nbsp;but&nbsp;we&nbsp;should&nbsp;always&nbsp;strive&nbsp;to&nbsp;have&nbsp;an empty&nbsp;dead-letter&nbsp;queue.</p>



<h2 class="wp-block-heading" id="Conclusion">Conclusion</h2>



<p>Our first steps into an Event-Driven Architecture were a true success!<br>We were able to expand our previous solution to be compatible with multiple external systems: instead of having the API send an HTTPS request to each one, the application now sends a single message to a Topic in the Service Bus.<br>One of our goals was to lighten the load on the Publisher application. We went from a 1-to-3 dependency to a 1-to-1, as you can see in Figure 2.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="269" src="https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-1024x269.png" alt="" class="wp-image-12300" srcset="https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-1024x269.png 1024w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-300x79.png 300w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-768x202.png 768w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-1536x403.png 1536w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-696x183.png 696w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-1068x280.png 1068w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11-1600x420.png 1600w, https://blogit.create.pt/wp-content/uploads/2021/05/MicrosoftTeams-image-11.png 1840w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>Figure 2 &#8211; Architecture Diagram with Azure Service Bus</figcaption></figure>



<p>This keeps the system scalable and future-proof. Our solution became more decoupled, keeping the application agnostic to these changes.<br>If you face a similar situation, we thoroughly recommend looking into an Event-Driven Architecture and this technology&#8217;s features.</p>



<p>We would also like to share a link to the <a href="https://github.com/Azure/azure-service-bus" target="_blank" rel="noreferrer noopener">Microsoft Azure Service Bus GitHub</a> repository. Most of our publisher and subscriber implementations were inspired by this documentation, so make sure you check it out!<br>If you have any questions, please write them down below.</p>



<h2 class="wp-block-heading" id="Additional-Links"><strong>Additional Links</strong></h2>



<p><a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-exceptions" target="_blank" rel="noreferrer noopener">Service Bus Exceptions</a><br><a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview" target="_blank" rel="noreferrer noopener">Service Bus Basic Steps</a><br><a href="https://docs.microsoft.com/pt-pt/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues" target="_blank" rel="noreferrer noopener">Tutorial with DotNet</a></p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2021/06/02/case-study-azure-service-bus-and-event-driven-architectures/">Case Study: Azure Service Bus and Event-Driven Architectures</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2021/06/02/case-study-azure-service-bus-and-event-driven-architectures/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Send Email In AWS Using SQS, SNS and Lambda Function</title>
		<link>https://blogit.create.pt/guilhermeperuzzi/2019/10/21/send-email-in-aws-using-sqs-sns-and-lambda-function/</link>
					<comments>https://blogit.create.pt/guilhermeperuzzi/2019/10/21/send-email-in-aws-using-sqs-sns-and-lambda-function/#respond</comments>
		
		<dc:creator><![CDATA[Guilherme Peruzzi]]></dc:creator>
		<pubDate>Mon, 21 Oct 2019 16:08:29 +0000</pubDate>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Email]]></category>
		<category><![CDATA[Lambda]]></category>
		<category><![CDATA[SNS]]></category>
		<category><![CDATA[SQS]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=11611</guid>

					<description><![CDATA[<p>This post explains how to build an AWS infrastructure so you can send an email from AWS to a subscribed email account. We will use SQS, SNS and Lambda Function for this example. Introduction In AWS the main ways that we have to send an email are: Using Amazon Simple Email Service (SES) Using Amazon [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/guilhermeperuzzi/2019/10/21/send-email-in-aws-using-sqs-sns-and-lambda-function/">Send Email In AWS Using SQS, SNS and Lambda Function</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>This post explains how to build an AWS infrastructure so you can send an email from AWS to a subscribed email account. We will use SQS, SNS and Lambda Function for this example.</p>



<h2 class="wp-block-heading">Introduction</h2>



<p>In AWS the main ways that we have to send an email are:</p>



<ul class="wp-block-list"><li>Using Amazon Simple Email Service (SES)</li><li>Using Amazon Simple Notification Service (SNS)</li></ul>



<p>For this example we are going to use Amazon Simple Notification Service.</p>



<p>You can check the main differences between SES and SNS <a href="https://chaosgears.com/aws-ses-and-sns-your-first-notification-service/">here</a>.</p>



<h2 class="wp-block-heading">The components</h2>



<p>Now I&#8217;m going to explain the components we are going to use.</p>



<h3 class="wp-block-heading">Amazon Simple Queue Service (SQS)</h3>



<p style="text-align:justify">Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message oriented middleware, and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. </p>



<p>More Info: <a href="https://aws.amazon.com/sqs/">SQS</a></p>



<h3 class="wp-block-heading">Amazon Simple Notification Service (SNS)</h3>



<p style="text-align:justify">Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.</p>



<p>More Info: <a href="https://aws.amazon.com/sns/">SNS</a></p>



<h3 class="wp-block-heading">AWS Lambda</h3>



<p style="text-align:justify">AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume &#8211; there is no charge when your code is not running.</p>



<p style="text-align:justify">With Lambda, you can run code for virtually any type of application or backend service &#8211; all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.</p>



<p>More Info:  <a href="https://aws.amazon.com/lambda/">Lambda</a></p>



<h2 class="wp-block-heading">Objective</h2>



<p style="text-align:justify">We are going to build a system using an SQS queue to which we send the body of the email. The Lambda that long-polls the queue is triggered when a message is sent to the queue, and it publishes the message to the SNS topic so an email with the message body can be sent.</p>



<p>More Info:<a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html#sqs-long-polling"> Long Polling</a></p>



<h2 class="wp-block-heading">Let&#8217;s Go</h2>



<p style="text-align:justify">Create an SQS queue to which we will send the email body message. This queue will be a Standard Queue. This example will not have a dead-letter queue configured, but it is always good to have one.</p>



<figure class="wp-block-image"><img decoding="async" width="1892" height="690" src="https://i1.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image.png?fit=696%2C254&amp;ssl=1" alt="queue" class="wp-image-11623" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image.png 1892w, https://blogit.create.pt/wp-content/uploads/2019/10/image-300x109.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-768x280.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1024x373.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-696x254.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1068x389.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1152x420.png 1152w" sizes="(max-width: 1892px) 100vw, 1892px" /></figure>



<p>We need to create an SNS topic. An Amazon SNS topic is a logical access point which acts as a communication channel.</p>



<figure class="wp-block-image"><img decoding="async" width="1853" height="676" src="https://i0.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-1.png?fit=696%2C254&amp;ssl=1" alt="topic" class="wp-image-11624" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-1.png 1853w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1-300x109.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1-768x280.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1-1024x374.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1-696x254.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1-1068x390.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-1-1151x420.png 1151w" sizes="(max-width: 1853px) 100vw, 1853px" /></figure>



<figure class="wp-block-image"><img decoding="async" width="1842" height="665" src="https://i2.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-2.png?fit=696%2C251&amp;ssl=1" alt="topic" class="wp-image-11625" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-2.png 1842w, https://blogit.create.pt/wp-content/uploads/2019/10/image-2-300x108.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-2-768x277.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-2-1024x370.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-2-696x251.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-2-1068x386.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-2-1163x420.png 1163w" sizes="(max-width: 1842px) 100vw, 1842px" /></figure>



<p style="text-align:justify">Create a subscription to that topic. The subscription resource subscribes an endpoint to an Amazon Simple Notification Service (Amazon SNS) topic; for a subscription to be created, the owner of the endpoint must confirm it. The protocol of the subscription will be Email, and the endpoint will be the email address that will receive the messages. For this example I used a temporary mail service so that I didn&#8217;t fill my personal inbox with emails.</p>



<p>More Info: <a href="https://temp-mail.org/en/">Temp-Mail</a></p>



<figure class="wp-block-image"><img decoding="async" width="1834" height="747" src="https://i0.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-3.png?fit=696%2C283&amp;ssl=1" alt="subscription" class="wp-image-11626" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-3.png 1834w, https://blogit.create.pt/wp-content/uploads/2019/10/image-3-300x122.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-3-768x313.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-3-1024x417.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-3-696x283.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-3-1068x435.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-3-1031x420.png 1031w" sizes="(max-width: 1834px) 100vw, 1834px" /></figure>



<p style="text-align:justify">After you configure the subscription, you will receive an email with a link to confirm it.</p>



<figure class="wp-block-image"><img decoding="async" width="1127" height="734" src="https://i1.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-4.png?fit=696%2C453&amp;ssl=1" alt="email" class="wp-image-11627" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-4.png 1127w, https://blogit.create.pt/wp-content/uploads/2019/10/image-4-300x195.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-4-768x500.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-4-1024x667.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-4-696x453.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-4-1068x696.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-4-645x420.png 645w" sizes="(max-width: 1127px) 100vw, 1127px" /></figure>



<p>When you click the link you will be redirected to the subscription confirmation page.</p>



<figure class="wp-block-image"><img decoding="async" width="844" height="442" src="https://blogit.create.pt////wp-content/uploads/2019/10/image-5.png" alt="confirmed" class="wp-image-11628" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-5.png 844w, https://blogit.create.pt/wp-content/uploads/2019/10/image-5-300x157.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-5-768x402.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-5-696x364.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-5-802x420.png 802w" sizes="(max-width: 844px) 100vw, 844px" /></figure>



<p style="text-align:justify">Create the Lambda that will long-poll the queue for new messages and publish them to the SNS topic.</p>



<p style="text-align:justify">In this example I used Python to develop the Lambda and called it &#8220;EmailLambda&#8221;. Note: you need to set the Lambda&#8217;s execution role so that it has permission to long-poll the SQS queue and publish messages to the SNS topic.</p>



<p>More Info: <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazonsns.html">SNS IAM Roles</a>  <a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-using-identity-based-policies.html">SQS IAM Roles</a> </p>
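<p>As an illustration (the queue and topic names, account ID and region in the ARNs are placeholders for your own), a minimal identity-based policy for this Lambda&#8217;s execution role could look like this:</p>

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:EmailQueue"
    },
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:EmailTopic"
    }
  ]
}
</pre></div>

<p>The three SQS actions are what the SQS trigger needs to read and delete messages from the queue; <code>sns:Publish</code> lets the function publish to the topic.</p>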



<figure class="wp-block-image"><img decoding="async" width="1849" height="870" src="https://i1.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-6.png?fit=696%2C328&amp;ssl=1" alt="lambda" class="wp-image-11629" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-6.png 1849w, https://blogit.create.pt/wp-content/uploads/2019/10/image-6-300x141.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-6-768x361.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-6-1024x482.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-6-696x327.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-6-1068x503.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-6-893x420.png 893w" sizes="(max-width: 1849px) 100vw, 1849px" /></figure>



<p>Here is the Lambda code that I used.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
#!/usr/bin/python3
import os

import boto3

def lambda_handler(event, context):
    # An SQS trigger delivers a batch of records per invocation
    for record in event['Records']:
        send_request(record['body'])

def send_request(body):
    # Create an SNS client
    sns = boto3.client('sns')

    # Publish the queue message body to the configured SNS topic
    response = sns.publish(
        TopicArn=os.environ['email_topic'],
        Message=body,
    )

    # Print out the response (it ends up in CloudWatch Logs)
    print(response)
</pre></div>


<p>Set the Lambda environment variable <code>email_topic</code> to the ARN of the SNS topic we created earlier.</p>



<figure class="wp-block-image"><img decoding="async" width="1687" height="409" src="https://i1.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-7.png?fit=696%2C169&amp;ssl=1" alt="topic" class="wp-image-11631" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-7.png 1687w, https://blogit.create.pt/wp-content/uploads/2019/10/image-7-300x73.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-7-768x186.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-7-1024x248.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-7-696x169.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-7-1068x259.png 1068w" sizes="(max-width: 1687px) 100vw, 1687px" /></figure>



<p>Add the Lambda trigger and point it at the SQS queue that will receive the body of the email.</p>



<figure class="wp-block-image"><img decoding="async" width="1859" height="875" src="https://i1.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-8.png?fit=696%2C328&amp;ssl=1" alt="trigger" class="wp-image-11632" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-8.png 1859w, https://blogit.create.pt/wp-content/uploads/2019/10/image-8-300x141.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-8-768x361.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-8-1024x482.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-8-696x328.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-8-1068x503.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-8-892x420.png 892w" sizes="(max-width: 1859px) 100vw, 1859px" /></figure>



<h3 class="wp-block-heading">Let&#8217;s Test It!</h3>



<p>Now the only thing we need to do is send a new message to the SQS queue with the email body we want.</p>



<figure class="wp-block-image"><img decoding="async" width="1917" height="875" src="https://i0.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-9.png?fit=696%2C317&amp;ssl=1" alt="send message" class="wp-image-11633" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-9.png 1917w, https://blogit.create.pt/wp-content/uploads/2019/10/image-9-300x137.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-9-768x351.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-9-1024x467.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-9-696x318.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-9-1068x487.png 1068w, https://blogit.create.pt/wp-content/uploads/2019/10/image-9-920x420.png 920w" sizes="(max-width: 1917px) 100vw, 1917px" /></figure>



<figure class="wp-block-image"><img decoding="async" width="980" height="898" src="https://blogit.create.pt////wp-content/uploads/2019/10/image-10.png" alt="Send message" class="wp-image-11634" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-10.png 980w, https://blogit.create.pt/wp-content/uploads/2019/10/image-10-300x275.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-10-768x704.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-10-696x638.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-10-458x420.png 458w" sizes="(max-width: 980px) 100vw, 980px" /></figure>
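<p>If you prefer to send the message programmatically instead of through the console, a small boto3 sketch follows (the queue URL is a placeholder; building the request is split into a helper purely so it can be exercised without AWS credentials):</p>

<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
def build_send_request(queue_url, email_body):
    """Build the keyword arguments for sqs.send_message."""
    return {"QueueUrl": queue_url, "MessageBody": email_body}

# With AWS credentials configured, the actual send would be:
#   import boto3
#   sqs = boto3.client('sqs')
#   sqs.send_message(**build_send_request(
#       'https://sqs.us-east-1.amazonaws.com/123456789012/EmailQueue',
#       'Hello from SQS, SNS and Lambda!'))
</pre></div>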



<p>Now we will receive an email from no-reply@sns.amazonaws.com with the body of the queue message.</p>



<figure class="wp-block-image"><img decoding="async" width="1101" height="407" src="https://i0.wp.com/blogit.create.pt/wp-content/uploads/2019/10/image-11.png?fit=696%2C258&amp;ssl=1" alt="Email" class="wp-image-11635" srcset="https://blogit.create.pt/wp-content/uploads/2019/10/image-11.png 1101w, https://blogit.create.pt/wp-content/uploads/2019/10/image-11-300x111.png 300w, https://blogit.create.pt/wp-content/uploads/2019/10/image-11-768x284.png 768w, https://blogit.create.pt/wp-content/uploads/2019/10/image-11-1024x379.png 1024w, https://blogit.create.pt/wp-content/uploads/2019/10/image-11-696x257.png 696w, https://blogit.create.pt/wp-content/uploads/2019/10/image-11-1068x395.png 1068w" sizes="(max-width: 1101px) 100vw, 1101px" /></figure>



<p>And there you have it. An email notification in AWS using SQS, SNS and Lambda.</p>



<p>Thanks for reading this post and have an&#8230;</p>



<figure class="wp-block-image"><img decoding="async" src="https://d2cnjxvu6pstmv.cloudfront.net/2016/03/14165712/awesomeday1.jpg" alt="Image result for aws awesome day" /></figure>



<p></p>
<p>The post <a href="https://blogit.create.pt/guilhermeperuzzi/2019/10/21/send-email-in-aws-using-sqs-sns-and-lambda-function/">Send Email In AWS Using SQS, SNS and Lambda Function</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/guilhermeperuzzi/2019/10/21/send-email-in-aws-using-sqs-sns-and-lambda-function/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>NoSQL First Act &#8211; a historical introduction</title>
		<link>https://blogit.create.pt/goncalomelo/2019/08/13/nosql-first-act-a-historical-introduction/</link>
					<comments>https://blogit.create.pt/goncalomelo/2019/08/13/nosql-first-act-a-historical-introduction/#respond</comments>
		
		<dc:creator><![CDATA[Gonçalo Melo]]></dc:creator>
		<pubDate>Tue, 13 Aug 2019 17:57:42 +0000</pubDate>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Databases]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Sql]]></category>
		<category><![CDATA[Cluster]]></category>
		<category><![CDATA[Database history]]></category>
		<category><![CDATA[Database model]]></category>
		<category><![CDATA[Horizontal scale]]></category>
		<category><![CDATA[impedance mismatch]]></category>
		<category><![CDATA[NoSQL]]></category>
		<category><![CDATA[Relational database]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[Vertical scale]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=9739</guid>

					<description><![CDATA[<p>NoSQL databases introduction and dominant features. A historical perspective to their appearance in a world dominated by relational model databases.</p>
<p>The post <a href="https://blogit.create.pt/goncalomelo/2019/08/13/nosql-first-act-a-historical-introduction/">NoSQL First Act &#8211; a historical introduction</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>NoSQL gets a lot of &#8220;heat&#8221; for not having a good direct definition, and the term NoSQL only gives some clues about what it is not.</p>



<p>NoSQL reads like a new definition of something that is a database but differs from the usual relational model. To draw a parallel, go back 30 years: back then, probably no one knew what a relational database was either. Let&#8217;s use that as our starting point for a small historical and motivational overview of NoSQL. PS: I was not a developer back then.</p>



<p>This article is the first in a series; the goal is to bring together some knowledge about this topic.</p>



<h2 class="wp-block-heading">Starting in the 1980&#8217;s</h2>



<p>Relational databases emerged bringing the ACID properties (Atomicity, Consistency, Isolation and Durability), properties taken for granted nowadays. They also brought the SQL language, which is common enough across different systems to rely on; although there are different flavors, it can almost be considered a standard.</p>



<p>Relational databases also allowed for a simple and very common integration mechanism between two or more systems: data can easily be shared by reading and writing a table on a shared relational database. This is still a very common integration pattern today.</p>



<h2 class="wp-block-heading">In the 1990&#8217;s</h2>



<p>Object databases started to appear with more strength. They had been around for some time but, most importantly, they implemented a different database paradigm. This new paradigm tried to solve the impedance mismatch problem caused by relational databases: it emerges from the need to map the objects held in memory in our application model to tables in a relational database. This page <a href="https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch">https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch</a> is a good reference for the issue.</p>



<p>Saving an application object into this database model should be a direct operation: conceptually, no mapping is necessary when an object database is used. Despite these gains, object databases were not able to replace relational databases.</p>



<h2 class="wp-block-heading">In the 2000&#8217;s</h2>



<p>As internet availability grew, two things started to have a big impact:</p>



<ul class="wp-block-list"><li>Generation of enormous amounts of data that had to be stored and processed;</li><li>Accesses from anywhere in the globe became more and more frequent, so latency became an issue to consider. Even the speed-of-light limit contributes significantly: a distance of 10,000 km adds roughly 100 ms to each round trip. Data therefore needs to spread around the globe to provide quick access.</li></ul>



<p>In this environment, relational databases do not always solve customer needs, whether because of flexibility, price or performance issues:</p>



<ul class="wp-block-list"><li>Relational databases enforce a rigid set of rules in the relational model, which impacts the flexibility of application development;</li><li>Price, because of the need for more powerful machines and the software licenses that go with them;</li><li>Performance, because relational databases do not naturally meet an ever-increasing workload: huge CPU, memory, storage and throughput are needed to perform in this environment, and for the relational model, vertical scaling can only get you so far.</li></ul>



<h2 class="wp-block-heading">Scaling Vertically versus Horizontally</h2>



<p>The usual scaling method for relational databases is vertical: whenever the workload increases, we move to a bigger machine with more hardware resources.</p>



<p>On the other hand, the big internet companies adopted a horizontal scaling paradigm: a cluster-type environment with lots and lots of machines. In other words, more machines are added to handle bigger workloads.</p>



<p>Relational databases do not thrive in this paradigm; they hardly benefit from using more machines. Hence NoSQL-style databases arose to take advantage of horizontal scaling. Companies like Google and Amazon started researching in this area; as a result, Google created BigTable and Amazon created DynamoDB.</p>



<h2 class="wp-block-heading">Relational databases dominated the market, why are NoSQL databases being used now?</h2>



<p>I think this is important. Dominance and usage are usually driven by a combination of several factors. What has been changing:</p>



<ul class="wp-block-list"><li>Developers hide databases behind integration layers. This makes them simpler to use and, in addition, easier to replace one database or persistence method with another;</li><li>Cloud growth in some cases means we can try different approaches with less infrastructure effort;</li><li>Handling big quantities of data introduced new needs. NoSQL provides good development flexibility and, in addition, can take advantage of cluster solutions.</li></ul>



<p>Finally, what is NoSQL?</p>



<h2 class="wp-block-heading">Common NoSQL databases characteristics</h2>



<p>NoSQL doesn&#8217;t have a clear direct meaning. The term alludes to something like &#8220;Not Only SQL&#8221;; a more exact description would be &#8220;non-relational database&#8221;, which reinforces the new paradigm rather than the SQL language itself. The fact that some NoSQL databases actually support some form of SQL adds even more to the confusion.</p>



<p>But the name is catchy and so common that it will probably stay. The point is: defining NoSQL is hard. Therefore, we will do the next best thing and introduce the dominant traits of these database systems:</p>



<ul class="wp-block-list"><li>Non-relational;</li><li>Usually (but not all) cluster-friendly;</li><li>Most of them open source;</li><li>No-schema / schema-less;</li><li>Driven largely by big internet workloads (cluster- and big-data-friendly).</li></ul>



<p>By these common traits, almost any non-relational database can be a NoSQL database! There are probably published studies in all these areas dating back 30 or 40 years. Out of curiosity, the name NoSQL itself is said to have been coined in 2009, as a Twitter hashtag for a meetup where people talked about these subjects, even though some of these traits were not new at the time.</p>



<h2 class="wp-block-heading">Next steps</h2>



<p>NoSQL has a lot to explore. It provides a more direct paradigm for needs that are awkward in relational systems, such as the ones covered in <a href="https://blogit.create.pt/diogoguiomar/2018/02/26/query-a-json-array-column-in-sql/">query Json objects inside Sql Server</a> and the related <a href="https://blogit.create.pt/goncalomelo/2018/12/20/query-performance-for-json-objects-inside-sql-server/">performance considerations</a>.</p>



<p>My plan for the next article(s) will be to:</p>



<ul class="wp-block-list"><li>Present some of the NoSQL data models;</li><li>Talk about the no-schema / schema-less feature;</li><li>Discuss the aggregate concept and how it relates to the CAP theorem.</li></ul>



<h2 class="wp-block-heading">Further reads</h2>



<p>There is a lot of information online. Personally, I like Martin Fowler&#8217;s approach, and some of the information here comes from his content. You can check his website here: <a href="https://martinfowler.com/nosql.html">https://martinfowler.com/nosql.html</a>. He also has a book: <a href="https://martinfowler.com/books/nosql.html">https://martinfowler.com/books/nosql.html</a>.</p>



<p>The post <a href="https://blogit.create.pt/goncalomelo/2019/08/13/nosql-first-act-a-historical-introduction/">NoSQL First Act &#8211; a historical introduction</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/goncalomelo/2019/08/13/nosql-first-act-a-historical-introduction/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Straight A&#8217;s in WebPagetest with Umbraco</title>
		<link>https://blogit.create.pt/andresantos/2018/11/27/straight-as-in-webpagetest-with-umbraco/</link>
					<comments>https://blogit.create.pt/andresantos/2018/11/27/straight-as-in-webpagetest-with-umbraco/#respond</comments>
		
		<dc:creator><![CDATA[André Santos]]></dc:creator>
		<pubDate>Tue, 27 Nov 2018 21:40:06 +0000</pubDate>
				<category><![CDATA[Umbraco]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure cdn]]></category>
		<category><![CDATA[blobstorage]]></category>
		<category><![CDATA[cdn]]></category>
		<category><![CDATA[imageprocessor]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[UmbracoFileSystemProviders.Azure]]></category>
		<category><![CDATA[WebPagetest]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=7450</guid>

					<description><![CDATA[<p>Before launching a new website, there&#8217;s a checklist I go through, to make sure that everything is ready. One of the items in my checklist is to test the website against WebPagetest. WebPagetest is a tool that was originally developed by AOL for use internally and was open-sourced in 2008 under a BSD license. The [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/andresantos/2018/11/27/straight-as-in-webpagetest-with-umbraco/">Straight A&#8217;s in WebPagetest with Umbraco</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><span class="dropcap dropcap3">B</span>efore launching a new website, there&#8217;s a checklist I go through, to make sure that everything is ready. One of the items in my checklist is to test the website against <strong>WebPagetest</strong>.</p>
<blockquote class="td_quote_box td_box_right"><p>WebPagetest is a tool that was originally developed by <a href="http://dev.aol.com/">AOL</a> for use internally and was open-sourced in 2008 under a BSD license. The online version at <a href="https://www.webpagetest.org/">www.webpagetest.org</a> is run for the benefit of the performance community with several companies and individuals providing the testing infrastructure around the globe.</p></blockquote>
<p>This tool tests any website against six major performance-affecting factors, and provides a myriad of graphs and logs that make abundantly clear what might be slowing down your site.</p>
<p>In this post I&#8217;ll provide ways to make your site get straight A&#8217;s in WebPagetest.</p>
<p><span id="more-7450"></span></p>
<h1>Setup</h1>
<p>To start this off, let&#8217;s setup our environment. We&#8217;ll just need the following:</p>
<ul>
<li>Visual Studio 2017</li>
<li>Microsoft Azure account</li>
</ul>
<p>In Visual Studio, let&#8217;s create a new empty ASP.NET Web Application project. Then, we&#8217;ll need the latest and greatest Umbraco NuGet package (I used version 7.12.4). Once it finishes installing, just launch the website and install Umbraco with all defaults. This will bootstrap Umbraco with the Starter Website, which we&#8217;ll use as our &#8220;guinea pig&#8221; for WebPagetest.</p>
<p>Next: publish it! We can use the publish wizard to automatically create our new WebApp and SQL Database in Azure. Before installing Umbraco in Azure, we&#8217;ll need to change the Web.config so that the install wizard is run again (I use <a href="https://filezilla-project.org/">Filezilla</a> to change it in Azure):</p>
<p>Clear the Umbraco version number:</p>
<pre class="brush: xml; title: Web.config; notranslate">
&lt;add key=&quot;umbracoConfigurationStatus&quot; value=&quot;&quot; /&gt;
</pre>
<p>Clear the Umbraco connection string:</p>
<pre class="brush: xml; title: Web.config; notranslate">
&lt;add name=&quot;umbracoDbDSN&quot; connectionString=&quot;&quot; providerName=&quot;System.Data.SqlClient&quot; /&gt;
</pre>
<p>With these changes in place, we&#8217;re good to go. This time, we&#8217;ll not use the defaults in the Umbraco install wizard, since we&#8217;ll want to use the SQL Database we&#8217;ve just created in Azure.</p>
<h1>First test</h1>
<p>For our tests, we&#8217;ll use the people page of the Starter Website. This is the score I got with a standard (S0) database and a basic WebApp:</p>
<p><img decoding="async" class="size-medium wp-image-7454 aligncenter" src="https://blogit.create.pt/wp-content/uploads/2018/09/first-webpagetest-300x91.png" alt="" width="300" height="91" srcset="https://blogit.create.pt/wp-content/uploads/2018/09/first-webpagetest-300x91.png 300w, https://blogit.create.pt/wp-content/uploads/2018/09/first-webpagetest-768x233.png 768w, https://blogit.create.pt/wp-content/uploads/2018/09/first-webpagetest-696x211.png 696w, https://blogit.create.pt/wp-content/uploads/2018/09/first-webpagetest.png 841w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>Since this is a very small site, used only for demonstration purposes, half of the metrics already get an A! However, this is usually not the case for bigger websites, so I&#8217;ll still present some solutions to improve the grade for these metrics.</p>
<h1>First Byte Time</h1>
<p><em>This test measures the time it takes for the first byte to reach the client&#8217;s browser after the initial HTTP request.</em></p>
<p>There are two main factors that influence this result:</p>
<ul>
<li>Server power</li>
<li>The webpage complexity (integrations with external services, complex logic involved, etc)</li>
</ul>
<h3>How to get an A</h3>
<p>The easiest way to mitigate this problem is caching. You can see how to do output caching in Umbraco in this older post of mine: <a href="https://blogit.create.pt/andresantos/2016/06/30/umbraco-and-donut-output-cache/">https://blogit.create.pt/andresantos/2016/06/30/umbraco-and-donut-output-cache/</a>.</p>
<h1>Keep-alive Enabled</h1>
<p><em>Keep-alive is a method that reuses the same TCP connection for the HTTP conversation instead of opening a new one for each request.</em></p>
<h3>How to get an A</h3>
<p>This setting is active by default in IIS, and therefore also in Azure WebApps, so this A comes for free!</p>
<h1>Compress Transfer</h1>
<p><em>Gzip compresses your webpages, style sheets and JavaScript files before sending them over to the browser. This drastically reduces transfer time, since the files are much smaller.</em></p>
<h3>How to get an A</h3>
<p>Just add this setting to your Web.config file:</p>
<pre class="brush: xml; title: Web.config; notranslate">
&lt;httpCompression dynamicCompressionEnableCpuUsage=&quot;0&quot; dynamicCompressionDisableCpuUsage=&quot;90&quot; noCompressionForHttp10=&quot;false&quot; noCompressionForProxies=&quot;false&quot;&gt;
    &lt;staticTypes&gt;
        &lt;add mimeType=&quot;text/*&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;message/*&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;application/javascript&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;application/font-woff&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;application/font-woff2&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;application/vnd.ms-fontobject&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;application/octet-stream&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;*/*&quot; enabled=&quot;false&quot; /&gt;
    &lt;/staticTypes&gt;
    &lt;dynamicTypes&gt;
        &lt;add mimeType=&quot;text/*&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;message/*&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;application/javascript&quot; enabled=&quot;true&quot; /&gt;
        &lt;add mimeType=&quot;*/*&quot; enabled=&quot;false&quot; /&gt;
    &lt;/dynamicTypes&gt;
&lt;/httpCompression&gt;
</pre>
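To see why compression matters so much, here is a quick Python check (illustrative only; the sample string is a made-up stand-in for typical repetitive web text such as CSS):

```python
import gzip

# Web text (HTML/CSS/JS) tends to be highly repetitive, which gzip exploits.
page = ("color: #333; margin: 0 auto; " * 800).encode("utf-8")
compressed = gzip.compress(page)

print(len(page), "bytes ->", len(compressed), "bytes")
# The compressed payload is a small fraction of the original size.
assert len(compressed) < len(page) // 10
```

The more repetitive the content, the better the ratio, which is why enabling gzip on text-based MIME types is such an easy win.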
<h1>Compress Images</h1>
<p><em><span class="ILfuVd">Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level.</span></em></p>
<h3>How to get an A</h3>
<p>In Umbraco, to get an A in this grade, you need to do two things:</p>
<ol>
<li>Crop every image and use srcsets where you can</li>
<li>Use the PostProcessor plugin for Image Processor</li>
</ol>
<p>Cropping an image is easy in Umbraco:</p>
<pre class="brush: xml; title: People.cshtml; notranslate">
&lt;div class=&quot;employee-grid__item__image&quot; style=&quot;background-image: url('@person.Photo.GetCropUrl(width: 323, height: 300, quality: 85)')&quot;&gt;&lt;/div&gt;
</pre>
<p>In order to use the PostProcessor plugin, you just need to install it via nuget: <a href="https://www.nuget.org/packages/ImageProcessor.Web.PostProcessor/1.3.1.25">https://www.nuget.org/packages/ImageProcessor.Web.PostProcessor/1.3.1.25</a>.</p>
<h1>Cache Static Content</h1>
<p><em>Static content is content that changes rarely. For this reason it can be cached in the user&#8217;s browser to avoid downloading the same file over and over again.</em></p>
<h3>How to get an A</h3>
<p>Just set the time it takes for the content to expire in the user&#8217;s browser and add extra mime types if you want:</p>
<pre class="brush: xml; title: Web.config; notranslate">
&lt;staticContent&gt;
    &lt;clientCache cacheControlMode=&quot;UseMaxAge&quot; cacheControlMaxAge=&quot;7.24:00:00&quot; /&gt;
    &lt;remove fileExtension=&quot;.air&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.air&quot; mimeType=&quot;application/vnd.adobe.air-application-installer-package+zip&quot; /&gt;
    &lt;remove fileExtension=&quot;.svg&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.svg&quot; mimeType=&quot;image/svg+xml&quot; /&gt;
    &lt;remove fileExtension=&quot;.woff&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.woff&quot; mimeType=&quot;application/x-font-woff&quot; /&gt;
    &lt;remove fileExtension=&quot;.woff2&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.woff2&quot; mimeType=&quot;application/x-font-woff2&quot; /&gt;
    &lt;remove fileExtension=&quot;.less&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.less&quot; mimeType=&quot;text/css&quot; /&gt;
    &lt;remove fileExtension=&quot;.mp4&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.mp4&quot; mimeType=&quot;video/mp4&quot; /&gt;
    &lt;remove fileExtension=&quot;.json&quot; /&gt;
    &lt;mimeMap fileExtension=&quot;.json&quot; mimeType=&quot;application/json&quot; /&gt;
&lt;/staticContent&gt;
</pre>
<h1>Effective use of CDN</h1>
<p><em> A content delivery network (CDN) refers to a geographically distributed group of servers which work together to provide fast delivery of Internet content. A CDN allows for the quick transfer of assets needed for loading Internet content including HTML pages, javascript files, stylesheets, images, and videos.</em></p>
<h3>How to get an A</h3>
<p>In Umbraco, you can achieve this last grade by doing two things:</p>
<ol>
<li>Use an Azure Blob Storage for media storage by installing this nuget package: <a href="https://github.com/JimBobSquarePants/UmbracoFileSystemProviders.Azure">https://github.com/JimBobSquarePants/UmbracoFileSystemProviders.Azure</a>.</li>
<li>Create an Azure CDN for serving these blobs through a content delivery network.</li>
</ol>
<p>After creating the Azure CDN service and waiting about an hour for it to become available, I used the following configuration to serve the media assets through it:</p>
<pre class="brush: xml; title: config/imageprocessor/security.config; notranslate">
&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;security&gt;
  &lt;services&gt;
    &lt;service name=&quot;LocalFileImageService&quot; type=&quot;ImageProcessor.Web.Services.LocalFileImageService, ImageProcessor.Web&quot; /&gt;
    &lt;!-- Disable the LocalFileImageService and enable this one when using virtual paths. --&gt;
    &lt;service prefix=&quot;media/&quot; name=&quot;CloudImageService&quot; type=&quot;ImageProcessor.Web.Services.CloudImageService, ImageProcessor.Web&quot;&gt;
      &lt;settings&gt;
        &lt;setting key=&quot;Container&quot; value=&quot;media&quot; /&gt;
        &lt;setting key=&quot;MaxBytes&quot; value=&quot;8194304&quot; /&gt;
        &lt;setting key=&quot;Timeout&quot; value=&quot;30000&quot; /&gt;
        &lt;setting key=&quot;Host&quot; value=&quot;https://&lt;umbracositename&gt;.blob.core.windows.net/media&quot; /&gt;
      &lt;/settings&gt;
    &lt;/service&gt;
  &lt;/services&gt;
&lt;/security&gt;
</pre>
<pre class="brush: xml; title: config/imageprocessor/cache.config; notranslate">
&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;
&lt;caching currentCache=&quot;AzureBlobCache&quot;&gt;
  &lt;caches&gt;
    &lt;cache name=&quot;AzureBlobCache&quot; type=&quot;ImageProcessor.Web.Plugins.AzureBlobCache.AzureBlobCache, ImageProcessor.Web.Plugins.AzureBlobCache&quot; maxDays=&quot;365&quot;&gt;
      &lt;settings&gt;
        &lt;setting key=&quot;CachedStorageAccount&quot; value=&quot;DefaultEndpointsProtocol=https;AccountName=&lt;accountname&gt;;AccountKey=&lt;accountkey&gt;;EndpointSuffix=core.windows.net&quot; /&gt;
        &lt;setting key=&quot;CachedBlobContainer&quot; value=&quot;cache&quot; /&gt;
        &lt;setting key=&quot;UseCachedContainerInUrl&quot; value=&quot;false&quot; /&gt;
        &lt;setting key=&quot;SourceStorageAccount&quot; value=&quot;DefaultEndpointsProtocol=https;AccountName=&lt;accountname&gt;;AccountKey=&lt;accountkey&gt;;EndpointSuffix=core.windows.net&quot; /&gt;
        &lt;setting key=&quot;SourceBlobContainer&quot; value=&quot;media&quot; /&gt;
        &lt;setting key=&quot;StreamCachedImage&quot; value=&quot;false&quot; /&gt;
        &lt;setting key=&quot;CachedCDNRoot&quot; value=&quot;https://&lt;cdnrootname&gt;.azureedge.net&quot; /&gt;
        &lt;setting key=&quot;CachedCDNTimeout&quot; value=&quot;1000&quot; /&gt;
      &lt;/settings&gt;
    &lt;/cache&gt;
  &lt;/caches&gt;
&lt;/caching&gt;
</pre>
<h1>Conclusion</h1>
<p>So, if you followed these tips correctly, you&#8217;ll be able to run WebPagetest and get the same result as I did:</p>
<p><img decoding="async" class="wp-image-7937 size-medium aligncenter" src="https://blogit.create.pt/wp-content/uploads/2018/11/straightAs-300x89.png" alt="straightAs" width="300" height="89" srcset="https://blogit.create.pt/wp-content/uploads/2018/11/straightAs-300x89.png 300w, https://blogit.create.pt/wp-content/uploads/2018/11/straightAs-768x228.png 768w, https://blogit.create.pt/wp-content/uploads/2018/11/straightAs-696x207.png 696w, https://blogit.create.pt/wp-content/uploads/2018/11/straightAs.png 837w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>You can find the complete report here: <a href="https://www.webpagetest.org/result/181127_2A_bea6941dcd20d38ab54c29409fca9363/">https://www.webpagetest.org/result/181127_2A_bea6941dcd20d38ab54c29409fca9363/</a>.</p>
<p>The post <a href="https://blogit.create.pt/andresantos/2018/11/27/straight-as-in-webpagetest-with-umbraco/">Straight A&#8217;s in WebPagetest with Umbraco</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/andresantos/2018/11/27/straight-as-in-webpagetest-with-umbraco/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Specifications</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 18:00:04 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=954</guid>

<description><![CDATA[<p>Internet Connection Create IT has an Internet connection of 100 Mbps down/20 Mbps up. Azure was capped at a 150 Mbps symmetrical connection. TeamViewer VPN, Azure Site-to-Site and Point-to-Site connections were capped at 10 Mbps. Azure plans In Azure, the most economical plans were chosen, considering our requirements. Some plans were free, some were cheaper than the others, but [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/">Latency test between Azure and On-Premises – Specifications</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center"><strong><em>Internet Connection</em></strong></p>
<p>Create IT has an Internet connection of 100 Mbps down/20 Mbps up. Azure was capped at a 150 Mbps symmetrical connection.</p>
<p>TeamViewer VPN, Azure Site-to-Site and Point-to-Site connections were capped at 10 Mbps.</p>
<p><span id="more-954"></span></p>
<p style="text-align: center"><strong><em>Azure plans</em></strong></p>
<p>In Azure, the most economical plans that met our requirements were chosen. Some plans were free and some were cheaper than others, but since we needed VPN capabilities and support for all our configurations, we picked the cheapest plans available that offered all of these features.</p>
<p style="text-align: center"><strong><em>Browsers</em></strong></p>
<p>We used Chrome, Firefox and Edge to rule out browser differences. Since total execution times showed no differences between them, we settled on Chrome as the default testing browser.</p>
<p style="text-align: center"><strong><em>LAN Connection</em></strong></p>
<p>LAN-wise, our internal network is a 1 Gbps Local Area Network.</p>
<p style="text-align: center"><strong><em>On-Premises Service Host</em></strong></p>
<p>The On-Premises Service was hosted on a machine running Windows 10 Retail with the latest updates installed, using IIS Express. All code was written with Visual Studio 2017 Enterprise. The relevant host hardware specifications are:</p>
<ol>
<li><em>Intel® Core™ i7-6700HQ CPU @ 2.6GHz</em></li>
<li><em>32GB of RAM DDR4 @ 3400MHz</em></li>
<li><em>NVMe M.2 PCI-e 240GB SSD</em></li>
</ol>
<p style="text-align: center"><strong><em>Testing Hours</em></strong></p>
<p>All tests were executed during working hours, 9am to 6pm GMT.</p>
<p style="text-align: center"><strong><em>The Writer</em></strong></p>
<p>I’m a consultant @ Create It, a Portuguese company. If you read this, make sure I know about it! Hugs and kisses! This was an absolute pleasure to make.</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/">Latency test between Azure and On-Premises – Specifications</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Latency test between Azure and On-Premises – Conclusions</title>
		<link>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/</link>
					<comments>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/#respond</comments>
		
		<dc:creator><![CDATA[Gustavo Brito]]></dc:creator>
		<pubDate>Mon, 27 Nov 2017 17:50:52 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<category><![CDATA[azure]]></category>
		<category><![CDATA[delay]]></category>
		<category><![CDATA[Hybrid]]></category>
		<category><![CDATA[Hybrid Cloud]]></category>
		<category><![CDATA[Integration]]></category>
		<category><![CDATA[Latency]]></category>
		<category><![CDATA[On-Premises]]></category>
		<category><![CDATA[webservices]]></category>
		<guid isPermaLink="false">http://blogit.create.pt/gustavobrito/?p=884</guid>

<description><![CDATA[<p>And when all testing’s complete… A final review is coming! So, brace yourselves and let’s start with a graph. Note: 5MB results must be multiplied by 10 (value x 10) Here are the results. All of them. Don’t forget to multiply 5MB results by 10! With the graph below, we can check how message [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/">Latency test between Azure and On-Premises – Conclusions</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
					<content:encoded><![CDATA[<p>And when all testing’s complete… a final review is coming! So, brace yourselves and let’s start with a graph.</p>
<p><img decoding="async" class="aligncenter size-full wp-image-894" src="https://blogit.create.pt/wp-content/uploads/2017/11/net-12.png" alt="" width="624" height="369" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-12.png 624w, https://blogit.create.pt/wp-content/uploads/2017/11/net-12-300x177.png 300w" sizes="(max-width: 624px) 100vw, 624px" /></p>
<p style="text-align: center"><strong>Note: 5MB results must be multiplied by 10 (value x 10)</strong></p>
<p>Here are the results. All of them. Don’t forget to multiply 5MB results by 10!</p>
<p><span id="more-884"></span></p>
<p>With the graph below, we can check how message sizes influence latency. After doing some math, these are the results:</p>
<p><img decoding="async" class="aligncenter size-full wp-image-904" src="https://blogit.create.pt/wp-content/uploads/2017/11/net-13.png" alt="" width="576" height="334" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-13.png 576w, https://blogit.create.pt/wp-content/uploads/2017/11/net-13-300x174.png 300w" sizes="(max-width: 576px) 100vw, 576px" /></p>
<p>&nbsp;</p>
<p>As we can see, latency grows at an exponential rate: the bigger the message, the disproportionately greater the latency. We can confirm that this applies to every scenario.</p>
<p><strong>But what does it tell me? </strong>This tells you that you need to be careful if you’re planning on sending big messages between two points. <strong>This exponential growing rate applies to all scenarios!</strong></p>
<p>With all values listed and noted, we gave each test a score: the three results (10KB, 100KB and 5MB) added together and divided by three, i.e. the average execution time. For reference, below is an exponential curve, which climbs ever faster towards infinity; the horizontal axis represents message size and the vertical axis latency:</p>
<p><img decoding="async" class="aligncenter size-full wp-image-914" src="https://blogit.create.pt/wp-content/uploads/2017/11/net-14.png" alt="" width="415" height="188" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-14.png 415w, https://blogit.create.pt/wp-content/uploads/2017/11/net-14-300x136.png 300w" sizes="(max-width: 415px) 100vw, 415px" /></p>
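The scoring arithmetic described above can be sketched in a few lines of Python. The readings are made-up illustrative numbers, not the measured results, and the 5MB chart value is multiplied back by 10 as per the note:

```python
def score(ms_10kb: float, ms_100kb: float, ms_5mb_chart: float) -> float:
    """Average execution time across the three message sizes.
    The charts plot the 5MB result divided by 10, so it is
    multiplied back before averaging."""
    ms_5mb = ms_5mb_chart * 10
    return (ms_10kb + ms_100kb + ms_5mb) / 3

# Hypothetical chart readings: 30, 60 and 30 (the last one shown /10).
print(score(30, 60, 30))  # -> 130.0
```

A lower score therefore means a faster scenario overall, with the 5MB transfer dominating the average.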
<p>&nbsp;</p>
<p style="text-align: center"><strong>Now for the results!</strong></p>
<p><img decoding="async" class="aligncenter size-full wp-image-924" src="https://blogit.create.pt/wp-content/uploads/2017/11/net-15.png" alt="" width="576" height="336" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/net-15.png 576w, https://blogit.create.pt/wp-content/uploads/2017/11/net-15-300x175.png 300w" sizes="(max-width: 576px) 100vw, 576px" /></p>
<p>As we can check, <strong>Test #2 (Local On-Premises LAN) was the winner here, latency-wise.</strong> This was expected. Although it is the fastest, it is also the riskiest and <strong>NOT RECOMMENDED! Latency is indeed reduced, but with only a ~53ms advantage over Azure Site-to-Site on a 100KB message,</strong> is it worth having a full infrastructure depending on your own maintenance? And the networking gear it requires? What about having someone responsible for the server room 24/7? How about the costs? <strong>Think about it!</strong> Think about the savings of migrating your business to the Cloud!</p>
<p><strong>Second place</strong> goes to HTTP without VPN (exposed services). <strong>Not a Cloud solution, and a risky one too! </strong>If you like hackers messing with your vital business services and trying to break in from all over the world 24/7, go right ahead!</p>
<p>Regarding fast and safe Cloud solutions, which is the main reason for reading this document after all, <strong>the winner was Test #4 (Azure Site-to-Site VPN), taking third place on the board by a difference of 8 points! </strong>It proved to be the <strong>safest and most efficient</strong> method of extending your office or network, <strong>with 99.9% uptime</strong> and low-latency data exchange. <strong>This is the optimal and safest solution. No hackers, no maintenance, no worries!</strong></p>
<blockquote>
<p style="text-align: center">Consult us to migrate your business! <strong>You’re at the doorstep to your future!</strong></p>
</blockquote>
<p><strong> <img decoding="async" class="aligncenter size-full wp-image-934" src="https://blogit.create.pt/wp-content/uploads/2017/11/create.png" alt="" width="302" height="101" srcset="https://blogit.create.pt/wp-content/uploads/2017/11/create.png 302w, https://blogit.create.pt/wp-content/uploads/2017/11/create-300x100.png 300w" sizes="(max-width: 302px) 100vw, 302px" /></strong></p>
<p>&nbsp;</p>
<p><strong>For full test specifications, <a href="http://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-specifications">read here</a>!</strong></p>
<p>The post <a href="https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/">Latency test between Azure and On-Premises – Conclusions</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-conclusions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
