<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Misc Archives - Blog IT</title>
	<atom:link href="https://blogit.create.pt/category/misc/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogit.create.pt/category/misc/</link>
	<description>Create IT blogger community</description>
	<lastBuildDate>Sun, 08 Feb 2026 16:55:24 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>Book Review: Co-Intelligence by Ethan Mollick</title>
		<link>https://blogit.create.pt/davidpereira/2026/02/08/book-review-co-intelligence-by-ethan-mollick/</link>
					<comments>https://blogit.create.pt/davidpereira/2026/02/08/book-review-co-intelligence-by-ethan-mollick/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Sun, 08 Feb 2026 16:45:54 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13597</guid>

					<description><![CDATA[<p>Table of Contents Introduction We recently finished reading&#160;Co-Intelligence: Living and Working with AI&#160;by Ethan Mollick in our company&#8217;s book club. The book shares four core principles for AI collaboration and outlines various practical applications. Some really stuck with me, and I&#8217;ve tried to incorporate them in my work. Reading the author&#8217;s perspective and learning his [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2026/02/08/book-review-co-intelligence-by-ethan-mollick/">Book Review: Co-Intelligence by Ethan Mollick</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Table of Contents</h2>



<ul style="max-width:985px" class="wp-block-list">
<li>Introduction</li>



<li>AI as a Thinking Companion</li>



<li>The Human-in-the-Loop Principle
<ul style="max-width:960px" class="wp-block-list">
<li>Critical Thinking</li>



<li>Disruption in the job market</li>
</ul>
</li>



<li>Centaur vs Cyborg approaches</li>



<li>Resources</li>



<li>Conclusion</li>
</ul>



<h2 class="wp-block-heading">Introduction</h2>



<p>We recently finished reading&nbsp;<em>Co-Intelligence: Living and Working with AI</em>&nbsp;by Ethan Mollick in our company&#8217;s book club. The book shares four core principles for AI collaboration and outlines various practical applications. Some really stuck with me, and I&#8217;ve tried to incorporate them in my work. Reading the author&#8217;s perspective and learning his way of thinking definitely improved how I look at these tools. But if you know me, you know how skeptical I am. There are some chapters and opinions that I don&#8217;t agree with.</p>



<p>So in this post, I&#8217;ll share the key insights from our book club in the context of software development, plus some personal opinions as always 🙂.</p>



<h2 class="wp-block-heading">AI as a Thinking Companion</h2>



<p>One of the most practical takeaways for me was viewing AI as a co-worker and thinking companion. When done right, this can be incredibly useful. Some people use it heavily for deep research, not so much for delegating tasks to it.&nbsp;<a href="https://www.linkedin.com/in/andredsantos/">André Santos</a>&nbsp;gave some examples of tasks where it has been useful, like writing Terraform code or generating bash scripts. For those tasks, we can write a detailed prompt, provide proper documentation (e.g. via the Context7 MCP), and ask it to write the Terraform, since that&#8217;s simpler and faster.</p>



<p>Even just building a POC or demo, turning an idea you have into working software to see how viable it is, is a perfect use case for delegating the front end and back end to AI. It&#8217;s not code that will ship to production; it&#8217;s a way to make prototypes or quick demo apps that you&#8217;d otherwise never spend the time to build.</p>



<p>I&#8217;ve enjoyed using models like Claude to help me with my tasks at work because they often uncover possibilities I haven&#8217;t thought about. The conversational style of going back and forth helps me fine-tune my own solution. It&#8217;s not just &#8220;give me code,&#8221; it&#8217;s &#8220;let&#8217;s discuss this architecture&#8221;. At the end of the conversation, we can generate a good draft of a PRD (Product Requirements Document). Notice I don&#8217;t delegate my thinking to it; it&#8217;s a tool that helps me think through solutions or just <a href="https://x.com/trq212/status/2005315275026260309" target="_blank" rel="noreferrer noopener">interview me sometimes</a>.</p>



<p>However, it can be annoying. I&#8217;d like to minimize the number of times I have to tell it &#8220;no, you&#8217;re wrong. The Microsoft documentation for Azure Container Apps does not state X as you said&#8221; 😅. To fix this, I&#8217;ve tried giving an explicit instruction in my system prompts:</p>



<pre class="wp-block-code"><code><em>"It's also very important for you to verify if there is official documentation that supports your claims and statements. Please find official documentation supporting your claims before responding to a user. If there isn't documentation confirming your statement, don't include it in the response."</em></code></pre>
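<p>In Claude Code specifically, one natural home for a standing rule like this is the project&#8217;s <code>CLAUDE.md</code> memory file, which gets loaded into context every session. Here is a minimal sketch (the section name and wording are just an example, not our exact file):</p>

```markdown
<!-- CLAUDE.md (project root), loaded by Claude Code at session start -->
## Verification rules

- Before stating any claim about a library, framework, or cloud service,
  verify that official documentation supports it.
- If no official documentation confirms a statement, leave it out of the
  response instead of guessing.
```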



<p>I&#8217;ve had better results with this, though it&#8217;s still not perfect. In longer conversations, I think it doesn&#8217;t always verify the docs (context limits, perhaps), but sometimes I get the response: &#8220;(&#8230;) Based on my search through the official documentation, I need to be honest with you (&#8230;)&#8221;.</p>



<p>I really find it funny that Claude &#8220;needs&#8221; to be honest with me 😄. Sycophancy is truly annoying, especially since we are talking about AI as a thinking companion. If your AI partner always agrees with you, how useful is it really as a thinking companion?</p>



<h2 class="wp-block-heading">The Human-in-the-Loop Principle</h2>



<p>While Mollick&#8217;s vision of a collaborative future with AI is profoundly optimistic, he is also a realist. One of the most important principles, and a recurring theme in the book, is the absolute necessity of human oversight &#8211; the &#8220;human-in-the-loop&#8221; principle. This is a key quote from the book:</p>



<pre class="wp-block-code"><code><em>For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help — you still want to be that human. So the second principle is to learn to be the human in the loop.</em></code></pre>



<p>One of Mollick&#8217;s key warnings is about <a href="https://www.linkedin.com/posts/emollick_a-fundamental-mistake-i-see-people-building-activity-7153484182134923265-IxAg" target="_blank" rel="noreferrer noopener">falling asleep at the wheel</a>. When AI performs well, humans stop paying attention. This has been referenced by Simon Willison as well, in his recent insightful post <a href="https://simonwillison.net/2025/Dec/31/the-year-in-llms/#the-year-of-yolo-and-the-normalization-of-deviance" target="_blank" rel="noreferrer noopener">2025: The year in LLMs</a>. All I&#8217;m saying is that I understand <code>--dangerously-skip-permissions</code> is useful as a tool when used in a secure sandbox environment. But we should weigh our confidence in the AI&#8217;s output against the autonomy and tools we give it. If we don&#8217;t, we risk using AI on tasks that fall outside the Jagged Frontier, which can lead to security issues, nasty bugs, and a weakened ability to learn.</p>



<p>I say this knowing full well that I tend to trust Claude Opus 4.5 more on any task I give it. So I have to actively force myself to verify its suggestions just as rigorously, and to check which tools I gave it access to and which are denied. For example, I use Claude Code hooks to prevent any&nbsp;<code>appsettings</code>,&nbsp;<code>.env</code>, or similar files from being accessed. I still try to read the LLM reasoning/thinking text, so that I understand it better, and simply out of curiosity as well.</p>
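<p>A hook like that can be sketched as a small script registered under <code>PreToolUse</code> in Claude Code&#8217;s hook settings. This is a minimal sketch, not our exact setup: the blocked file patterns are illustrative, and the <code>tool_input.file_path</code> field and the exit-code-2 deny behavior should be checked against the hooks documentation for your version:</p>

```python
# Sketch of a PreToolUse hook that denies access to likely secrets files.
# Claude Code pipes the pending tool call to the hook as JSON on stdin;
# the "block" exit code (2) denies the call and surfaces stderr to the model.
import json
import re
import sys

# Illustrative patterns: .env variants and appsettings*.json files.
BLOCKED = re.compile(r"(^|/)(\.env[^/]*|appsettings[^/]*\.json)$", re.IGNORECASE)

def is_blocked(path: str) -> bool:
    """True if the path looks like a file that may hold secrets."""
    return bool(BLOCKED.search(path))

def run_hook(stdin=sys.stdin) -> int:
    event = json.load(stdin)  # the tool-call event piped in by Claude Code
    path = event.get("tool_input", {}).get("file_path", "")
    if is_blocked(path):
        print(f"Blocked: {path} may contain secrets", file=sys.stderr)
        return 2  # deny the tool call
    return 0  # allow everything else

# A real hook script would end with: sys.exit(run_hook())
```

<p>The same idea extends to <code>Bash</code> tool calls (matching the command string against those file names), which is worth doing since a shell command can read files too.</p>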



<p>I simply can&#8217;t forget seeing the &#8220;High-agency behavior&#8221; Anthropic examined in the <a href="https://www.anthropic.com/system-cards" target="_blank" rel="noreferrer noopener">Claude Sonnet 4 and Opus 4 System Card</a>. Whistleblowing and other misalignment problems are possible; for example, this is a quote from the Opus 4.6 System Card:</p>



<pre class="wp-block-code"><code><em>In our whistleblowing and morally-motivated sabotage evaluations, we observed a low but persistent rate of the model acting against its operator’s interests in unanticipated ways. Overall, Opus 4.6 was slightly more inclined to this behavior than Opus 4.5.</em></code></pre>



<p>All I&#8217;m saying is let&#8217;s be conscious of these behaviors and of the eval results.</p>



<p>In my opinion, the human-in-the-loop principle is crucial. Don&#8217;t just copy/paste or try to vibe your way into production. Engineers are the ones <strong>responsible</strong> for software systems, not tools or alien minds. If there are users who depend on your software, and your AI code causes an incident in production, you are responsible. Claude or Copilot won&#8217;t wake up at 3 AM if prod is on fire (or maybe <a href="https://learn.microsoft.com/en-us/azure/sre-agent/incident-management?tabs=azmon-alerts" target="_blank" rel="noreferrer noopener">Azure SRE agent</a> will if you pay for it 🤔&#8230;). Having an engineering mindset and being in the driver&#8217;s seat is what I expect from myself and anyone I work with.</p>



<h3 class="wp-block-heading">Critical Thinking</h3>



<p>Within this principle, we have a topic I have a lot of strong opinions on. This quote says it all:</p>



<pre class="wp-block-code"><code><em>LLMs are not generally optimized to say "I don’t know" when they don't have enough information. Instead, they will give you an answer, expressing confidence.</em></code></pre>



<p>Basically, to be the human in the loop, we really must have good critical thinking skills. This ability, plus our experience, brings something very valuable to this AI collaboration &#8211; detecting the &#8220;I don&#8217;t know&#8221;. It may help to know some ways we can <a href="https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations" target="_blank" rel="noreferrer noopener">reduce hallucinations</a> in our prompts. But still, we can&#8217;t blindly believe AI output is correct based on its confidence that the proposed solution works. Now more than ever, we need to keep developing critical thinking skills and apply them when working with AI, so that in the scenarios where it should have responded &#8220;I don&#8217;t know&#8221;, we rely on our own abilities instead.</p>



<p>Sure, there are tasks we are more confident delegating to AI, but for the ones we know fall outside the Jagged Frontier, we must proceed with caution and care. We discussed our <strong>confidence level</strong> in AI output a lot. For example, <a href="https://www.linkedin.com/in/andredsantos/" target="_blank" rel="noreferrer noopener">André Santos</a> said it depends on the task we give it, but <a href="https://www.linkedin.com/in/asoliveira/" target="_blank" rel="noreferrer noopener">André Oliveira</a> also argues that we can only validate the output on topics we know. It serves as an <strong>amplifier</strong> because it&#8217;s only a tool. If the wielder of the tool doesn&#8217;t fact-check the output, we risk believing the hallucinations and false statements/claims. <a href="https://www.linkedin.com/in/pedrovala" target="_blank" rel="noreferrer noopener">Pedro Vala</a> also brought up a really good quote from the <a href="https://www.amazon.com/Agentic-Design-Patterns-Hands-Intelligent/dp/3032014018" target="_blank" rel="noreferrer noopener">Agentic Design Patterns book</a> that is super relevant to this topic:</p>



<pre class="wp-block-code"><code><em>An AI trained on "garbage" data doesn’t just produce garbage-out; it produces plausible, confident garbage that can poison an entire process - Marco Argenti, CIO, Goldman Sachs</em></code></pre>



<p>Now imagine reading AI output that looks okay at first glance but is only plausible garbage. That is a real risk, especially with the AI-generated content already <a href="https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans" target="_blank" rel="noreferrer noopener">available on the internet</a>. Again, I hope developers continue to develop their critical thinking skills and don&#8217;t delegate their thinking to tools. Right now, the only process I have for filtering out garbage on the internet is consuming most content from authors I respect and know for a fact are real people 😅.</p>



<h3 class="wp-block-heading">Disruption in the job market</h3>



<p>Mollick also talks about the disruption in the job market, which is a hot topic in our industry, especially the impact AI has on junior roles. We have debated this in a few sessions of our book club, and again, critical thinking and adaptability are crucial. We simply have to adapt and learn how to use this tool, nothing less, nothing more. How much value we bring to the table when working with AI matters. If you don&#8217;t bring any value and just copy/paste, you are not a valuable professional in my view.</p>



<p>It&#8217;s a good idea to keep <strong>developing our skills and expertise</strong>. Andrej Karpathy talks about an <a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" target="_blank" rel="noreferrer noopener">intelligence &#8220;brownout&#8221; when LLMs go down</a>; this is extremely scary to me, especially if I see this behaviour in juniors or college grads. I truly hope we stop delegating so much intelligence to a tool. I don&#8217;t want engineers to <strong>rely</strong> on LLMs when production is down and on fire. It would be sad to see engineers not knowing how to troubleshoot or fix these incidents in production&#8230; just because AI tools are not available 😐.</p>



<h2 class="wp-block-heading">Centaur vs Cyborg approaches</h2>



<p>The book distinguishes between two ways of working with AI:</p>



<ol style="max-width:985px" class="wp-block-list">
<li><strong>Centaur</strong>: You divide tasks between human and machine. You handle the &#8220;Just me&#8221; tasks (outside the Jagged Frontier), and delegate specific sub-tasks to the AI that you later verify.</li>



<li><strong>Cyborg</strong>: You integrate AI so deeply that the workflow becomes a hybrid, often automating entire processes.</li>
</ol>



<p>For software development, I&#8217;m definitely in the&nbsp;<strong>Centaur</strong>&nbsp;camp right now. We should be careful about what tasks we delegate. Again, remember the <strong>&#8220;falling asleep at the wheel&#8221;</strong>.&nbsp;When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt our learning process and skill development. Or in some scenarios, it can lead to your production database being deleted&#8230;</p>



<p>This is just a tool. We are still responsible at work. If the AI pushes a bug to production,&nbsp;<em>you</em>&nbsp;pushed a bug to production!</p>



<p>The author does give some &#8220;Cyborg examples&#8221; of working with AI, here is a quote from the book:</p>



<pre class="wp-block-code"><code><em>I would become a Cyborg and tell the AI: I am stuck on a paragraph in a section of a book about how AI can help get you unstuck. Can you help me rewrite the paragraph and finish it by giving me 10 options for the entire paragraph in various professional styles? Make the styles and approaches different from each other, making them extremely well written.</em></code></pre>



<p>This is the ideation use case that is super useful when you have writer&#8217;s block, or just want to brainstorm a bit on a given topic. In our industry, a lot of teams are integrating AI into many phases of the SDLC. I haven&#8217;t found many workflows that work well in some parts of the SDLC, since we are focusing on adopting AI for coding and code review. But in most workflows, the cyborg practice is to steer the AI more and manage the tasks where you collaborate with it as a co-worker.</p>



<p>The risk remains even when someone uses cyborg practices but then fails to spot hallucinations or false claims. The takeaway is really to be conscious of our AI adoption and usage. The number one cyborg practice I try to apply naturally is to push back. If I sense something is off, I will disagree with the output and ask the AI to reconsider. This leads to a far more interesting back-and-forth conversation on a given topic.</p>



<h2 class="wp-block-heading">Resources</h2>



<p>Here are some resources if you want to dive deeper:</p>



<ul style="max-width:985px" class="wp-block-list">
<li><a href="https://www.amazon.com/Co-Intelligence-Living-Working-Ethan-Mollick/dp/059371671X" target="_blank" rel="noreferrer noopener">Co-intelligence by Ethan Mollick</a></li>



<li><a href="https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf" target="_blank" rel="noreferrer noopener">Navigating the Jagged Technological Frontier</a></li>



<li><a href="https://www.amazon.com/Agentic-Design-Patterns-Hands-Intelligent/dp/3032014018" target="_blank" rel="noreferrer noopener">Agentic Design Patterns: A Hands-On Guide to Building Intelligent Systems</a></li>



<li><a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" target="_blank" rel="noreferrer noopener">Andrej Karpathy: Software Is Changing (Again)</a></li>



<li><a href="https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged" target="_blank" rel="noreferrer noopener">Centaurs and Cyborgs on the Jagged Frontier</a></li>



<li><a href="https://zed.dev/blog/why-llms-cant-build-software" target="_blank" rel="noreferrer noopener">Why LLMs Can&#8217;t Really Build Software</a></li>



<li><a href="https://openai.com/index/why-language-models-hallucinate/" target="_blank" rel="noreferrer noopener">Why language models hallucinate | OpenAI</a></li>
</ul>



<h2 class="wp-block-heading">Conclusion</h2>



<p>This was a great book; I truly recommend it to anyone with even the slightest interest in AI. Co-intelligence is something we can strive for, focusing on adopting this new tool that can help us develop ourselves, our expertise, and our skills. When it was written, we had GPT-3.5, and GPT-4 was recent, I believe&#8230; now we have GPT-5.3-Codex, Opus 4.6, GLM 4.7, and Kimi K2.5. I mean, in two years things just keep on changing 😅. The Jagged Frontier will keep changing, so this calls for experimentation. AI pioneers will do most of this experimentation, running evals and whatnot, to understand where each type of task falls in the Jagged Frontier. Pay attention to what they share, what works, and what doesn&#8217;t.</p>



<p>AI has augmented my team and me, mostly on &#8220;Centaur&#8221; tasks while we improve our AI fluency and usage. In my personal opinion, I don&#8217;t see us reaching the AGI scenario Ethan talks about in the last chapter. Actually, much of our industry keeps talking about and hyping AGI&#8230; even the exponential growth scenario raises some doubts for me.</p>



<p>But I agree with Ethan when he says: &#8220;No one wants to go back to working six days a week (&#8230;)&#8221; 😅. We should continue to focus on building our own expertise, and not delegate critical thinking to AI. There is a new skill in town: we now have LLM whisperers 😅, and having this skill can indeed augment you even further. Just remember the fundamentals don&#8217;t change. Engineers still need to know those! There are hundreds of &#8220;Vibe Coding Cleanup Specialists&#8221; now 🤣. Let&#8217;s remember to be the human in the loop. Apply critical thinking to any AI output, do fact-checking, and take&nbsp;<strong>ownership</strong>&nbsp;of the final result. Please don&#8217;t create AI slop 😅.</p>



<p>Hope you enjoyed this post! My next blog post will be about how we are using agentic coding tools, so stay tuned! Feel free to share in the comments your opinion too, or reach out and we can have a chat 🙂.</p>



<p>If you&#8217;re interested, check out my latest blog posts about AI:</p>



<ul style="max-width:985px" class="wp-block-list">
<li><a href="https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/" id="https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/" target="_blank" rel="noreferrer noopener">Lessons learned improving code reviews with AI</a></li>



<li><a href="https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/" id="https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/" target="_blank" rel="noreferrer noopener">Becoming augmented by AI</a></li>
</ul>
<p>The post <a href="https://blogit.create.pt/davidpereira/2026/02/08/book-review-co-intelligence-by-ethan-mollick/">Book Review: Co-Intelligence by Ethan Mollick</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2026/02/08/book-review-co-intelligence-by-ethan-mollick/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Lessons learned improving code reviews with AI</title>
		<link>https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/</link>
					<comments>https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Fri, 09 Jan 2026 12:44:41 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[GenAI]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13548</guid>

					<description><![CDATA[<p>Table of Contents Introduction I have loved code reviews for years now, and still to this day, I love seeing good open source PRs! When I say good, I mean really great! We have access to tons of open source code, and the greatest PRs are the ones where you can learn a lot from [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/">Lessons learned improving code reviews with AI</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Table of Contents</h2>



<ul style="max-width:1005px" class="wp-block-list">
<li>Introduction</li>



<li>Why we started experimenting</li>



<li>Our AI code review journey
<ul style="max-width:960px" class="wp-block-list">
<li>Claude Code</li>



<li>Saving learnings in memory</li>



<li>GitHub Copilot</li>



<li>CodeRabbit and Qodo</li>
</ul>
</li>



<li>Tool of choice
<ul style="max-width:960px" class="wp-block-list">
<li>Improving multi-agent collaboration</li>
</ul>
</li>



<li>Resources</li>



<li>Conclusion</li>
</ul>



<h2 class="wp-block-heading">Introduction</h2>



<p>I have loved code reviews for years now, and still to this day, I love seeing good open source PRs! When I say good, I mean really great! We have access to tons of open source code, and the greatest PRs are the ones you can learn a lot from about&nbsp;<strong>how to do it right</strong>. In a sense, this blog post is about just that. This blog post is part of a series where I share how AI is augmenting my work, and what I&#8217;m learning from it. If you&#8217;re interested, you can read the first post here:&nbsp;<a href="https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/" target="_blank" rel="noreferrer noopener">Becoming augmented by AI</a>. In that post, I mention how AI has augmented me with an &#8220;initial code review&#8221;, but now I&#8217;ll go deeper into this topic. I&#8217;ll share our hands-on experience: what works, what doesn&#8217;t, and a healthy dose of my opinions along the way 😄.</p>



<p><strong>Quick disclaimer</strong>: what works for us might not work for you. Your team and coding guidelines are different, and that&#8217;s fine. These are just our honest experiences.</p>



<p>With that said, let&#8217;s dive into why we started incorporating AI tools in our code review process.</p>



<h2 class="wp-block-heading">Why we started experimenting</h2>



<p>I recently watched this amazing&nbsp;<a href="https://www.youtube.com/watch?v=glfB3KLQR7E" target="_blank" rel="noreferrer noopener">video by CodeRabbit</a>. In our team, code review isn&#8217;t really the bottleneck (yet), but it&#8217;s funny because we are also using AI heavily for feature development and trying to improve&#8230; hummm &#8220;velocity&#8221; 🤣.</p>



<p>Anyway, I understand many teams nowadays have increased the number of PRs created, and that some PRs simply get a blind LGTM.</p>



<figure class="wp-block-image size-large is-resized"><img fetchpriority="high" decoding="async" width="300" height="168" src="https://blogit.create.pt/wp-content/uploads/2026/01/giphy.gif" alt="" class="wp-image-13589" style="aspect-ratio:1.785770356097909;width:464px;height:auto" /></figure>



<p>Maybe some PRs just have increasingly more AI slop&#8230; which wears down the senior engineers tasked with code review 😅. Not all professionals&nbsp;<strong>want to do it right</strong>, or maybe they just want to ship because their company&#8217;s &#8220;productivity metrics&#8221; incentivize merging more and more PRs 😅. Honestly, it&#8217;s&nbsp;<a href="https://simonwillison.net/2025/Dec/18/code-proven-to-work/" target="_blank" rel="noreferrer noopener">our job to deliver code we have proven to work</a>; I fully agree with Simon Willison. Throwing slop over to the engineers who do code review is unprofessional, just as much as throwing untested features over to QA 😐. In our case, we changed to having a dedicated dev responsible for all code reviews, and we don&#8217;t have that many per day. We simply wanted to improve code quality and reduce bugs, while keeping code review as an educational process for junior engineers.</p>



<p>About five months ago, our team started experimenting with AI tools (GitHub Copilot, Claude Code, Codacy, Qodo, and CodeRabbit) to see how they could help us improve our review process without adding a ton of noise. There are more tools we didn&#8217;t try, like Augment Code and Greptile (which has some cool&nbsp;<a href="https://www.greptile.com/benchmarks" target="_blank" rel="noreferrer noopener">benchmarks</a>), but hopefully the lessons we learned will be useful to you either way.</p>






<h2 class="wp-block-heading">Our AI code review journey</h2>



<p>We already talked about our&nbsp;<a href="https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/#custom-instructions" target="_blank" rel="noreferrer noopener">custom instructions</a>&nbsp;in the last post, to some extent. Specifically for code review, we took a phased approach and started comparing different tools:</p>



<ol style="max-width:965px" class="wp-block-list">
<li>Started with&nbsp;<a href="https://docs.github.com/en/copilot/concepts/agents/code-review" target="_blank" rel="noreferrer noopener">GitHub Copilot Code Review</a></li>



<li>Integrated Claude Code with GitHub and started comparing code reviews from both tools</li>



<li>Added CodeRabbit, Qodo and Codacy to spot differences between them</li>



<li>Refined prompts/instructions/configs for some tools</li>
</ol>



<p>We didn&#8217;t invest equal time in all of them, though. Copilot and Claude ended up getting most of our attention, especially since we started using Copilot Code Review (CCR) when it was in public preview. Overall, we experimented with these tools in 30+ PRs, and made 20+ PRs to refine our prompts/instructions/agents.</p>



<h3 class="wp-block-heading">Claude Code</h3>



<p>Let&#8217;s go through Claude Code first. Here is a snippet of our&nbsp;<code>code-review</code>&nbsp;Claude Code custom slash command:</p>



<pre class="wp-block-code"><code>---
allowed-tools: Bash(dotnet test), Read, Glob, Grep, LS, Task, Explore, mcp.....
description: Perform a comprehensive code review of the requested PR or code changes, taking into consideration code standards
---

## Role

You are a world-class autonomous code review agent. You operate within a secure GitHub Actions environment.
Your analysis is precise, your feedback is constructive, and your adherence to instructions is absolute.
You do not deviate from your programming. You are tasked with reviewing a GitHub Pull Request.

## Primary Directive

Your sole purpose is to perform a comprehensive and constructive code review of this PR, and post all feedback and suggestions using the **GitHub review system** and provided tools.
All output must be directed through these tools. Any analysis not submitted as a review comment or summary is lost and constitutes a task failure.

## Input data
PR NUMBER: $ARGUMENTS

You MUST follow these steps to review the PR:
1. **Start a review**: Use `mcp__github__create_pending_pull_request_review` to begin a pending review
2. **Get diff information**: Use `mcp__github__get_pull_request_diff` to understand the code changes and line numbers
3. **Get list of files**: If you can't get diff information, use `mcp__github__get_pull_request_files` to get the list of files that were added, removed, and changed in the pull request
4. **Add comments**: Use `mcp__github__add_comment_to_pending_review` for each specific piece of feedback on particular lines
5. **Submit the review**: Use `mcp__github__submit_pending_pull_request_review` with event type "COMMENT" (not "REQUEST_CHANGES") to publish all comments as a non-blocking review

You can find all the code review standards and guidelines that you MUST follow here: `.github/instructions/code-review.instructions.md`

## Output format

**CRITICAL RULE** - DO NOT include compliments, positive notes, or praise in your review comments.
Be thorough but filter your comments aggressively - quality over quantity. Focus ONLY on issues, improvements, and actionable feedback.

**Output Violation Examples** (DO NOT DO THIS):
`The code follows best practices by...`
`Positive changes/notes`

**Important**: Submit as "COMMENT" type so the review doesn't block the PR.</code></pre>



<p>Yes, some of the wording might be weird, like praising the AI with &#8220;You are a world-class&#8221; or &#8220;your adherence to instructions is absolute&#8221;. As with the uppercase &#8220;DO NOT&#8221; and &#8220;IMPORTANT&#8221; we mentioned, I can&#8217;t explain some of this stuff or find enough research that claims it affects how the LLM pays&nbsp;<strong>attention</strong>&nbsp;to instructions. I just experiment and learn, and&nbsp;<a href="https://github.com/google-github-actions/run-gemini-cli/blob/main/examples/workflows/pr-review/gemini-review.toml" target="_blank" rel="noreferrer noopener">Gemini</a>&nbsp;likes to use this phrase for code reviews as well 😄 (as do 115 other devs on GitHub 😅).</p>



<p>To be honest, we still have too much noise in AI PR comments, or just tons of fluff. The bright side is, at least the compliments have kind of disappeared 😅 . You might enjoy getting this:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="831" height="182" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-2.png" alt="" class="wp-image-13558" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-2.png 831w, https://blogit.create.pt/wp-content/uploads/2026/01/image-2-300x66.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-2-768x168.png 768w, https://blogit.create.pt/wp-content/uploads/2026/01/image-2-696x152.png 696w" sizes="(max-width: 831px) 100vw, 831px" /></figure>



<p>I don&#8217;t 🤣, especially when 1 PR has 5 of these. I do praise comments for my team yes, because positive comments are good&#8230; when it comes from a human who knows the other person, IMO. Also, there are many comments that don&#8217;t belong in a PR, they belong in a linter or other tools. We have&nbsp;<a href="https://csharpier.com/docs/About" target="_blank" rel="noreferrer noopener">CSharpier</a>&nbsp;and&nbsp;<a href="https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/overview?tabs=net-10" target="_blank" rel="noreferrer noopener">.NET analyzers</a>&nbsp;for that.</p>



<p>It also doesn&#8217;t have the best GitHub integration for now, at least we&#8217;ve had some problems (<a href="https://github.com/anthropics/claude-code-action/issues/584" target="_blank" rel="noreferrer noopener">400 errors</a>,&nbsp;<a href="https://github.com/anthropics/claude-code-action/issues/589" target="_blank" rel="noreferrer noopener">branch 404 errors</a>) with the GitHub action. Like&nbsp;<a href="https://github.com/anthropics/claude-code-action/issues/548" target="_blank" rel="noreferrer noopener">not having access to GitHub mcp tools</a>, even though we set it in&nbsp;<code>allowed-tools</code>&nbsp;option.</p>



<figure class="wp-block-image size-full"><img decoding="async" width="782" height="72" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-1.png" alt="" class="wp-image-13557" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-1.png 782w, https://blogit.create.pt/wp-content/uploads/2026/01/image-1-300x28.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-1-768x71.png 768w, https://blogit.create.pt/wp-content/uploads/2026/01/image-1-696x64.png 696w" sizes="(max-width: 782px) 100vw, 782px" /></figure>



<p>Anyway, we iterated a lot on instructions and prompts so far, since we use them for both Claude and Copilot. Here is a quick recap of what features we use from Claude Code:</p>



<ul style="max-width:965px" class="wp-block-list">
<li>Sub-agents (custom and built-in)</li>



<li>Built-in&nbsp;<code>/review</code>&nbsp;and&nbsp;<a href="https://www.claude.com/blog/automate-security-reviews-with-claude-code" target="_blank" rel="noreferrer noopener">security review</a>&nbsp;commands</li>



<li>Custom slash commands (<code>code-review.md</code>)</li>



<li>Plugins, specifically&nbsp;<a href="https://github.com/anthropics/claude-code/blob/main/plugins/code-review/commands/code-review.md" target="_blank" rel="noreferrer noopener">code-review plugin</a>&nbsp;authored by Boris Cherny</li>
</ul>



<p>We leverage those 2 built-in commands, in parallel, but it&#8217;s just to see if we get any good feedback. Our custom code review slash command already does a good review following our guidelines, plus the &#8220;code-review&#8221; plugin from Boris works very well with parallel agents. We basically went through the famous spiral:</p>



<pre class="wp-block-code"><code>Write CLAUDE.md -&gt; Ask for code review -&gt; Find bad comments and noise we don't want -&gt; Re-write CLAUDE.md and other files -&gt; Do some meta-prompting -&gt; Repeat</code></pre>



<p>Like I said, our custom code review prompt/command has evolved over time and was refined whenever we learned something new. We started with this&nbsp;<a href="https://github.com/anthropics/claude-code-action/issues/60#issuecomment-2952771401" target="_blank" rel="noreferrer noopener">incredible suggestion</a>&nbsp;to use the GitHub MCP. We also searched other GitHub repos, mostly .NET-related, to see how they set up their instructions, in case they had anything particular around code review (e.g. for GitHub Copilot). I find&nbsp;<a href="https://github.com/dotnet/aspire/blob/main/.github/copilot-instructions.md">.NET Aspire</a>&nbsp;to be a super cool real-life example 🙂 . I think a lot of their AI adoption is led by David Fowler, so I often check their PRs to see what we can learn from them, e.g.&nbsp;<a href="https://github.com/dotnet/aspire/pull/13361" target="_blank" rel="noreferrer noopener">this one</a>.</p>



<p>Anyway, our prompt was still a bit vague, so we had some chats with Claude, good old meta-prompting 🙂. After a while, Claude suggested a new file that holds all the coding standards and bad smells we want to avoid &#8211;&nbsp;<code>code-review.instructions.md</code>. It lives under&nbsp;<code>.github/instructions</code>, but that doesn&#8217;t matter, Claude can use it. The bad smells are specific, and we see them referenced quite often in our PRs now. Still, we don&#8217;t have a perfect solution for overly large PRs. For those cases, we simply communicate more often or have more than one dev working on the PR. When a feature genuinely requires lots of new code, the best forum to debate and provide actionable feedback is talking. Sure, this isn&#8217;t always possible; people are busy or prefer async work. In our team, hopping on a call or demoing the PR helps make large PRs way more digestible. Draft PRs also work somewhat, to get some feedback early on.</p>



<h4 class="wp-block-heading">Avoiding noise comments</h4>



<p>Our biggest lesson learned here is running our custom code review slash command locally and using sub-agents. Locally, we can provide the proper context for the review; the rest is the agent using tools and doing reasoning. No noise gets sent to GitHub comments because all the back-and-forth happens in the chat, plus right now Claude Code works better locally than on GitHub Actions. Sub-agents have been amazing, since the main reason Claude Code uses them is context management. Now that there is a built-in&nbsp;<code>Explore</code>&nbsp;sub-agent, our code review command uses it to run Explore sub-agents in parallel (with Haiku 4.5) without clogging up the main context window.</p>



<p>I&#8217;ve recently learned of&nbsp;<a href="https://blog.sshh.io/i/177742847/custom-subagents" target="_blank" rel="noreferrer noopener">other devs using a different workflow</a>, basically leveraging the&nbsp;<code>Task</code>&nbsp;tool for the main agent to spawn sub-agents. Whichever way you do it, I recommend using a sub-agent focused on exploring the codebase and the potential impacts of the PR.</p>



<h3 class="wp-block-heading">Saving learnings in memory</h3>



<p>Every once in a while, once we&#8217;ve merged a few PRs, we use Claude to improve itself based on those PRs. This is our prompt:</p>



<pre class="wp-block-code"><code>Please look at the 5 most recent PRs in our GitHub repository, and check for learnings in order to improve the code review workflow. Please ultrathink on this task, so that all necessary memory files are updated taking into account these learnings, like @CLAUDE.md and @.github\instructions\ Focus on seeing code review comments that were good and made it into the codebase afterwards (e.g. coding standards violations). Ignore bad comments that were resolved with a "negative comment" or thumbs down emoji. Ask me clarifying questions before you begin. YOU MUST create a changelog file explaining why you made these edits to instruction files. Each learning must reference a PR that exists. The best is for you to link the exact comment that you used for a given learning</code></pre>



<p>At the end of the session, we usually have a few items that are good enough to add. Most are&nbsp;<strong>learnings around bugs</strong>&nbsp;we can catch earlier; some are coding standards. Honestly, a lot of suggestions aren&#8217;t what I want, or I just think they won&#8217;t be useful in future code reviews. But doing this has been important for me to take a step back and think about what we can learn from the work we&#8217;ve already merged. I reflect on it and then discuss with my team. I&#8217;ve seen others talk about this idea and keep a&nbsp;<code>learnings.md</code>, e.g.&nbsp;<a href="https://github.com/nibzard/awesome-agentic-patterns/blob/main/LEARNINGS.md" target="_blank" rel="noreferrer noopener">this repo</a>. At least this process seems better for us than simply using emojis to give feedback, an issue the&nbsp;<a href="https://www.coderabbit.ai/blog/why-emojis-suck-for-reinforcement-learning" target="_blank" rel="noreferrer noopener">CodeRabbit blog</a>&nbsp;also alludes to 😅.</p>



<h3 class="wp-block-heading">GitHub Copilot</h3>



<p>Copilot&#8217;s code review features were super basic in the beginning. We tried and experimented with it a lot when it came out. It only caught nitpicks,&nbsp;<code>console.log</code>&nbsp;calls and typos; it really wasn&#8217;t helpful in any other area. Sure, catching those is good, but a human reviewer catches them in the first pass too. It didn&#8217;t support all languages, so we often got 0 comments or feedback. Then, in the last few months, it became completely different: night and day.</p>



<p>If you have seen GitHub Universe, you know <a href="https://dev.to/bolt04/github-universe-2025-recap-9gl" target="_blank" rel="noreferrer noopener">what&#8217;s new</a>. But in case you don&#8217;t know, the GitHub team has invested heavily in Copilot code review and coding agent, and it shows. The code review agent is often right in every comment, it makes suggestions that are actually based on our instructions and memory files, meaning our PRs follow consistent code style and team conventions (with a link to these&nbsp;<a href="https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions" target="_blank" rel="noreferrer noopener">docs</a>).</p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="797" height="356" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-4.png" alt="" class="wp-image-13562" style="aspect-ratio:2.2388195797239607;width:799px;height:auto" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-4.png 797w, https://blogit.create.pt/wp-content/uploads/2026/01/image-4-300x134.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-4-768x343.png 768w, https://blogit.create.pt/wp-content/uploads/2026/01/image-4-696x311.png 696w" sizes="(max-width: 797px) 100vw, 797px" /></figure>



<p>And the agent session is somewhat transparent, since you can view it in GitHub actions now:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="998" height="262" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-3.png" alt="" class="wp-image-13559" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-3.png 998w, https://blogit.create.pt/wp-content/uploads/2026/01/image-3-300x79.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-3-768x202.png 768w, https://blogit.create.pt/wp-content/uploads/2026/01/image-3-696x183.png 696w" sizes="(max-width: 998px) 100vw, 998px" /></figure>



<p>I mean &#8220;somewhat&#8221; because there are things I can&#8217;t configure, just like Claude Code and most tools, I guess 😅. In the logs I can see the option&nbsp;<code>UseGPT5Model=false</code>, and that it&#8217;s using Sonnet 4.5. There is also this &#8220;MoreSeniorReviews&#8221; flag that I couldn&#8217;t find any info on, and believe me&#8230; I wanted to because it was set to false 🤣 &#8211; the logs show <code>ccr[MoreSeniorReviews=false;EnableAgenticTools=true;EnableMemoryUsage=false...</code></p>



<p>Are you telling me there could be a hidden way to get a more senior review&#8230; sign me up! Jokes aside, I couldn&#8217;t find much info on the endpoint&nbsp;<code>api.githubcopilot.com/agents/swe</code>&nbsp;of CAPI (presumably Copilot API) the Autofind agent was calling, and the contents of the&nbsp;<code>ccr/callback</code>&nbsp;saved in&nbsp;<code>results-agent.json</code>. I can only hope some of these options are configurable in the future.</p>



<p>I checked the&nbsp;<a href="https://docs.github.com/en/copilot/how-tos/provide-context/use-mcp/extend-copilot-chat-with-mcp#remote-server-configuration-example-with-oauth" target="_blank" rel="noreferrer noopener">MCP docs</a>, hoping to find details about these options, but no luck.</p>



<p>Anyway, it also now has access to CodeQL and some linters, which is amazing because we didn&#8217;t have this before. It&#8217;s how we are able to leverage CodeQL analysis in all our PRs now; we couldn&#8217;t do this with any other AI code review tool. We also see that it calls the &#8220;store_comment&#8221; tool during its session, and only submits the comments to GitHub at the end. This is useful because sometimes it stores a comment thinking something is wrong in the implementation, then reads more code into context that invalidates the stored comment, so it never submits that comment on the PR. Much like the CodeRabbit validation agent, this reduces the amount of noise we get in PRs.</p>
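<p>As a rough illustration of that &#8220;store first, submit later&#8221; pattern (the names here are ours, not Copilot&#8217;s actual internals):</p>

```python
class PendingComments:
    """Draft comments recorded during review; later context can
    invalidate a draft so it is never published to the PR."""

    def __init__(self):
        self._drafts = {}  # comment id -> comment dict
        self._next_id = 0

    def store_comment(self, path, line, body):
        cid = self._next_id
        self._next_id += 1
        self._drafts[cid] = {"path": path, "line": line, "body": body}
        return cid

    def invalidate(self, cid):
        # Called when newly-read code shows the draft was wrong.
        self._drafts.pop(cid, None)

    def submit(self):
        # Only the surviving drafts reach the PR.
        published = list(self._drafts.values())
        self._drafts.clear()
        return published
```

<p>The interesting part is purely the deferral: nothing is irreversible until <code>submit()</code>, so a later discovery can retract an earlier conclusion for free.</p>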



<h3 class="wp-block-heading">CodeRabbit and Qodo</h3>



<p>Let&#8217;s start with the cool features CodeRabbit has:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li>Code diagrams in Mermaid</li>



<li>Generates a poem! Yes, a poem for my PR</li>



<li>Summary of changes added to the description</li>
</ul>



<p>Now&#8230; I gotta be honest, I don&#8217;t care about any of them 😅. They are cool, but I only glance at the poem or ignore it. I never read or care about the summary; I get one from Copilot and edit it myself. All the code and sequence diagrams I saw generated in our PRs were simply not useful, though a lot of them were for front-end code. I simply don&#8217;t look at them later, and if it makes sense, we update our architecture diagrams once the code is merged. With that said, the code suggestions and feedback are obscenely good. By far the best AI code review tool when it comes to actionable and valuable feedback/suggestions (by a long shot)! Even though we didn&#8217;t configure&nbsp;<code>.coderabbit.yaml</code>&nbsp;or try to optimize it, CodeRabbit already uses&nbsp;<a href="https://docs.coderabbit.ai/integrations/knowledge-base#code-guidelines:-automatic-team-rules" target="_blank" rel="noreferrer noopener">Claude and Copilot instructions</a>, so the work we did on those was probably picked up by CodeRabbit. In some of our PRs it caught some nasty bugs and gave super useful feedback. Our team was impressed!</p>



<p>The insights CodeRabbit adds during code review piqued my interest. I read a few of their blog posts on context engineering like&nbsp;<a href="https://www.coderabbit.ai/blog/context-engineering-ai-code-reviews" target="_blank" rel="noreferrer noopener">this one</a>, where I found it interesting that there is a separate validation agent before submitting comments. This is probably why they maintain a high signal-to-noise ratio. I also read their open-source version of CodeRabbit, they have some&nbsp;<a href="https://github.com/coderabbitai/ai-pr-reviewer/blob/main/src/prompts.ts" target="_blank" rel="noreferrer noopener">prompts</a>&nbsp;there. I know it&#8217;s old, but it&#8217;s what I have access to. I especially like the instructions that we also have 😅 &#8220;Do NOT provide general feedback, summaries, explanations of changes, or praises for making good additions&#8221;.</p>



<p>We basically tried to have Claude and Copilot understand our large codebase, not focusing only on the PR diff. It&#8217;s harder, and we still have a lot to improve here.&nbsp;<a href="https://www.coderabbit.ai/blog/how-coderabbit-delivers-accurate-ai-code-reviews-on-massive-codebases" target="_blank" rel="noreferrer noopener">CodeRabbit claims</a>&nbsp;to be great at understanding large codebases; I don&#8217;t see any research backing this, just opinions. But yes, we humans don&#8217;t like large PRs either:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="638" height="436" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-7.png" alt="" class="wp-image-13571" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-7.png 638w, https://blogit.create.pt/wp-content/uploads/2026/01/image-7-300x205.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-7-615x420.png 615w, https://blogit.create.pt/wp-content/uploads/2026/01/image-7-218x150.png 218w" sizes="(max-width: 638px) 100vw, 638px" /></figure>



<p>In my experience, there weren&#8217;t many large PRs where CodeRabbit&#8217;s review was clearly better than Claude Code&#8217;s or Copilot&#8217;s. But one thing we liked a lot is that it uses&nbsp;<strong>collapsed sections</strong>&nbsp;in markdown very well, for example:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="897" height="457" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-5.png" alt="" class="wp-image-13563" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-5.png 897w, https://blogit.create.pt/wp-content/uploads/2026/01/image-5-300x153.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-5-768x391.png 768w, https://blogit.create.pt/wp-content/uploads/2026/01/image-5-824x420.png 824w, https://blogit.create.pt/wp-content/uploads/2026/01/image-5-696x355.png 696w" sizes="(max-width: 897px) 100vw, 897px" /></figure>



<p>But I mean, we did have cases where we tried to use Claude Code for code review on a PR that had already been reviewed by CodeRabbit, and ~60% of the context window was comments made by CodeRabbit. All that markdown ain&#8217;t friendly for AI with limited context windows. There were times I swear I could see Claude behind every word CodeRabbit wrote, with the &#8220;You&#8217;re absolutely correct&#8221; 🤣, e.g.</p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="782" height="164" src="https://blogit.create.pt/wp-content/uploads/2026/01/image-6.png" alt="" class="wp-image-13564" style="width:836px;height:auto" srcset="https://blogit.create.pt/wp-content/uploads/2026/01/image-6.png 782w, https://blogit.create.pt/wp-content/uploads/2026/01/image-6-300x63.png 300w, https://blogit.create.pt/wp-content/uploads/2026/01/image-6-768x161.png 768w, https://blogit.create.pt/wp-content/uploads/2026/01/image-6-696x146.png 696w" sizes="(max-width: 782px) 100vw, 782px" /></figure>



<p>But it could be GPT models or whatever, we never truly know what is behind these products 🙂.</p>



<h4 class="wp-block-heading">Qodo</h4>



<p>As for Qodo, we liked the fact that it checks for compliance and flags violations as non-compliant (no other tool had this built in). This was previously just a bullet point in our markdown file. The code review feedback was good; sometimes we ended up applying the changes Qodo suggested in its comments. After reading more about what compliance checks Qodo does, we improved by adding specific instructions to our&nbsp;<code>code-review.instructions.md</code>&nbsp;for ISO 9001, GDPR and others:</p>



<pre class="wp-block-code"><code>## Regulatory Compliance Checks

### Data Protection (GDPR/HIPAA/PCI-DSS)
- Does this code handle PII (Personally Identifiable Information)?
- Are sensitive fields properly encrypted at rest and in transit?
- Is data retention policy followed (deletion after X days)?
- Are audit logs created for data access?
- Is data anonymization/pseudonymization applied where required?

### Security Standards (SOC 2 / ISO 27001)
- Are all external API calls wrapped with proper error handling?
- Is input validation present for all user inputs?
- Are authentication checks present on all sensitive endpoints?
- Are secrets/credentials stored securely (no hardcoding)?
- Is sensitive data logged or exposed in error messages?</code></pre>
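<p>Some of these checklist items can even become cheap automated pre-review passes, so the AI reviewer spends its attention elsewhere. A hedged sketch for the &#8220;no hardcoded secrets&#8221; item, scanning only the lines a diff adds; the patterns are illustrative examples, not a real secret scanner (CodeQL and dedicated scanners do this far better):</p>

```python
import re

# Example patterns only -- a real scanner has a much larger ruleset.
SECRET_PATTERNS = [
    re.compile(r"(password|passwd|pwd)\s*=\s*[\"'][^\"']+[\"']", re.I),
    re.compile(r"(api[_-]?key|secret)\s*=\s*[\"'][^\"']+[\"']", re.I),
]

def flag_added_lines(diff_text):
    """Return the added diff lines (prefix '+', excluding the '+++'
    file header) that match a secret pattern."""
    hits = []
    for line in diff_text.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pat in SECRET_PATTERNS:
            if pat.search(line):
                hits.append(line)
                break
    return hits
```

<p>Anything a regex or linter can catch deterministically is better kept out of the LLM&#8217;s job description entirely, same reasoning as our CSharpier/analyzer split earlier.</p>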



<p>We kept experimenting with Qodo for longer than CodeRabbit, but the insights and feedback never reached the level of CodeRabbit. It was still a good tool that improved our codebase and sparked good discussions.</p>



<h2 class="wp-block-heading">Tool of choice</h2>



<p>Our prompts/instructions can still be improved, of course. We&#8217;ve experimented with different prompts, memory and instruction files. We&#8217;ve also researched how other teams use AI for code review, and how tools like CodeRabbit do context engineering. All of this is because our goal is to continue to improve our software development process and ensure high quality. Adopting new tools is a way of achieving this goal. Given that most AI code review tools have a price tag, we decided to focus on using only one/two tools and optimizing them. Yes, it&#8217;s Claude Code and GitHub Copilot 😄. I basically use 100% of both Copilot and Claude every month, but I get more requests from Claude even though I hit the weekly rate limit every time.</p>



<p>We know CodeRabbit is amazing, and these paid AI tools will continue getting better. There is actually a new tool supporting code review we didn&#8217;t use,&nbsp;<a href="https://www.augmentcode.com/product/code-review" target="_blank" rel="noreferrer noopener">Augment Code</a>&nbsp;(these AI companies move so fast 😅). No amount of customizing our setup with Claude or Copilot will reach the same output as these specific code review paid tools. But for us, it makes more sense to pay for one tool, for example, and leverage it in multiple steps of our software development lifecycle.</p>



<h3 class="wp-block-heading">Improving multi-agent collaboration</h3>



<p>Claude and Copilot are working very well for our code review process. But like I&#8217;ve been saying, there is work to do. We learned a lot from using each tool, but there are more areas to improve, at least in Claude Code, since we have more flexibility there. I&#8217;m currently looking at implementing the &#8220;Debate and Consensus&#8221; multi-agent design pattern (<a href="https://arxiv.org/abs/2406.11776" target="_blank" rel="noreferrer noopener">Google DeepMind paper</a>&nbsp;and&nbsp;<a href="https://arxiv.org/abs/2509.11035" target="_blank" rel="noreferrer noopener">Free-MAD</a>), basically a&nbsp;<a href="https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns#group-chat-orchestration" target="_blank" rel="noreferrer noopener">group chat orchestration</a>. I just want to try it out; I&#8217;m not sure I&#8217;ll get better code reviews by having different agents (e.g. Security, Quality and Performance) debate and review the code from different perspectives. If they run sequentially, the quality agent can have questions for the performance agent, and each can agree or disagree with the reported issues. We can try out LLM-as-a-Judge as well, to focus on reducing noise and enforcing code quality standards.</p>
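<p>A toy sketch of how that debate could be orchestrated: perspective agents each report issues, then every reported issue is put to a vote, and only issues with majority agreement survive. Plain functions stand in for the LLM agents here; the control flow is the point, not the stub reviewers:</p>

```python
def debate(reviewers, code):
    """Collect issues from every reviewer, then keep only the issues
    a strict majority of reviewers independently agree on."""
    issues = []
    for reviewer in reviewers:
        issues.extend(reviewer(code))
    consensus = []
    for issue in set(issues):
        votes = sum(1 for r in reviewers if issue in r(code))
        if votes * 2 > len(reviewers):  # strict majority
            consensus.append(issue)
    return sorted(consensus)

# Stub perspective agents -- real ones would be LLM calls with
# Security/Quality/Performance system prompts:
security = lambda code: ["hardcoded secret"] if "password=" in code else []
quality  = lambda code: ["hardcoded secret", "long method"] if "password=" in code else []
perf     = lambda code: []
```

<p>The consensus step doubles as a noise filter: an issue only one perspective believes in never reaches the PR, which is exactly the failure mode we fight with single-agent reviews.</p>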



<p>Anyway, we&#8217;ll continue learning, optimizing, and improving the way we work 🙂.</p>



<h2 class="wp-block-heading">Resources</h2>



<ul style="max-width:1005px" class="wp-block-list">
<li><a href="https://graphite.com/blog/ai-wont-replace-human-code-review" target="_blank" rel="noreferrer noopener">Why AI will never replace human code review</a></li>



<li><a href="https://www.youtube.com/watch?v=-GIiTfKZx6M" target="_blank" rel="noreferrer noopener">AI Code Reviews with CodeRabbit&#8217;s Howon Lee</a></li>



<li><a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" target="_blank" rel="noreferrer noopener">CodeRabbit report: AI code creates 1.7x more problems</a></li>



<li><a href="https://awesomereviewers.com/reviewers/" target="_blank" rel="noreferrer noopener">Awesome reviewers GH repo</a></li>



<li><a href="https://www.youtube.com/watch?v=nItsfXwujjg" target="_blank" rel="noreferrer noopener">Anthropic’s NEW Claude Code Review Agent (Full Open Source Workflow)</a></li>



<li><a href="https://blog.sshh.io/p/how-i-use-every-claude-code-feature" target="_blank" rel="noreferrer noopener">How I Use Every Claude Code Feature</a></li>
</ul>



<h2 class="wp-block-heading">Conclusion</h2>



<p>The number one thing we learned is:&nbsp;<strong>experimentation is king</strong>. Like we talked before, the Jagged Frontier changes with every model release. Claude Opus 4.5 behaves a bit differently, for example, on&nbsp;<a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/claude-4-best-practices#tool-usage-and-triggering" target="_blank" rel="noreferrer noopener">tool triggering</a>&#8230; maybe we can stop shouting and being aggressive 🤣. We must experiment and keep learning. We can&#8217;t calibrate the prompt once and expect the best result.</p>



<p>For now we are quite happy, the human reviewer has more time to focus on design decisions and discuss trade-offs with the author of the PR. I don&#8217;t envision a future where AI does 100% of the code review.</p>



<p>If you&#8217;re considering AI for code reviews, my advice is simple: just try it. Pick one tool, run a one-month pilot, and see what happens. The worst case is you turn it off. The best case is that your team becomes augmented and probably improves code quality.</p>



<p>My next blog post in this series will be about how we are using agentic coding tools! Are you using AI code review tools? I&#8217;d love to hear from you what your experience has been. Leave a comment and let&#8217;s chat 🙂 .</p>



<p></p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/">Lessons learned improving code reviews with AI</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2026/01/09/lessons-learned-improving-code-reviews-with-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Becoming augmented by AI</title>
		<link>https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/</link>
					<comments>https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Wed, 10 Sep 2025 17:24:13 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[GenAI]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13531</guid>

					<description><![CDATA[<p>Table of Contents Introduction We&#8217;re deep into Co-Intelligence in Create IT&#8217;s book club — definitely worth your time! Between that and the endless stream of LLM content online, I&#8217;ve been in full research mode. Still, I can&#8217;t just watch and hear others talk about these tools, I must experiment myself and learn how to use [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/">Becoming augmented by AI</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Table of Contents</h2>



<ul style="max-width:1005px" class="wp-block-list">
<li>Introduction</li>



<li>The &#8220;Jagged Frontier&#8221; concept</li>



<li>Becoming augmented by AI
<ul style="max-width:960px" class="wp-block-list">
<li>AI as a co-worker</li>



<li>AI as a co-teacher</li>
</ul>
</li>



<li>My augmentation list
<ul style="max-width:960px" class="wp-block-list">
<li>Custom instructions</li>



<li>Meta-prompting</li>
</ul>
</li>



<li>Resources</li>



<li>Conclusion</li>
</ul>



<h2 class="wp-block-heading">Introduction</h2>



<p>We&#8217;re deep into <a href="https://www.amazon.com/-/pt/dp/059371671X/ref=sr_1_1">Co-Intelligence</a> in Create IT&#8217;s book club — definitely worth your time! Between that and the endless stream of LLM content online, I&#8217;ve been in full research mode. Still, I can&#8217;t just watch and hear others talk about these tools, I must experiment myself and learn how to use them for my use cases.</p>



<p>Software development is complex. My job isn&#8217;t just churning out code, but there are many concepts in this book that we&#8217;ve internalized and started adopting. In this post, I&#8217;ll share my opinions and some of the practical guidelines our team has been following to be augmented by AI.</p>



<h2 class="wp-block-heading">The &#8220;Jagged Frontier&#8221; concept</h2>



<p>The Jagged Frontier described by the author Ethan Mollick is an amazing concept in my opinion. It describes how tasks that appear to be of similar difficulty may be performed either better or worse by humans using AI. Due to the &#8220;jagged&#8221; nature of the frontier, the same knowledge workflow can have tasks on both sides of the frontier, according to a <a href="https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf">publication the author took part in</a>.</p>



<p>This leads to the&nbsp;<strong>Centaur vs. Cyborg</strong>&nbsp;distinction which is really interesting. Using both approaches (deeply integrated collaboration and separation of tasks) seems to be the goal to achieve co-intelligence. One very important Cyborg practice seen in that publication is &#8220;push-back&#8221; and &#8220;demanding logic explanation&#8221;, meaning we disagree with the AI output, give it feedback, and ask it to reconsider and explain better. Or as I often do, ask it to double-check with official documentation that what it&#8217;s telling me is correct. It&#8217;s also important to understand that this frontier can change as these models improve. Hence, the focus on experimentation to understand where the Jagged Frontier lies in each LLM. It&#8217;s definitely knowledge that everyone in the industry right now wants to acquire (maybe share it afterwards 😅).</p>



<h2 class="wp-block-heading">Becoming augmented by AI</h2>



<p>I&#8217;m aware of the marketed productivity gains, where&nbsp;<a href="https://github.blog/news-insights/research/research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">GitHub Copilot usage makes devs 55% faster</a>, and other studies that have been posted about GenAI increasing productivity. I&#8217;m also aware of the studies claiming the opposite 😄 like the&nbsp;<a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">METR study</a>&nbsp;showing AI makes devs&nbsp;<strong>19% slower</strong>. However, I don&#8217;t see 55% productivity gains for myself, and I don&#8217;t think it makes me slower either.</p>



<p>In my opinion, productivity gains aren&#8217;t measured by producing more code. Number of PRs? Nope. Acceptance rate for AI suggestions? Definitely not! I firmly believe the less code, the better. The less slop, the better too 😄. I&#8217;m currently focused on assessing&nbsp;<strong>DORA metrics</strong>&nbsp;and others for my team, because we want to measure whether AI-assisted coding, and the other ways we use AI as an augmentation tool, actually improves those metrics or makes them worse. The rest of the marketing and hype doesn&#8217;t matter.</p>



<p>Ethan Mollick provides numerous examples and research on how professionals across industries are already leveraging AI tools, like the Cyborg approach. But if we focus on our software industry, what does it mean for a tech lead to be augmented by AI? What tasks would be good to involve an AI in without compromising quality?</p>



<h3 class="wp-block-heading">AI as a co-worker</h3>



<p>For a tech lead working with Azure, an important skill is knowing how to leverage the right Azure services to build, deploy, and manage a scalable solution. So it becomes very useful to have an AI partner to talk this through with, for example about Azure Durable Functions. This conversation can be shallow and not get all the implementation details 100% correct. That&#8217;s okay, because the tech lead (and any dev 😅) also needs to exhibit&nbsp;<strong>critical thinking</strong>&nbsp;and evaluate the AI responses.&nbsp;<strong>This is not a skill we want to delegate</strong>&nbsp;to these models, at least in my opinion and in the&nbsp;<a href="https://www.oneusefulthing.org/p/against-brain-damage">author&#8217;s opinion</a>. There is a relevant&nbsp;<a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf">research paper</a>&nbsp;about this by Microsoft as well.</p>



<p>The goal can simply be to have a conversation with a co-worker to spark new ideas or possible solutions we haven&#8217;t thought of. Using AI for ideation is a great use case, not just for engineering but for product features too: UI/UX, important metrics to capture, etc. If it generates 20 ideas, there is a higher chance you spot the bad ones, filter them out, and clear your mind or steer it toward better ideas. Here is an example of getting ideas to fix a recurring exception:</p>


<div class="wp-block-image">
<figure class="aligncenter size-large is-resized"><img decoding="async" width="1024" height="810" src="https://blogit.create.pt/wp-content/uploads/2025/09/image-1024x810.png" alt="" class="wp-image-13535" style="width:761px;height:auto" srcset="https://blogit.create.pt/wp-content/uploads/2025/09/image-1024x810.png 1024w, https://blogit.create.pt/wp-content/uploads/2025/09/image-300x237.png 300w, https://blogit.create.pt/wp-content/uploads/2025/09/image-768x607.png 768w, https://blogit.create.pt/wp-content/uploads/2025/09/image-531x420.png 531w, https://blogit.create.pt/wp-content/uploads/2025/09/image-696x550.png 696w, https://blogit.create.pt/wp-content/uploads/2025/09/image-1068x844.png 1068w, https://blogit.create.pt/wp-content/uploads/2025/09/image.png 1185w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Example of using AI to get multiple options</figcaption></figure>
</div>





<p>It asks clarifying questions so that I can give it more useful context. Then I can review the response, iterate, ask for more ideas, etc. I almost always set these instructions for any LLM:</p>



<pre class="wp-block-code"><code>Ask clarifying questions before giving an answer. Keep explanations not too long. Try to be as insightful as possible, and remember to verify if a solution can be implemented when answering about Azure and architecture in general.
It's also very important for you to verify if there is official documentation that supports your claims and statements. Please find official documentation supporting your claims, before responding to a user. If there isn't documentation confirming your statement, don't include it in the response.</code></pre>



<p>That is also why it searches for docs. Way too often, when I follow up on a statement in the LLM&#8217;s response, it realizes it made an error or an unfounded assumption. When I ask it further about the sentence it just gave me, I just get &#8220;You&#8217;re right &#8211; I was wrong about that&#8221;&#8230; Don&#8217;t become over-reliant on these tools 😅.</p>



<h3 class="wp-block-heading">AI as a co-teacher</h3>



<p>With that said, the tech lead and senior devs are also responsible for upskilling their team by sharing knowledge and best practices, challenging juniors with more complex tasks, etc. And this part of the job isn&#8217;t that simple; it&#8217;s hard to be a force multiplier that improves everyone around you. So, what if the tech lead could use AI in this way, by creating&nbsp;<a href="https://code.visualstudio.com/docs/copilot/customization/prompt-files">reusable prompts</a>, documentation, and custom agents? How about the tech lead uses AI as a co-teacher, and then shares how to do it with the rest of the team? All of these can then help onboard juniors and help them understand our codebase and our domain. The&nbsp;<a href="https://www.anthropic.com/engineering/claude-code-best-practices">Claude Code best practices post</a>&nbsp;also references onboarding as a use case that helps Anthropic engineers:</p>



<p><em>&#8220;At Anthropic, using Claude Code in this way has become our core onboarding workflow, significantly improving ramp-up time and reducing load on other engineers.&#8221;</em></p>



<p>A lot of onboarding time is spent on understanding the business logic and then how it&#8217;s implemented. For juniors, it&#8217;s also about the design patterns or codebase structure. So I really think this is a net-positive for the whole team.</p>
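<p>To make this concrete, here is a minimal sketch of what such a reusable onboarding prompt could look like. This is a hypothetical <code>explain-feature.prompt.md</code> file following the VS Code prompt-file conventions linked above; the file name and wording are my own, not from the book or the docs:</p>



<pre class="wp-block-code"><code>---
description: 'Explain how a feature of our codebase is implemented'
---
Explain how the feature the user asks about is implemented in this codebase.
Start with the business rules it enforces, then walk through the main classes
and how they interact, pointing to the relevant files so the reader can
explore further. Keep the explanation junior-friendly, and end by asking
which part they want to dive deeper into.</code></pre>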



<h2 class="wp-block-heading">My augmentation list</h2>



<p>It might not be much, but these are essentially the tasks where I&#8217;m augmented by AI:</p>



<p><strong>Technical</strong>:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li><strong>Initial</strong>&nbsp;code review (e.g. nitpicks, typos), some stuff I should really just automate 😅</li>



<li>Generate summaries for the PR description</li>



<li>Architectural discussions, including trade-off and risk analysis
<ul style="max-width:960px" class="wp-block-list">
<li>Draft an ADR (Architecture decision record) based on my analysis and arguments</li>
</ul>
</li>



<li>Co-Teacher and Co-Worker
<ul style="max-width:960px" class="wp-block-list">
<li>&#8220;Deep Research&#8221; and discussion about possible solutions</li>



<li>Learn new tech with analogies or specific Azure features</li>



<li>Find new sources of information (e.g. blog posts, official docs, conference talks)</li>
</ul>
</li>



<li>Troubleshooting for specific infrastructure problems
<ul style="max-width:960px" class="wp-block-list">
<li>Generating KQL queries (e.g. rendering charts, analyzing traces &amp; exceptions &amp; dependencies)</li>
</ul>
</li>



<li>Refactoring and documentation suggestions</li>



<li>Generation of new unit tests given X scenarios</li>
</ul>
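

<p>As an example of the troubleshooting item above, this is the kind of KQL query I ask an AI to generate. It&#8217;s a sketch against the Application Insights <code>exceptions</code> table; your table and column names may differ:</p>



<pre class="wp-block-code"><code>// Exceptions per hour over the last day, grouped by type, rendered as a chart
exceptions
| where timestamp &gt; ago(1d)
| summarize count() by bin(timestamp, 1h), type
| render timechart</code></pre>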



<p><strong>Non-technical</strong>:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li>Summarizing book chapters/blog posts or videos (e.g. NotebookLM)</li>



<li>Role play in various scenarios (e.g. book discussions)</li>
</ul>



<p>Of course, we also need to talk about the tasks that fall outside the Jagged Frontier. Again, these can vary from person to person. From my usage and experiments so far, these are the tasks that currently fall outside the frontier:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li>Being responsible for technical support tickets, where a customer encountered an error or has a question about our product. This involves answering the ticket, asking clarifying questions when necessary, opening up tickets on a 3rd party that are related to this issue, and then resolving the issue.</li>



<li>Deep valuable code review. This includes good insights, suggestions, and knowledge sharing to improve the PR author&#8217;s skills. <a href="https://www.coderabbit.ai/">CodeRabbit</a> does often give valuable code reviews, way better than any other solution. Still not the same as human review 🙂</li>



<li>Development of a v0 (or draft) for new complex features</li>



<li>Fixing bugs that require business domain knowledge</li>
</ul>



<p>Delegating some of those tasks would be cool, at least 50% 😄, while our engineering team focuses on other tasks. But oh well, maybe that day will come.</p>



<h2 class="wp-block-heading">AI-assisted coding</h2>



<p>AI-assisted coding can be very helpful on some tasks, and lately my goal is to increase the number of tasks AI can assist me with. In our team, we&#8217;ve read&nbsp;<a href="https://www.anthropic.com/engineering/claude-code-best-practices">Claude Code Best practices</a>&nbsp;to learn what fits our use case best. Then we dove deeper into some topics that post references; for example,&nbsp;<a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/extended-thinking-tips">these docs</a>&nbsp;were very useful for learning about Claude&#8217;s extended thinking feature, complementing the usage of &#8220;think&#8221; &lt; &#8220;think hard&#8221; &lt; &#8220;think harder&#8221; &lt; &#8220;ultrathink&#8221;. We also found&nbsp;<a href="https://simonwillison.net/2025/Apr/19/claude-code-best-practices/">this post by Simon</a>&nbsp;about this feature interesting. In most tasks, using an iterative approach, just like normal software development, is indeed way better than trying to one-shot with the perfect prompt. Still, if a task takes too many iterations &#8211; some bugfixes were too complex because it&#8217;s hard to pinpoint the bug&#8217;s location &#8211; performance degrades and the experience becomes bad (infinite loading spinner of death 🤣).</p>



<p>Before we can use AI-assisted coding on more complex tasks, we need to improve the output quality. So we&#8217;ve invested a lot of time in fine-tuning custom instructions and meta-prompting. Let&#8217;s talk about these two.</p>



<h3 class="wp-block-heading" id="custom-instructions">Custom Instructions</h3>



<p>According to the Copilot docs, instructions should be short, self-contained statements. Most principles in&nbsp;<a href="https://learn.microsoft.com/en-us/training/modules/introduction-prompt-engineering-with-github-copilot/2-prompt-engineering-foundations-best-practices">prompt engineering</a>&nbsp;are about being short and specific, and about making sure the model pays special attention to our critical instructions. As everyone keeps saying, the context window is very important, so it&#8217;s really good if we can keep the instruction file to around 200 lines. The longer our instructions are, the greater the risk that the LLM won&#8217;t follow them, since it can pay more attention to other tokens or forget relevant instructions. With that said, keeping instructions short is also a challenge when we use the few-shot prompting technique and add more examples.</p>



<p>To build our custom instructions, we used C# and Blazor files from&nbsp;<a href="https://github.com/github/awesome-copilot/tree/main">the awesome-copilot repo</a>&nbsp;and other sources of inspiration like&nbsp;<a href="https://parahelp.com/blog/prompt-design">parahelp prompt design</a>&nbsp;to get a first version. We wanted to know what techniques other teams use. Then we made specific edits to follow our own guidelines and removed rules specific to explaining concepts, etc. We also added some&nbsp;<strong>capitalized words</strong>&nbsp;that are common in system prompts or commands, like IMPORTANT, NEVER, ALWAYS, MUST. The IMPORTANT instruction also sits at the end of the file, to try and&nbsp;<strong>refocus</strong>&nbsp;the model&#8217;s attention on coding standards:</p>



<pre class="wp-block-code"><code>IMPORTANT: Follow our coding standards when implementing features or fixing bugs. If you are unsure about a specific coding standard, ask for clarification.</code></pre>



<p>I&#8217;m not 100% sure how this capitalization works, or why it works&#8230; and I have not found docs/evidence/research on it. All I know is that capitalized words are tokenized differently than lowercase ones. It&#8217;s probably something the model pays more attention to, since in its training data these words tend to signal importance. I do wish Microsoft, OpenAI, and Anthropic covered capitalization in their prompt engineering docs/tutorials.</p>



<p>It&#8217;s at the end of our file since&nbsp;<a href="https://huggingface.co/papers/2307.03172">research suggests the beginning and end of a prompt</a>&nbsp;are what the LLM pays the most attention to and finds most relevant. Some middle parts are &#8220;meh&#8221; and can be forgotten.&nbsp;<a href="https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/prompt-engineering?tabs=chat#repeat-instructions-at-the-end">Microsoft docs</a>&nbsp;essentially say the same; it&#8217;s known as &#8220;<strong>recency bias</strong>&#8220;. In most prompts we see, this section sits at the end to refocus the LLM&#8217;s attention.</p>



<h3 class="wp-block-heading">Meta-prompting</h3>



<p>Our goal also isn&#8217;t to write the perfect custom instructions and prompt, since refining them later with an iterative/conversational approach works well. But we came across the concept of&nbsp;<a href="https://cookbook.openai.com/examples/enhance_your_prompts_with_meta_prompting">meta-prompting</a>, a term that is becoming more popular. Basically, we asked Claude how to improve our prompt, and it gave us some cool ideas to improve our instructions and reusable prompts.</p>
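

<p>A meta-prompt in this spirit can be as simple as the sketch below (my own wording, loosely inspired by the OpenAI cookbook example linked above):</p>



<pre class="wp-block-code"><code>Here is the custom instructions file we use for our coding agent:
&lt;instructions&gt;
...
&lt;/instructions&gt;
Review it as a prompt engineer: point out ambiguous or conflicting rules,
instructions that are too long or redundant, and anything likely to be
ignored. Then propose an improved version and explain each change.</code></pre>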



<p>But don&#8217;t forget to use LLMs with caution&#8230; I keep getting &#8220;You&#8217;re absolutely right&#8230;&#8221; and it&#8217;s annoying how sycophantic it is oftentimes 😅</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img decoding="async" width="696" height="398" src="https://blogit.create.pt/wp-content/uploads/2025/09/image-3.png" alt="" class="wp-image-13542" srcset="https://blogit.create.pt/wp-content/uploads/2025/09/image-3.png 696w, https://blogit.create.pt/wp-content/uploads/2025/09/image-3-300x172.png 300w" sizes="(max-width: 696px) 100vw, 696px" /></figure>
</div>


<p>The quality of the output is most likely affected by the complexity of the task I&#8217;m working on too. Prompting skills only go so far; from what I&#8217;ve researched and learned, there is a real learning curve to understanding LLMs. So we need to keep experimenting and learning about the layers between our prompt and the output we see.</p>



<h2 class="wp-block-heading">Resources</h2>



<p>This is not an exhaustive list by any means, just some resources I find very useful:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=EWvNQjAaOHw&amp;t=7238s">Andrej Karpathy &#8211; How I use LLMs</a></li>



<li><a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ">Andrej Karpathy: Software Is Changing (Again)</a>
<ul style="max-width:960px" class="wp-block-list">
<li>Related to this is&nbsp;<a href="https://natesnewsletter.substack.com/p/software-30-vs-ai-agentic-mesh-why">this post from Nate Jones</a></li>
</ul>
</li>



<li><a href="https://www.youtube.com/watch?v=tbDDYKRFjhk">Does AI Actually Boost Developer Productivity? (100k Devs Study) &#8211; Yegor Denisov-Blanch, Stanford</a></li>



<li><a href="https://www.anthropic.com/engineering/claude-code-best-practices">Claude Code: Best practices for agentic coding</a></li>



<li><a href="https://zed.dev/blog/why-llms-cant-build-software">Why LLMs Can&#8217;t Really Build Software</a></li>



<li><a href="https://www.youtube.com/watch?v=-1yH_BTKgXs">Is AI the Future of Software Development, or Just a new Abstraction? Insights from Kelsey Hightower</a></li>



<li><a href="https://cookbook.openai.com/examples/gpt-5/gpt-5_prompting_guide">GPT-5 prompting guide</a></li>
</ul>



<h2 class="wp-block-heading">Conclusion</h2>



<p>I&#8217;ve enjoyed learning and improving myself over the years. But with GenAI I now feel like I can learn a lot more and improve even further, since I&#8217;m choosing to use these models as&nbsp;<strong>augmentation tools</strong>. Hopefully, this article motivates you to pursue AI augmentation for yourself. It&#8217;s okay to be skeptical about all the hype you watch and hear around these tools. It&#8217;s a good defense against the sales pitches and fluff that CEOs and others in the industry talk about. Just don&#8217;t let your skepticism prevent you from learning, experimenting, building your own opinion, and finding ways of improving your work 🙂.</p>



<p>Still&#8230; I can&#8217;t deny my curiosity to know more about how these systems work underneath. How is fine-tuning done exactly? How does post-training work? Can these models emit telemetry (logs, traces, metrics) that we can observe? Why does capitalization (e.g. IMPORTANT, MUST) or setting a&nbsp;<a href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts">role/persona</a>&nbsp;improve prompts? Can we really not have access to a high-level tree with the weights the LLM uses to correlate tokens, and use it to justify why a given output was produced? Or why an instruction given as input was not followed? It&#8217;s okay to just have a basic understanding and know about the new abstractions we have with these LLMs. But knowing how that abstraction works leads to knowing how to transition to automation.</p>



<p>I will keep searching and learning more in order to answer these questions or find engineers in the industry who have answered them. Especially around&nbsp;<strong>interpretability research</strong>, which is amazing!!! I recommend reading this research, for example &#8211;&nbsp;<a href="https://www.anthropic.com/research/tracing-thoughts-language-model">Tracing the thoughts of a large language model</a>. Hope you enjoyed reading, feel free to share in the comments below how you use AI to augment yourself 🙂.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/">Becoming augmented by AI</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2025/09/10/becoming-augmented-by-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Learning: Observability</title>
		<link>https://blogit.create.pt/davidpereira/2025/06/20/learning-observability/</link>
					<comments>https://blogit.create.pt/davidpereira/2025/06/20/learning-observability/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Fri, 20 Jun 2025 13:08:49 +0000</pubDate>
				<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Misc]]></category>
		<category><![CDATA[observability]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13517</guid>

					<description><![CDATA[<p>Table of Contents Introduction In this small post, I&#8217;ll share some resources, notes I&#8217;ve taken while learning, and best practices for making our systems observable. I&#8217;ve always had a knowledge gap regarding observability, and recently I&#8217;ve truly enjoyed learning more about this area in our software industry. Quick note: In this post I&#8217;ll only share [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2025/06/20/learning-observability/">Learning: Observability</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Table of Contents</h2>



<ul style="max-width:1005px" class="wp-block-list">
<li>Introduction</li>



<li>Get Started</li>



<li>Logs
<ul style="max-width:960px" class="wp-block-list">
<li>Canonical logs</li>
</ul>
</li>



<li>Traces</li>



<li>Metrics</li>



<li>General Best Practices</li>



<li>Resources
<ul style="max-width:960px" class="wp-block-list">
<li>GitHub demo repo</li>
</ul>
</li>



<li>Conclusion</li>
</ul>



<h2 class="wp-block-heading">Introduction</h2>



<p>In this small post, I&#8217;ll share some resources, notes I&#8217;ve taken while learning, and best practices for making our systems observable. I&#8217;ve always had a knowledge gap regarding observability, and recently I&#8217;ve truly enjoyed learning more about this area in our software industry.</p>



<p><strong>Quick note</strong>: In this post I&#8217;ll only share about 3 telemetry&nbsp;<a href="https://opentelemetry.io/docs/concepts/signals/" target="_blank" rel="noreferrer noopener">signals</a>.&nbsp;<strong>Profile</strong>&nbsp;is another signal that I will research in the future.</p>






<h2 class="wp-block-heading">Get Started</h2>



<p>Follow these steps to get started with auto-instrumentation in your application using OpenTelemetry:&nbsp;<a href="https://opentelemetry.io/docs/languages/net/getting-started/#instrumentation" target="_blank" rel="noreferrer noopener">https://opentelemetry.io/docs/languages/net/getting-started/#instrumentation</a></p>



<p>For OpenTelemetry in a front-end app you can check these useful resources:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li><a href="https://grafana.com/oss/faro/" target="_blank" rel="noreferrer noopener">Grafana faro</a></li>



<li><a href="https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry#using-vercelotel" target="_blank" rel="noreferrer noopener">Next.js</a></li>



<li><a href="https://www.checklyhq.com/blog/in-depth-guide-to-monitoring-next-js-apps-with-opentelemetry/" target="_blank" rel="noreferrer noopener">Guide for OpenTelemetry in Next.js</a></li>



<li><a href="https://opentelemetry.io/docs/languages/js/getting-started/browser/" target="_blank" rel="noreferrer noopener">Browser OpenTelemetry getting started</a></li>
</ul>






<p>Client-side instrumentation in OpenTelemetry is part of&nbsp;<a href="https://opentelemetry.io/community/roadmap/#p2-client-instrumentation-rum" target="_blank" rel="noreferrer noopener">their roadmap</a>, which is great to see, since so far I&#8217;ve only seen vendor-specific solutions and products for front-end apps (e.g. New Relic, Datadog). Browser instrumentation in OpenTelemetry doesn&#8217;t seem super mature yet, but the OpenTelemetry team is putting a lot of effort into this area.</p>



<h2 class="wp-block-heading">Logs</h2>



<p>We all know about logs 😄. They are the data we all need in order to troubleshoot and know what is happening in our applications. We shouldn&#8217;t overdo it by creating tons and tons of logs, since that will probably create noise and make it harder to troubleshoot problems.</p>



<p>For logs, we can use&nbsp;<a href="https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/docs/logs/README.md#best-practices" target="_blank" rel="noreferrer noopener">these best practices</a>. From this list, these are an absolute must to follow:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li>Avoid string interpolation</li>



<li>Use structured logging</li>



<li>Log redaction for sensitive information</li>
</ul>
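

<p>To illustrate the first two items, here is a minimal C# sketch (assuming the <code>Microsoft.Extensions.Logging.Console</code> package; the logger category and field names are illustrative):</p>



<pre class="wp-block-code"><code>using Microsoft.Extensions.Logging;

using var factory = LoggerFactory.Create(builder =&gt; builder.AddJsonConsole());
ILogger logger = factory.CreateLogger("Checkout");

int orderId = 42;

// Avoid: interpolation bakes the value into the message text,
// so the order id is lost as a queryable field.
// logger.LogInformation($"Processed order {orderId}");

// Prefer: a message template with a named placeholder, captured
// as a structured attribute alongside the rendered message.
logger.LogInformation("Processed order {OrderId}", orderId);</code></pre>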



<p>In addition to the list above, we should also include the&nbsp;<code>TraceId</code>&nbsp;and&nbsp;<code>SpanId</code>&nbsp;in our log records, to correlate logs with traces. If you are using the Serilog console sink,&nbsp;<a href="https://github.com/serilog/serilog-sinks-console/blob/4c9a7b6946dfd2d7f07a792c40bb3d46af835ee9/src/Serilog.Sinks.Console/ConsoleLoggerConfigurationExtensions.cs#L32" target="_blank" rel="noreferrer noopener">the default message template</a>&nbsp;won&#8217;t include those fields, so if you want them, consider using the&nbsp;<a href="https://github.com/serilog/serilog/wiki/Formatting-Output#formatting-json" target="_blank" rel="noreferrer noopener">JsonFormatter</a>&nbsp;or&nbsp;<code>CompactJsonFormatter</code>. Here is an example Serilog configuration in&nbsp;<code>appsettings.json</code>&nbsp;(set up to remove unnecessary/noisy logs):</p>



<pre class="wp-block-code"><code>"Serilog": {
    "Using": &#091;
      "Serilog.Sinks.Console"
    ],
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft.AspNetCore": "Warning",
        "Microsoft.Extensions.Diagnostics.HealthChecks": "Warning"
      }
    },
    "WriteTo": &#091;
      {
        "Name": "Console",
        "Args": {
          "formatter": {
            "type": "Serilog.Formatting.Json.JsonFormatter, Serilog",
            "renderMessage": true
          }
        }
      }
    ],
    "Enrich": &#091;
      "FromLogContext",
      "WithMachineName",
      "WithThreadId",
      "WithProcessId",
      "WithProcessName",
      "WithExceptionDetails",
      "WithExceptionStackTraceHash",
      "WithEnvironmentName"
    ],
    "Properties": {
      "Application": "GrafanaDemoOtelApp"
    }
  }</code></pre>



<p>Below are some documentation links for logging in .NET. The&nbsp;<code>ILogger</code>&nbsp;extension methods are not always the best choice (e.g.&nbsp;<code>logger.LogInformation</code>), especially in high-performance scenarios or if your logs are in a hot path:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li><a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/high-performance-logging" target="_blank" rel="noreferrer noopener">High-performance logging in .NET</a></li>



<li><a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/logger-message-generator" target="_blank" rel="noreferrer noopener">Compile-time logging source generation</a></li>
</ul>



<h3 class="wp-block-heading">Canonical logs</h3>



<p>There is also a different way of logging, based on having more attributes in one single log line. I&#8217;ve seen this in Stripe where they call it <a href="https://stripe.com/blog/canonical-log-lines" target="_blank" rel="noreferrer noopener">canonical log lines</a>. Charity Majors also references this <strong>canonical logs</strong> term in her blog post about Observability 2.0 (that I reference in the Resources section).</p>



<p>This idea is very interesting, but seems to lack awareness. At least in .NET land, I didn&#8217;t find many references to this style of logging, or example code to follow when there are many&nbsp;<code>ILogger</code>&nbsp;instances involved.</p>
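

<p>For reference, a canonical log line is essentially one wide event emitted per request, carrying many attributes in a single record. Something like the following (the fields are illustrative, loosely modeled on Stripe&#8217;s post):</p>



<pre class="wp-block-code"><code>{
  "message": "canonical-log-line",
  "http_method": "POST",
  "http_path": "/v1/orders",
  "http_status": 201,
  "duration_ms": 183,
  "user_id": "usr_123",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "database_queries": 4,
  "rate_limit_allowed": true
}</code></pre>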






<h2 class="wp-block-heading">Traces</h2>



<p>For traces in .NET we have <a href="https://learn.microsoft.com/en-us/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs#best-practices-1">these best practices</a>. So far I&#8217;ve seen five common solutions for adding <a href="https://microsoft.github.io/code-with-engineering-playbook/observability/correlation-id/" target="_blank" rel="noreferrer noopener">correlation ids</a> in traces (not all are standards):</p>



<ul style="max-width:1005px" class="wp-block-list">
<li><a href="https://www.w3.org/TR/trace-context/" target="_blank" rel="noreferrer noopener">W3C trace context</a>&nbsp;&#8211; current standard in the HTTP protocol for tracing</li>



<li><a href="https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Common_non-standard_request_fields" target="_blank" rel="noreferrer noopener">X-Correlation-Id</a>&nbsp;&#8211; a non-standard HTTP header for RESTful APIs (also known as&nbsp;<a href="https://http.dev/x-request-id" target="_blank" rel="noreferrer noopener">X-Request-Id</a>). I thought this was a standard since it&#8217;s widely used, but I didn&#8217;t find a RFC from IETF or any other organization.</li>



<li><a href="https://github.com/dotnet/runtime/blob/main/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md" target="_blank" rel="noreferrer noopener">Request-Id</a>&nbsp;&#8211; this is a known header in the .NET ecosystem</li>



<li><a href="https://github.com/openzipkin/b3-propagation" target="_blank" rel="noreferrer noopener">B3 Zipkin propagation</a>&nbsp;&#8211; Zipkin format standard</li>



<li><a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader" target="_blank" rel="noreferrer noopener">AWS X-Ray Trace Id</a>&nbsp;&#8211; proprietary solution for AWS that adds headers for tracing</li>
</ul>
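

<p>For context, the W3C option carries the trace and parent span ids in a single <code>traceparent</code> HTTP header. This is the example value from the spec itself:</p>



<pre class="wp-block-code"><code>traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
// format: version-&lt;trace-id&gt;-&lt;parent-id&gt;-&lt;trace-flags&gt; (01 = sampled)</code></pre>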



<p>Not every company/project uses W3C trace context; you have some options above to pick from. I prefer the standard W3C trace context 😄 (maybe the industry will widely adopt it in the future) and using OpenTelemetry to manage these headers (HTTP, AMQP, etc.) and the correlation with logs automatically. The code you don&#8217;t write can&#8217;t have bugs 😆.</p>



<p>With that said, in some situations you might have integrations with 3rd-party software and need to use their custom headers, or project limitations that force a particular format. At the end of the day, what&#8217;s important is that you have distributed tracing working E2E.</p>



<p>There is also a relevant spec for distributed tracing called&nbsp;<a href="https://www.w3.org/TR/baggage/" target="_blank" rel="noreferrer noopener">Baggage</a>&nbsp;which OpenTelemetry implements and we can use in our apps. The most important part here is trace propagation to get the full trace from the publisher to the consumer.</p>






<h2 class="wp-block-heading">Metrics</h2>



<p>For metrics, it&#8217;s important to follow naming conventions for custom metrics. Especially if your organization has a platform team, setting conventions helps everyone. I do know some otel semantic conventions aren&#8217;t stable yet, which also leads to some NuGet packages being pre-release.</p>



<p>But anyhow, set conventions for your team or read and follow&nbsp;<a href="https://opentelemetry.io/docs/specs/semconv/general/metrics/" target="_blank" rel="noreferrer noopener">OpenTelemetry semantic conventions</a>.<br>An important resource I found is the comments on&nbsp;<a href="https://prometheus.io/docs/practices/instrumentation/#do-not-overuse-labels" target="_blank" rel="noreferrer noopener">Prometheus best practices</a>&nbsp;related to high cardinality metrics.</p>
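

<p>In .NET, custom metrics that follow these conventions can be defined with the built-in <code>System.Diagnostics.Metrics</code> API. A small sketch (the meter and metric names are my own, illustrative choices):</p>



<pre class="wp-block-code"><code>using System.Collections.Generic;
using System.Diagnostics.Metrics;

// The meter name identifies the component; register it with the
// OpenTelemetry SDK (e.g. AddMeter) so its instruments get exported.
var meter = new Meter("MyCompany.Checkout");

// Lowercase, dot-separated name with an annotated unit, in the
// style of the OpenTelemetry semantic conventions.
var ordersProcessed = meter.CreateCounter&lt;long&gt;(
    "checkout.orders.processed",
    unit: "{order}",
    description: "Number of orders processed");

// Keep attribute cardinality low (e.g. payment method, never user id).
ordersProcessed.Add(1, new KeyValuePair&lt;string, object?&gt;("payment.method", "card"));</code></pre>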



<p>When I started trying out custom metrics instrumentation, I discovered that OpenTelemetry (the SDK + OTLP) is not always used. We have the Prometheus SDK, which is mature and widely used. Then for Java there are other solutions like Micrometer that integrate very well with Spring. Regarding the Java ecosystem, I read&nbsp;<a href="https://opentelemetry.io/blog/2024/java-metric-systems-compared/#benchmark-opentelemetry-java-vs-micrometer-vs-prometheus-java" target="_blank" rel="noreferrer noopener">these otel Java benchmarks</a>&nbsp;and&nbsp;<a href="https://spring.io/blog/2024/10/28/lets-use-opentelemetry-with-spring" target="_blank" rel="noreferrer noopener">this Spring post</a>&nbsp;just because I was interested in knowing what the industry is adopting and why.</p>






<h2 class="wp-block-heading">General Best Practices</h2>



<p>There is a ton to be learned with SRE principles and practices. But one in particular was very useful for me and my team:&nbsp;<strong>always categorize our custom metrics according to the 4 Golden Signals</strong>. Any metric we can&#8217;t categorize is probably not useful for us.</p>



<figure class="wp-block-image is-resized td-caption-align-center"><img decoding="async" width="800" height="800" src="https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring.webp" alt="" class="wp-image-13523" style="width:808px;height:auto" srcset="https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring.webp 800w, https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring-300x300.webp 300w, https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring-150x150.webp 150w, https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring-768x768.webp 768w, https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring-420x420.webp 420w, https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring-696x696.webp 696w, https://blogit.create.pt/wp-content/uploads/2025/05/deniseyu_art_monitoring-70x70.webp 70w" sizes="(max-width: 800px) 100vw, 800px" /><figcaption class="wp-element-caption"><em>Image Credit to &#8211; Denise Yu</em></figcaption></figure>



<p><a href="https://deniseyu.io/art/" target="_blank" rel="noreferrer noopener">Source of Denise Yu&#8217;s art</a>.</p>



<p><a href="https://sre.google/sre-book/monitoring-distributed-systems/" target="_blank" rel="noreferrer noopener">Google&#8217;s SRE book</a>&nbsp;is an amazing resource for learning more about the 4 Golden Signals and creating SLO-based alerts. All our alerts should be actionable (or the support team will not be happy), so it helps if they are based on SLOs defined as a team.</p>
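<p>As an illustration of an SLO-based, actionable alert, here is a sketch of a Prometheus alerting rule (not a drop-in config: the metric names and burn-rate threshold are hypothetical) that pages on error-budget burn rather than on raw error counts:</p>

<pre class="wp-block-code"><code># Page when a 99.9% availability SLO is burning its error budget
# at 14.4x the sustainable rate (illustrative values).
groups:
  - name: slo-alerts
    rules:
      - alert: HighErrorBudgetBurn
        expr: |
          sum(rate(http_requests_total{code=~"5.."}[1h]))
            / sum(rate(http_requests_total[1h])) &gt; (14.4 * 0.001)
        for: 5m
        labels:
          severity: page</code></pre>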



<p>They also have&nbsp;<a href="https://sre.google/sre-book/service-best-practices/" target="_blank" rel="noreferrer noopener">some best practices for production services</a>.</p>






<h2 class="wp-block-heading">Resources</h2>



<ul style="max-width:1005px" class="wp-block-list">
<li>Glossary of many observability terms in case you’re not familiar with them:&nbsp;<a href="https://github.com/prathamesh-sonpatki/o11y-wiki" target="_blank" rel="noreferrer noopener">https://github.com/prathamesh-sonpatki/o11y-wiki</a></li>



<li><a href="https://github.com/magsther/awesome-opentelemetry" target="_blank" rel="noreferrer noopener">Awesome Observability GitHub repo</a></li>



<li>If dashboards make you happy&nbsp;<a href="https://play.grafana.org/d/feg4yc4qw3wn4b/third-annual-observability-survey?pg=survey-2025&amp;plcmt=toc-cta-2&amp;orgId=1&amp;from=2025-03-13T02:49:20.476Z&amp;to=2025-03-14T02:49:20.476Z&amp;timezone=utc&amp;var-region=$__all&amp;var-role=$__all&amp;var-size=$__all&amp;var-industry=$__all&amp;var-filters=%60Region%60%20in%20%28%27Europe%27,%27Asia%27,%27North%20America%27,%27Africa%27,%27South%20America%27,%27Oceania%27,%27Middle%20East%27%29%20AND%20%60Role%60%20IN%20%28%27Platform%20team%27,%27SRE%27,%27CTO%27,%27Engineering%20manager%27,%27Developer%27,%27Director%20of%20engineering%27,%27Other%27%29%20AND%20%60Size_of_organization%60%20IN%20%28%2710%20or%20fewer%20employees%27,%2711%20-%20100%20employees%27,%27101%20-%20500%20employees%27,%27501%20-%201,000%20employees%27,%271,001%20-%202,500%20employees%27,%272,501%20-%205,000%20employees%27,%275,001%2B%20employees%27%29%20AND%20%60Industry%60%20IN%20%28%27Telecommunications%27,%27Healthcare%27,%27IoT%27,%27Financial%20services%27,%27Education%27,%27Government%27,%27Applied%20Sciences%27,%27Software%20%26%20Technology%27,%27Media%20%26%20Entertainment%27,%27Travel%20%26%20Transportation%27,%27Retail%2FE-commerce%27,%27Energy%20%26%20Utilities%27,%27Automotive%20%26%20Manufacturing%27,%27Other%27%29" target="_blank" rel="noreferrer noopener">check the Grafana observability report dashboard</a></li>



<li><a href="https://aws-observability.github.io/observability-best-practices/guides/" target="_blank" rel="noreferrer noopener">AWS observability best practices guide</a></li>



<li><a href="https://grafana.com/blog/2018/08/02/the-red-method-how-to-instrument-your-services/" target="_blank" rel="noreferrer noopener">About RED and USE method</a></li>



<li><a href="https://learn.microsoft.com/en-us/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs#best-practices-1" target="_blank" rel="noreferrer noopener">Traces Instrumentation best practices in .NET</a></li>



<li><a href="https://signoz.io/guides/what-are-the-limitations-of-prometheus-labels/#what-are-the-limitations-of-prometheus-labels" target="_blank" rel="noreferrer noopener">What are the Limitations of Prometheus Labels?</a></li>



<li><a href="https://www.cncf.io/training/certification/otca/" target="_blank" rel="noreferrer noopener">CNCF OpenTelemetry certification</a></li>



<li><a href="https://github.com/cncf/tag-observability/blob/main/whitepaper.md" target="_blank" rel="noreferrer noopener">TAG Observability whitepaper</a>&nbsp;&#8211; this is an amazing resource with tons of information! I also recommend checking out the other resources they have in the tag-observability repo and community</li>



<li>Resources specifically about&nbsp;<strong>Observability 2.0</strong>:
<ul style="max-width:970px" class="wp-block-list">
<li><a href="https://charity.wtf/tag/observability-2-0/" target="_blank" rel="noreferrer noopener">Observability 2.0 by Charity Majors</a></li>



<li><a href="https://www.aparker.io/post/3leq2g72z7r2t" target="_blank" rel="noreferrer noopener">Re-Redefining Observability</a></li>



<li><a href="https://www.youtube.com/watch?v=ag2ykPO805M" target="_blank" rel="noreferrer noopener">Is It Time To Version Observability? (Signs Point To Yes) &#8211; Charity Majors</a></li>
</ul>
</li>



<li>Talks
<ul style="max-width:970px" class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=hhZrOHKIxLw" target="_blank" rel="noreferrer noopener">How Prometheus Revolutionized Monitoring at SoundCloud &#8211; Björn Rabenstein</a></li>



<li><a href="https://www.youtube.com/watch?v=X99X-VDzxnw" target="_blank" rel="noreferrer noopener">How to Include Latency in SLO-based Alerting &#8211; Björn Rabenstein, Grafana Labs</a></li>



<li><a href="https://www.youtube.com/watch?v=pLPMAAOSxSE" target="_blank" rel="noreferrer noopener">Myths and Historical Accidents: OpenTelemetry and the Future of Observability Part 1</a></li>



<li><a href="https://youtu.be/3tBj3ZCPGJY?t=687" target="_blank" rel="noreferrer noopener">Modern Platform Engineering: 9 Secrets of Generative Teams &#8211; Liz Fong-Jones</a></li>



<li><a href="https://www.youtube.com/watch?v=gviWKCXwyvY" target="_blank" rel="noreferrer noopener">Context Propagation makes OpenTelemetry awesome</a></li>
</ul>
</li>
</ul>



<h3 class="wp-block-heading">GitHub demo repo</h3>



<p>I&#8217;ve been developing a demo app (with fewer features than the&nbsp;<a href="https://github.com/open-telemetry/opentelemetry-demo" target="_blank" rel="noreferrer noopener">otel demo</a>) to demonstrate how to build an app with OpenTelemetry, Grafana, and Prometheus. It&#8217;s primarily a small app I can showcase in my talks.</p>



<p>If you&#8217;re interested, take a look: <a href="https://github.com/BOLT04/grafana-observability-demo">https://github.com/BOLT04/grafana-observability-demo</a></p>






<h2 class="wp-block-heading">Conclusion</h2>



<img decoding="async" src="https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExeGh0em9taDR0N2hodXIycnh4a3RqbXk2cWRoeDFjODU2Mmd0MDJqdSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/tkApIfibjeWt1ufWwj/giphy.gif" alt="happy gif" loading="lazy">



<p>Hopefully, some of these resources I&#8217;ve shared are useful to you 😄. I still have a ton to learn and explore, but I&#8217;m happy with the knowledge I&#8217;ve acquired so far.</p>



<p>There are some specific standards and projects that I&#8217;ll dive into and explore further, like eBPF and OpenMetrics. OpenMetrics is something I&#8217;d like to spend some quality time reading about, even though I know&nbsp;<a href="https://www.cncf.io/blog/2024/09/18/openmetrics-is-archived-merged-into-prometheus/" target="_blank" rel="noreferrer noopener">it&#8217;s archived</a>&nbsp;and&nbsp;<a href="https://www.reddit.com/r/devops/comments/1f5ttdx/openmetrics_is_archived_merged_into_prometheus/?rdt=47070" target="_blank" rel="noreferrer noopener">reddit says the same</a>. I just want to read and watch some talks about it to feed my curiosity 😃.</p>



<p>Last but not least, I want to follow the work that some industry leaders are doing, like&nbsp;<a href="https://charity.wtf/" target="_blank" rel="noreferrer noopener">Charity Majors</a>, specifically about Observability 2.0 😄. I discovered this term in the&nbsp;<a href="https://www.thoughtworks.com/radar/techniques/summary/observability-2-0" target="_blank" rel="noreferrer noopener">Thoughtworks tech radar</a>, and the part &#8220;high-cardinality event data in a single data store&#8221; caught my interest.<br>I&#8217;m still learning, researching, and listening to the opinions of industry leaders about this term so I can develop my own opinions. Maybe I&#8217;ll write a blog post about it in the future 😁.</p>



<p>If you&#8217;re interested, check out my other blog posts:</p>



<ul style="max-width:1005px" class="wp-block-list">
<li><a href="https://blogit.create.pt/davidpereira/2024/02/05/dreamforce-2023-highlights/">Dreamforce 2023 Highlights</a></li>



<li><a href="https://blogit.create.pt/davidpereira/2021/09/16/getting-started-with-cloudevents-and-asyncapi/">Getting Started with CloudEvents and AsyncAPI</a></li>
</ul>



<p>The post <a href="https://blogit.create.pt/davidpereira/2025/06/20/learning-observability/">Learning: Observability</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2025/06/20/learning-observability/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Configuring Azure Application Insights on a .NET 6 API</title>
		<link>https://blogit.create.pt/andrepires/2024/05/27/configuring-azure-application-insights-on-a-net-6-api/</link>
					<comments>https://blogit.create.pt/andrepires/2024/05/27/configuring-azure-application-insights-on-a-net-6-api/#respond</comments>
		
		<dc:creator><![CDATA[André Pires]]></dc:creator>
		<pubDate>Mon, 27 May 2024 08:21:04 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[.NET]]></category>
		<category><![CDATA[azure]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13492</guid>

					<description><![CDATA[<p>In this blog post we are going to show how you can configure Azure Application Insights to send your logs, exceptions and performance metrics from a .NET 6 API. Installation: Start by installing the Microsoft.ApplicationInsights.AspNetCore NuGet package. Now in your startup class you will need to configure some options for the telemetry collection. You can use the [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/andrepires/2024/05/27/configuring-azure-application-insights-on-a-net-6-api/">Configuring Azure Application Insights on a .NET 6 API</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[



<p>In this blog post we are going to show how you can configure Azure Application Insights to send your logs, exceptions and performance metrics from a .NET 6 API.<br><br><strong>Installation</strong><br>Start by installing the <em>Microsoft.ApplicationInsights.AspNetCore</em> NuGet package.<br><br>Now, in your startup class, you will need to configure some options for the telemetry collection.<br>You can use the <em>AddApplicationInsightsTelemetry </em>extension method on the <em>IServiceCollection </em>and pass your options like so:<br></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: csharp; title: ; notranslate">
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public virtual void ConfigureServices(IServiceCollection services)
    {
        services.AddApplicationInsightsTelemetry(GetApplicationInsightsServiceOptions());
    }

    private ApplicationInsightsServiceOptions GetApplicationInsightsServiceOptions()
    {
        return new ApplicationInsightsServiceOptions
        {
            AddAutoCollectedMetricExtractor = false,
            EnableEventCounterCollectionModule = false,
            EnableDiagnosticsTelemetryModule = false,
            EnablePerformanceCounterCollectionModule = true,
            EnableDependencyTrackingTelemetryModule = true,
            EnableRequestTrackingTelemetryModule = false,
            ConnectionString = Configuration&#x5B;ConfigurationConstants.ApplicationInsightsConnectionString],
        };
    }
}
</pre></div>


<p>This is where we define the types of telemetry we want to collect.<br>The documentation for the options is here: <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#use-applicationinsightsserviceoptions">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#use-applicationinsightsserviceoptions</a><br>We are also reading the connection string of the Application Insights resource from our appsettings.</p>



<p><strong>Telemetry modules</strong><br>Now, if we want to go further, we can even specify which metrics to collect for each module.<br>For instance, let&#8217;s say we want to indicate which performance counters to collect. You can do the following:<br></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: csharp; title: ; notranslate">
        services.ConfigureTelemetryModule&lt;PerformanceCollectorModule&gt;((module, applicationInsightsServiceOptions) =&gt;
        {
            module.DefaultCounters.Add(new PerformanceCounterCollectionRequest(@&quot;\Process(??APP_WIN32_PROC??)\Private Bytes&quot;, @&quot;\Process(??APP_WIN32_PROC??)\Private Bytes&quot;));
            module.DefaultCounters.Add(new PerformanceCounterCollectionRequest(@&quot;\Process(??APP_WIN32_PROC??)\% Processor Time&quot;, @&quot;\Process(??APP_WIN32_PROC??)\% Processor Time&quot;));
        });
</pre></div>


<p>You can view the documentation on configuring telemetry modules here: <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#configure-or-remove-default-telemetrymodules">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#configure-or-remove-default-telemetrymodules</a><br><br><strong>Extensibility</strong><br>To add or remove properties from collected telemetry (before it gets sent to Azure), you can create classes that implement the <em>ITelemetryInitializer </em>interface from the Azure Application Insights SDK (<a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#add-telemetryinitializers">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#add-telemetryinitializers</a>).<br><br>A classic example of a telemetry initializer is a correlation ID initializer. You would get a correlation ID from your HTTP request and add it to all telemetry, so that you have a nice trace of everything that went through your system:<br></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: csharp; title: ; notranslate">
public class CorrelationIdInitializer : ITelemetryInitializer
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private const string CorrelationIdProperty = &quot;CorrelationId&quot;;

    public CorrelationIdInitializer(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public void Initialize(ITelemetry telemetry)
    {
        ISupportProperties telemetryProperties = (ISupportProperties)telemetry;

        // This is just an extension method where you would extract the correlation id from the request headers according to your header name.
        string correlationId = _httpContextAccessor.HttpContext.GetCorrelationId();

        if (!string.IsNullOrWhiteSpace(correlationId) &amp;&amp; !telemetryProperties.Properties.ContainsKey(CorrelationIdProperty))
        {
            telemetryProperties.Properties.Add(CorrelationIdProperty, correlationId);
        }
    }
}
</pre></div>


<p>To register the telemetry initializer, simply register the class and its respective interface (always ITelemetryInitializer) on your service collection: <em>services.AddSingleton&lt;ITelemetryInitializer, CorrelationIdInitializer&gt;();</em></p>



<p><strong>Filtering</strong><br>Additionally, you have the ability to filter out telemetry before it is collected. This is a great feature that lets you focus on storing only what you really care about, and it will save you a LOT of costs.<br>Do keep in mind that filtering out telemetry means you won&#8217;t be able to query it, which can make tracing a bit more difficult.<br>This is all handled by telemetry processors (<a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#add-telemetry-processors">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#add-telemetry-processors</a>).<br><br>Imagine you have thousands of fast dependencies (see the list of automatically tracked dependencies: <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies#automatically-tracked-dependencies">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies#automatically-tracked-dependencies</a>) being stored by your system, but you conclude that they only make it harder to monitor your system and you don&#8217;t really need them.<br>Filtering them out is the way to go, or you could consider sampling instead: <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#sampling">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#sampling</a>.<br>Let&#8217;s create a processor that filters out successful dependencies with a duration under 500 milliseconds:<br></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: csharp; title: ; notranslate">
public class FastDependencyProcessor : ITelemetryProcessor
{
    private ITelemetryProcessor Next { get; set; }

    public FastDependencyProcessor(ITelemetryProcessor next)
    {
        Next = next;
    }

    public void Process(ITelemetry item)
    {
        if (!ShouldFilterTelemetry(item))
        {
            Next.Process(item);
        }
    }

    public bool ShouldFilterTelemetry(ITelemetry item)
    {
        bool shouldFilterTelemetry = false;

        DependencyTelemetry dependency = item as DependencyTelemetry;

        if (dependency != null
            &amp;&amp; dependency.Duration.TotalMilliseconds &lt; 500
            &amp;&amp; dependency.Success.HasValue &amp;&amp; dependency.Success.Value)
        {
            shouldFilterTelemetry = true;
        }

        return shouldFilterTelemetry;
    }
}
</pre></div>


<p>After creating the class, you have to register the telemetry processor on your service collection like so:<br><em>services.AddApplicationInsightsTelemetryProcessor&lt;FastDependencyProcessor&gt;();</em><br><br>Keep in mind that your telemetry processors will be executed in the order in which you registered them.<br><br><strong>Final notes</strong><br><br>&#8211; Azure Application Insights is a great tool that gives you visibility into how your application is doing.<br>You can use the SDK to collect most telemetry automatically, or you can instrument specific scenarios manually by making use of the <em>TelemetryClient</em> instance (<a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#how-can-i-track-telemetry-thats-not-automatically-collected">https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core?tabs=netcorenew#how-can-i-track-telemetry-thats-not-automatically-collected</a>).<br>&#8211; It is important to monitor your costs, as it is easy to end up storing everything rather than only what you need. So start by defining what you need to collect and which metrics are relevant for you.<br>Then you can filter out the irrelevant data, and from there you can even create alerts from the collected data using Azure Monitor.<br>If you are trying out Application Insights in a test environment, you can even set a daily cap of X GB to control your spending. Simply navigate to the Azure Application Insights resource through the Azure Portal and click &#8220;Usage and estimated Costs&#8221; on the sidebar:<br></p>



<figure class="wp-block-image size-full is-resized"><img decoding="async" width="248" height="229" src="https://blogit.create.pt/wp-content/uploads/2024/05/image.png" alt="" class="wp-image-13493" style="width:238px;height:auto" /></figure>



<figure class="wp-block-image size-full"><img decoding="async" width="638" height="40" src="https://blogit.create.pt/wp-content/uploads/2024/05/image-1.png" alt="" class="wp-image-13494" srcset="https://blogit.create.pt/wp-content/uploads/2024/05/image-1.png 638w, https://blogit.create.pt/wp-content/uploads/2024/05/image-1-300x19.png 300w" sizes="(max-width: 638px) 100vw, 638px" /></figure>



<p><br>&#8211; From what I&#8217;ve seen so far, the collection of dependencies/exceptions can represent the majority of costs, as you may have a lot of dependencies flowing through your system and because exceptions are a type of telemetry that occupies a lot of space. Filtering out irrelevant dependencies will definitely help.<br>As for exceptions, you may consider using the Result pattern instead of using exceptions for the normal control flow of your code. This also has the advantage of decreasing the performance impact of exceptions on your application, since you reduce the number of exceptions thrown.</p>
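<p>As a sketch of the manual instrumentation mentioned in the final notes (the class, method, and event names here are hypothetical): <em>AddApplicationInsightsTelemetry</em> registers a <em>TelemetryClient</em> in the service collection, so you can inject it and track custom telemetry directly:<br></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: csharp; title: ; notranslate">
public class OrdersService
{
    private readonly TelemetryClient _telemetryClient;

    public OrdersService(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void SubmitOrder(string orderId)
    {
        // Custom event with a property you can later query on in Azure.
        _telemetryClient.TrackEvent(&quot;OrderSubmitted&quot;,
            new Dictionary&lt;string, string&gt; { &#x5B;&quot;OrderId&quot;] = orderId });
    }
}
</pre></div>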
<p>The post <a href="https://blogit.create.pt/andrepires/2024/05/27/configuring-azure-application-insights-on-a-net-6-api/">Configuring Azure Application Insights on a .NET 6 API</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/andrepires/2024/05/27/configuring-azure-application-insights-on-a-net-6-api/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deconstruct types in C#</title>
		<link>https://blogit.create.pt/andrepires/2024/02/21/deconstruct-types-in-c/</link>
					<comments>https://blogit.create.pt/andrepires/2024/02/21/deconstruct-types-in-c/#respond</comments>
		
		<dc:creator><![CDATA[André Pires]]></dc:creator>
		<pubDate>Wed, 21 Feb 2024 14:57:10 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13421</guid>

					<description><![CDATA[<p>In C# you have some built-in support for deconstructing system types and for non-supported scenarios, you can always extend your types to support that. Let&#8217;s see how to deconstruct types in C#. Tuples have support for deconstruction which lets you unpackage all their items in a single operation.Here is an example with and without deconstruction: [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/andrepires/2024/02/21/deconstruct-types-in-c/">Deconstruct types in C#</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In C# you have some built-in support for deconstructing system types, and for non-supported scenarios you can always extend your types to support it. Let&#8217;s see how to deconstruct types in C#.</p>



<p>Tuples have support for deconstruction which lets you unpackage all their items in a single operation.<br>Here is an example with and without deconstruction:</p>






<pre class="wp-block-code"><code>public static void Main()
{
    var result = QueryCityData("New York City");

    // Without deconstruction:
    var city = result.Item1;
    var pop = result.Item2;
    var size = result.Item3;

    // With deconstruction (new names, since city/pop/size
    // are already declared in this scope):
    (string cityName, int population, double area) = QueryCityData("New York City");
}

private static (string city, int population, double area) QueryCityData(string name)
{
    return (name, 8175133, 468.48);
}</code></pre>



<p>You may be interested in the values of only some elements.<br>You can take advantage of C#&#8217;s support for discards, which are variables whose values you ignore by using the underscore character.<br>Using the previous example:</p>



<pre class="wp-block-code"><code>public static void Main()
{
    // city and population were discarded.
    (_, _, double area) = QueryCityData("Portugal");
}</code></pre>






<h2 class="wp-block-heading">Other system types and deconstruction</h2>



<p>Here are some system types that implement deconstruction, such as dictionary entries:</p>






<pre class="wp-block-code"><code>Dictionary&lt;string,int&gt; customerToTaxMapping = new Dictionary&lt;string, int&gt;();

foreach ((string key, int taxId) in customerToTaxMapping)
{
     // Do something with key and taxId
}</code></pre>



<p>You can deconstruct <strong>KeyValuePair </strong>instances (support was introduced in .NET Core 2.0, so it depends on your target framework):</p>






<pre class="wp-block-code"><code>(string id, string taxId) = new KeyValuePair&lt;string, string&gt;("id", "taxId");</code></pre>



<p>If you are on C# 12 (.NET 8), then you can make use of the Deconstruct methods for <strong>DateTime</strong>, <strong>DateOnly </strong>and <strong>DateTimeOffset</strong>:</p>



<pre class="wp-block-code"><code>(DateOnly date, TimeOnly time) = new DateTime(2023, 1, 2, 4, 5, 59, 999);

(int year, int month, int day) = new DateTime(2023, 1, 2);

(int year, int month, int day) = new DateOnly(2023, 5, 1);

// Instantiate date and time using years, months, days,
// hours, minutes, seconds and a time span.
(DateOnly date, TimeOnly time, TimeSpan offset) = new DateTimeOffset(2008, 5, 1, 8, 6, 32, new TimeSpan(1, 0, 0));</code></pre>






<h2 class="wp-block-heading">Deconstruct user-defined types:</h2>



<p>You can add public methods named <strong>Deconstruct </strong>whose <strong>out </strong>parameters define the values you get when applying deconstruction.</p>



<pre class="wp-block-code"><code>public class Person
{
    public string Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }

    public void Deconstruct(out string id, out string firstName, out string lastName)
    {
        id = Id;
        firstName = FirstName;
        lastName = LastName;
    }
}</code></pre>
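<p>With those methods in place, deconstructing a <strong>Person </strong>looks like this; the compiler picks the overload that matches the number of target variables:</p>

<pre class="wp-block-code"><code>var person = new Person { Id = "42", FirstName = "Grace", LastName = "Hopper" };

// Uses the two-parameter Deconstruct overload.
(string firstName, string lastName) = person;

// Uses the three-parameter overload.
(string id, string first, string last) = person;</code></pre>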






<p>If you don&#8217;t have control over a type that you want to provide deconstruction support for, you can create a static class with a <strong>Deconstruct </strong>extension method, with the same kind of signature shown previously, for that type.</p>
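<p>For instance, here is a sketch that adds deconstruction support to <strong>System.Version</strong>, a type we don&#8217;t control, via an extension method:</p>

<pre class="wp-block-code"><code>public static class VersionExtensions
{
    public static void Deconstruct(this Version version, out int major, out int minor)
    {
        major = version.Major;
        minor = version.Minor;
    }
}

// Now this works:
(int major, int minor) = new Version(8, 0);</code></pre>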



<p>Additionally, when you declare a <strong>record </strong>type by using two or more positional parameters, the compiler creates a <strong>Deconstruct </strong>method within the record declaration.</p>
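<p>For example, a positional record gets a compiler-generated <strong>Deconstruct </strong>for free:</p>

<pre class="wp-block-code"><code>public record Point(int X, int Y);

// Uses the compiler-generated Deconstruct(out int X, out int Y).
var (x, y) = new Point(3, 4);</code></pre>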



<p><strong>Note</strong>: You can&#8217;t deconstruct dynamic objects.</p>



<p><strong>References</strong>:<br><a href="https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/functional/deconstruct" target="_blank" rel="noreferrer noopener">https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/functional/deconstruct</a><br><a href="https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions#127-deconstruction" target="_blank" rel="noreferrer noopener">https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/expressions#127-deconstruction</a></p>
<p>The post <a href="https://blogit.create.pt/andrepires/2024/02/21/deconstruct-types-in-c/">Deconstruct types in C#</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/andrepires/2024/02/21/deconstruct-types-in-c/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Dreamforce 2023 Highlights</title>
		<link>https://blogit.create.pt/davidpereira/2024/02/05/dreamforce-2023-highlights/</link>
					<comments>https://blogit.create.pt/davidpereira/2024/02/05/dreamforce-2023-highlights/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Mon, 05 Feb 2024 14:01:48 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[salesforce]]></category>
		<category><![CDATA[sfcc]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=13007</guid>

					<description><![CDATA[<p>In this post I&#8217;ll go over my highlights of a huge event that took place some months ago &#8211; Salesforce Dreamforce 2023. Many announcements are interesting, like Marketing Cloud and Commerce Cloud being integrated into the Einstein 1 platform. I believe for Commerce Cloud this means only the B2B and D2C products… at least for [&#8230;]</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2024/02/05/dreamforce-2023-highlights/">Dreamforce 2023 Highlights</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this post I&#8217;ll go over my highlights of a huge event that took place some months ago &#8211; Salesforce Dreamforce 2023. Many announcements are interesting, like Marketing Cloud and Commerce Cloud being integrated into the Einstein 1 platform. I believe for Commerce Cloud this means only the B2B and D2C products… at least for now 🙂. I&#8217;ll focus on the B2C side of Commerce Cloud, although most features were announced for B2B and D2C.</p>



<p>You can also watch <a href="https://www.youtube.com/watch?v=4j-HyuHDQQ4">Salesforce Keynote highlights of the event</a>.</p>



<h2 class="wp-block-heading">Einstein 1 Platform</h2>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img decoding="async" width="708" height="362" src="https://blogit.create.pt/wp-content/uploads/2024/02/einsteinplatform.webp" alt="einstein 1 platform slide" class="wp-image-13080" style="width:708px;height:auto" srcset="https://blogit.create.pt/wp-content/uploads/2024/02/einsteinplatform.webp 708w, https://blogit.create.pt/wp-content/uploads/2024/02/einsteinplatform-300x153.webp 300w, https://blogit.create.pt/wp-content/uploads/2024/02/einsteinplatform-696x356.webp 696w" sizes="(max-width: 708px) 100vw, 708px" /><figcaption class="wp-element-caption">Image source &#8211; <a href="https://www.youtube.com/watch?v=Ew-xxNhhscU&amp;t=1528s">Dreamforce 2023 Main Keynote</a></figcaption></figure>
</div>


<p>Salesforce announced the&nbsp;<strong>Einstein Trust Layer</strong>, which seems to be the foundation of what&#8217;s behind the Einstein 1 Platform. This makes sense: since Salesforce&#8217;s core value is <strong>trust</strong>, this product might be&nbsp;the core that separates Salesforce&#8217;s AI functionality from other solutions. I&#8217;m very interested in learning more about each specific part they announced, like data masking, zero retention, and dynamic grounding. We should know what prompt is sent to the LLMs, how the <strong>CRM and other enterprise data</strong> are used in prompts (for dynamic grounding), and <strong>how exactly</strong> zero retention is implemented by each LLM provider (e.g. OpenAI). It&#8217;s great to see a slide explaining that the Einstein Trust Layer ensures the prompt is not used by the LLMs to train their models, but I think it&#8217;s important to challenge these claims and ask questions 🙂.</p>



<p>It would also be interesting to try to connect external models or cloud services, like LLaMA, Amazon SageMaker, or the Azure OpenAI Service. This way we could take advantage of the middleware inside the Trust Layer that guarantees security and input moderation. As partners, we have access to some Salesforce events, and I definitely recommend attending &#8220;<strong>AI &amp; Einstein Office Hours</strong>&#8221; on Tuesdays. Claudio Moraes from Salesforce shares extremely useful information about Einstein GPT, the details of the Trust Layer, and more AI topics!</p>



<p>Here are some interesting videos Salesforce has already published that explain the Einstein Trust Layer and Einstein GPT further:</p>



<ul class="wp-block-list">
<li><a href="https://www.youtube.com/watch?v=6KdUfqQ1szE">Build The Future of Business with Einstein GPT  </a></li>



<li><a href="https://www.youtube.com/watch?v=JuYwz5qkSbk">Salesforce AI day: Calling All Trailblazers</a></li>
</ul>



<p></p>



<h2 class="wp-block-heading">Einstein Copilot</h2>



<p>Well… as you know, 2023 was the year of copilots. Salesforce is also taking advantage of that wave and has <a href="https://www.salesforce.com/news/press-releases/2023/09/12/ai-einstein-news-dreamforce/">introduced Einstein Copilot and Einstein Copilot Studio</a>. This looks a lot like what they announced some time ago, Einstein GPT.</p>



<p>I think it&#8217;s great that Einstein is more integrated across various products, especially Commerce and Service Cloud. Reducing the manual work that the business has to do in various merchandising operations only brings benefits.</p>



<p></p>



<h2 class="wp-block-heading">Generative AI in SFCC Page Designer</h2>



<p>About the <a href="https://www.salesforce.com/plus/experience/Dreamforce_2023/series/commerce_at_dreamforce_2023/episode/episode-s1e3">generative Page Designer that they announced</a>, we could just say &#8220;yup, LGTM&#8221; and simply use it… but I do have some questions that need answering 😅. The demo they showed looks good, and it&#8217;s probably a feature that brings tons of value to a lot of customers. But watching it sent my mind through endless questions. At first glance it&#8217;s amazing: we could really speed up the development of new components. UI/UX designers, merchandisers, and other business profiles can ask an AI to generate a component without a dev.</p>



<p>Still, as developers I think we need to understand what is happening underneath: where the generated code is saved, what prompt is used to generate the component, whether it has context of the whole codebase, etc. This might have been the most interesting announcement and something to watch out for. However, we must also be critical and understand the limitations of this technology. In the demo, the user enters a brief description of the layout and content they want in the component, and the code is generated in React (it also supports ISML). I don&#8217;t believe React is actually used in SFRA, only in the headless architecture, so their demo raises some questions about how it works underneath.</p>



<p>But anyway, I look at this as if it were <a href="https://v0.dev/">v0.dev</a>, that is, the <strong>1st version, or v0, of a component</strong>. Business or UI designers can experiment and generate several iterations of a component with this functionality, and then paste the generated code into a user story in a scrum board for a developer to retrieve. But it would just be a v0, something to be iterated on before going to production. I don&#8217;t think we&#8217;re ready yet for generative AI to know the website&#8217;s design system, know what caching considerations to apply to the component, or have context about the logic behind add to cart if it&#8217;s integrated with a 3rd party vendor, etc. But we could also be optimistic: in all the use cases where gen AI struggles, it&#8217;s only going to improve.</p>



<p><strong>Note</strong>: Page Designer&#8217;s Generative AI is expected to launch in beta in February 2024.</p>



<p></p>



<h2 class="wp-block-heading">Commerce Concierge</h2>



<p>Salesforce also announced a new product that is in beta called Commerce Concierge. Basically, the product is centered on the concept of Conversational Commerce. We can integrate our e-commerce website into WhatsApp as if it were one of those ChatGPT plugins that allow you to make purchases on a website.</p>



<p>For example, a customer takes a photo of a product and asks if there is a store with stock so they can make the purchase. If the customer happens to make a purchase on WhatsApp, the bot can respond with new product suggestions, trying to effectively cross-sell. Technically, Commerce Concierge uses APIs from Commerce Cloud, Service Cloud, Data Cloud, and the AI layer with Einstein GPT. At the moment I think it will only be launched for Commerce B2B, not B2C. This makes sense to me since it is easier to integrate Clouds that are already within the Salesforce Core Platform&#8230; but I hope we get this for SFCC in the future 🙂. I also think it will be possible to expand to more touchpoints, not just WhatsApp.</p>



<p>It&#8217;s also important to be aware of new attack vectors. Innovation is awesome and we should strive for more and tinker with these new capabilities. However, innovation can bring new security concerns. In these apps that will be available straight from WhatsApp or our storefront, we should be aware of <a href="https://learnprompting.org/docs/prompt_hacking/injection">Prompt injection</a>, <a href="https://learnprompting.org/docs/prompt_hacking/jailbreaking">Jailbreaking</a>, etc.</p>



<p></p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>It was a great event and I can&#8217;t wait to get my hands on some Einstein GPT features for B2C! Hopefully, SFCC will get more of the features that are currently only available to B2B and D2C customers 🙏. For example, generating product descriptions or commerce intelligence 🙂.</p>



<p>Generating product descriptions with Einstein seems to be really useful if we have a batch of new products that we want to launch. It would be great to have this available as an HTTP API, so that we could integrate this functionality in use cases where our PIM (Product Information Management) is a custom system, or just use it in SFCC. Again&#8230; for now, some features are only on the B2B and D2C side of the commerce products (the ones integrated in the Salesforce Core Platform).</p>



<p>If you&#8217;re interested in learning about session management in SFCC <a href="https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/">read this blog post</a>.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2024/02/05/dreamforce-2023-highlights/">Dreamforce 2023 Highlights</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2024/02/05/dreamforce-2023-highlights/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to use SEO meta tag rules for Page Designer in SFCC</title>
		<link>https://blogit.create.pt/davidpereira/2022/09/07/seo-meta-tag-rules-sfcc-page-designer/</link>
					<comments>https://blogit.create.pt/davidpereira/2022/09/07/seo-meta-tag-rules-sfcc-page-designer/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Wed, 07 Sep 2022 17:18:48 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[salesforce]]></category>
		<category><![CDATA[seo]]></category>
		<category><![CDATA[sfcc]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12745</guid>

					<description><![CDATA[<p>SEO meta tag rules module in SFCC can be customized to support all page designer pages! Learn how in this blog post.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2022/09/07/seo-meta-tag-rules-sfcc-page-designer/">How to use SEO meta tag rules for Page Designer in SFCC</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>We have a great feature available to us in Salesforce Commerce Cloud (SFCC) called <a href="https://trailhead.salesforce.com/en/content/learn/modules/b2c-seo-meta-tags/b2c-seo-meta-tag-explore-rules" target="_blank" rel="noreferrer noopener">SEO meta tag rules</a>. This is a module inside SEO in Business Manager (BM). However, currently <strong>only <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/page_designer/b2c_use_page_meta_tag_rules_for_pd.html" target="_blank" rel="noreferrer noopener">PDP/PLP pages made in Page Designer</a></strong> support these meta tag rules.</p>



<p>In this blog post we&#8217;ll see how to support SEO meta tag rules for <strong>all pages made in page designer</strong>.</p>



<h2 class="wp-block-heading">The problem</h2>



<p>Before going any further, I want to mention some disclaimers. At the time of writing this, not all pages in page designer support this. This could be added to SFCC and be directly supported without custom development. I posted this idea as a <a href="https://ideas.salesforce.com/s/idea/a0B8W00000JJF4rUAH/support-all-page-designer-pages-for-seo-meta-tag-rules" target="_blank" rel="noreferrer noopener">feature enhancement for Salesforce in IdeaExchange</a>.</p>



<p>With that said, let&#8217;s take a step back and understand the problem.</p>



<p>The SEO experts in a business operate mostly on the SEO meta tag rules BM module. If they need a more granular level of SEO customization, most System Objects (e.g. Products, Content Assets, Categories) in SFCC support SEO fields like <code>pageUrl</code>, <code>pageTitle</code>, and <code>pageDescription</code>. SEO experts can define rules that act as fallbacks: if merchants don&#8217;t define these fields for particular categories or pieces of content, these rules still add some meta tags to the page.</p>
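<p>To make the fallback behavior concrete, here is a minimal sketch in plain JavaScript. The <code>resolvePageTitle</code> helper is hypothetical; the real BM module evaluates rule expressions, but the precedence is the same:</p>

```javascript
// Sketch of the fallback described above: prefer the SEO field a merchant
// set on the object itself; otherwise use the value produced by the
// site-wide meta tag rule. Names are illustrative, not SFCC APIs.
function resolvePageTitle(content, ruleValue) {
    return content.pageTitle && content.pageTitle.length > 0
        ? content.pageTitle
        : ruleValue;
}
```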



<p>Now, even though this feature supports many types of pages, like the home page, product pages, or product listing pages, it doesn&#8217;t support all types of Page Designer (PD) pages. You can also take a look at the <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FDWAPI%2Fscriptapi%2Fhtml%2Fapi%2Fclass_dw_experience_Page.html" target="_blank" rel="noreferrer noopener">Page class</a> and check that it doesn&#8217;t have the <code>pageMetaTags</code> field like the <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FDWAPI%2Fscriptapi%2Fhtml%2Fapi%2Fclass_dw_content_Content.html" target="_blank" rel="noreferrer noopener">Content class</a> does. Page Designer pages are very similar to Content Assets, in the sense that, underneath, <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC2/topic/com.demandware.dochelp/content/b2c_commerce/topics/page_designer/b2c_pg_comp_types_content_assets.html" target="_blank" rel="noreferrer noopener">these pages are persisted as the same content objects in SFCC&#8217;s database</a>. But there are still differences, and the support for meta tag rules is one of them.</p>



<p>Ultimately, these differences make it harder for merchants or SEO experts who want to leverage the same functionality available on the rest of the site.</p>



<p>Now that you have a better understanding of the problem, let&#8217;s shift our focus to a solution.</p>



<h2 class="wp-block-heading">The solution</h2>



<p></p>



<h3 class="wp-block-heading">Content Assets</h3>



<p>The whole premise of this solution is that it&#8217;s supported by the SEO meta tag rules module of BM. In this module we have support for the following:</p>



<ul class="has-regular-font-size wp-block-list">
<li>Homepage</li>



<li>Product pages</li>



<li>Content Detail pages</li>



<li>Content Listing pages</li>
</ul>



<p>The only one that fits well with a Page Designer page is the <strong>Content Detail page</strong>, meaning we&#8217;ll create content assets as a way to support these SEO rules for Page Designer. Each page can have its corresponding content asset, where the content ID follows a naming convention of appending &#8220;-seo&#8221; to the page ID. This way, in <code>Page.js</code>, you can get the meta tags for that page through that content asset. Here is an example of the <code>Page-Show</code> extension:</p>



<pre class="wp-block-code"><code>server.prepend("Show", function (req, res, next) {
    var PageMgr = require("dw/experience/PageMgr");
    var ContentMgr = require("dw/content/ContentMgr");
    var ContentModel = require("*/cartridge/models/content");
    var pageMetaHelper = require("*/cartridge/scripts/helpers/pageMetaHelper");

    var page = PageMgr.getPage(req.querystring.cid);
    if (page != null &amp;&amp; page.isVisible()) {
        // The SEO content asset follows the "&lt;pageID&gt;-seo" naming convention
        var pageContent = ContentMgr.getContent(page.ID + "-seo");
        if (pageContent) {
            var content = new ContentModel(pageContent, "content/content");

            pageMetaHelper.setPageMetaData(req.pageMetaData, content);
            pageMetaHelper.setPageMetaTags(req.pageMetaData, content);
        }
    }
    next();
});</code></pre>



<p>Now, you don&#8217;t need to create a content asset for every single page from PD. For example, if your business doesn&#8217;t use the syntax <code>${Content.pageTitle}</code> in your meta tag rules, or any expression regarding the <code>Content</code> object, then you could simply create one content asset for each folder the business wants to have. You might be wondering why I&#8217;m mentioning folders and how they fit into the solution, so let&#8217;s take a deeper look.</p>
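<p>The lookup order this implies (page-specific asset first, then a folder-level one) can be sketched like this. <code>getContent</code> stands in for <code>dw/content/ContentMgr.getContent</code>, and the object shapes are simplified:</p>

```javascript
// Try the page-specific SEO content asset first, then fall back to an
// asset named after one of the page's folders. getContent is a stand-in
// for dw/content/ContentMgr.getContent; object shapes are simplified.
function resolveSeoContent(page, getContent) {
    var pageLevel = getContent(page.ID + "-seo");
    if (pageLevel) {
        return pageLevel;
    }
    for (var i = 0; i < page.folders.length; i++) {
        var folderLevel = getContent(page.folders[i].ID + "-seo");
        if (folderLevel) {
            return folderLevel;
        }
    }
    return null;
}
```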



<p></p>



<h3 class="wp-block-heading">Folders</h3>



<p>Folders are the way you can group multiple pages and make all of them inherit the same meta tag rules. Once a folder is created, the business can use it in the SEO meta tag rules module inside BM. Imagine you have a group of pages that are part of the same marketing campaign. As an SEO expert, of course you&#8217;d like to have page titles, descriptions, open graph, and other meta tags for all of them. But maybe the content team hasn&#8217;t reached the maturity level to configure this for every single page. So you create a meta tag rule that acts as a fallback in case the content team doesn&#8217;t configure some pages.</p>



<p></p>



<p>So far, the solution revolves around the business manually creating folders, assigning them to pages, and creating content assets for every page (in case they want to leverage page properties) or every folder. I believe we can improve on this solution, so let&#8217;s jump into some automation!</p>



<h3 class="wp-block-heading">Automation</h3>



<p>This is a critical step, since having all this manual work doesn&#8217;t make sense for business people. As engineers we can do better and automate the creation of these content assets. We can develop two different pieces that play together:</p>



<ul class="wp-block-list">
<li>A <strong>job</strong> to ensure every page from Page Designer has its associated content asset and folder assignments</li>



<li>A <strong>new BM module</strong> to automate folder creation, specifically creating multiple folders</li>
</ul>



<p>Let&#8217;s go through the job first; it&#8217;s responsible for updating the content assets associated with each page. This means creating the content asset if it doesn&#8217;t already exist, then assigning it to the same folders as the page. To implement this job you need to:</p>



<ol class="wp-block-list">
<li>Get the list of all Page Designer pages</li>



<li>Iterate through all pages</li>



<li>For each page, create the content asset and assign the appropriate folders</li>
</ol>



<p>Here is a code snippet (in ES6) representing a possible implementation:</p>



<pre class="wp-block-code"><code>const libraryGateway = require("*/cartridge/scripts/gateways/libraryGateway")
const pagesList = getAllPageDesignerPages()

pagesList.forEach(page =&gt; {
    const contentId = `${page.ID}-seo`
    const result = libraryGateway.createContentAsset({
        id: contentId,
        pageTitle: page.pageTitle
    })
    if (result.error) {
        throw new Error("Error creating content asset")
    }
    
    page.folders.forEach(folder =&gt; {
        const assignResult = libraryGateway.assignContentAssetToFolder(contentId, folder.ID)
        if (assignResult.error) {
            throw new Error("Error assigning content asset to folder " + folder.ID)
        }
    })
})

function getAllPageDesignerPages() {
    const ContentSearchModel = require("dw/content/ContentSearchModel")
    const apiContentSearchModel = new ContentSearchModel()
    const libraryID = "someId"
    
    apiContentSearchModel.setRecursiveFolderSearch(true)
    apiContentSearchModel.setFilteredByFolder(false)
    apiContentSearchModel.setFolderID(libraryID)
    
    apiContentSearchModel.search()
    const contentSearchResultIterator = apiContentSearchModel.getContent()
    const count = Number(apiContentSearchModel.getCount())
    
    const pages = &#91;]
    if (contentSearchResultIterator &amp;&amp; count &gt; 0) {
        while (contentSearchResultIterator.hasNext()) {
            const contentResult = contentSearchResultIterator.next()
            if (contentResult?.page) {
                // transform some contentResult fields to other types...
                pages.push(contentResult)
            }
        }
    }
    
    return pages
}</code></pre>



<p>In the implementation above, getting all pages from Page Designer is done through the <code>ContentSearchModel</code> API. Although this works, it&#8217;s not ideal in my opinion, since these pages are required to be <strong>searchable</strong> (a setting on all pages) and <strong>online</strong>. If they aren&#8217;t, they won&#8217;t be in the <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC2/topic/com.demandware.dochelp/content/b2c_commerce/topics/search_and_navigation/b2c_index_creation.html">Content index</a>, which seems to be the only way to get all PD pages on the site. To create content assets and assign them to folders, we delegate that responsibility to the <code>libraryGateway</code> module. To implement this module we need to use OCAPI.</p>



<h3 class="wp-block-heading">OCAPI</h3>



<p>We can use the OCAPI Data API to create content assets/folders and assign content assets to folders. While researching, I tried to find another way, but I didn&#8217;t find anything easier. The <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/DWAPI/scriptapi/html/api/packageList.html?cp=0_20_2">Salesforce Commerce API</a> (the <em>dw</em> library accessible server-side) doesn&#8217;t seem to have APIs for these operations. I didn&#8217;t find any pipelets or job steps that do this either… perhaps by <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/DWAPI/jobstepapi/html/api/jobstep.ExportContent.html">exporting the content library</a>, then creating a custom job that reads that file, edits it with the new content objects, and finally runs a job step to import the edited file.</p>



<p>Interacting with OCAPI is not an expensive development effort, so you can develop a custom cartridge for this. We won&#8217;t go through the details of that cartridge here; perhaps in a separate blog post.</p>



<h3 class="wp-block-heading">OCAPI endpoints</h3>



<p>All the operations we want to do can be found in the Libraries resource of the Data API. To create a content asset we can use <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FOCAPI%2Fcurrent%2Fdata%2FResources%2FLibraries.html&amp;anchor=id893141162__id696504113" target="_blank" rel="noreferrer noopener">this endpoint</a>; to assign it to a folder we can use <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FOCAPI%2Fcurrent%2Fdata%2FResources%2FLibraries.html&amp;anchor=id893141162__id-2058222274" target="_blank" rel="noreferrer noopener">this endpoint</a>. One thing to keep in mind: creating a content asset is an <strong>idempotent</strong> PUT operation. It creates the object if it doesn&#8217;t already exist, but if it does, it ignores the existing object and writes a new one on top.</p>



<p>In practice, this means that if someone edits these content assets (e.g. through BM, locking the resource and updating the description field), that modification will be lost. If you don&#8217;t want to lose those edits, you should consider using the endpoint to get the content asset first, and use that data in your PUT request payload. In my case, these content assets are supposed to be &#8220;hidden&#8221;, so no one should need to edit them.</p>
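<p>A sketch of that GET-then-PUT merge, assuming plain JSON objects. The field names are illustrative, not the exact OCAPI content document shape:</p>

```javascript
// Merge the existing asset (fetched via GET) with the generated fields so
// that edits made in BM survive the PUT; fields the job owns overwrite
// the old values.
function buildPutPayload(existing, generated) {
    var payload = {};
    var key;
    for (key in existing) {
        payload[key] = existing[key]; // preserve fields edited in BM
    }
    for (key in generated) {
        payload[key] = generated[key]; // job-owned fields win
    }
    return payload;
}
```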



<h3 class="wp-block-heading">BM extension to create folders</h3>



<p>Now let&#8217;s discuss the custom BM module to create folders. If business people want to create multiple folders in one go, instead of doing it through the BM UI and then going to Page Designer to assign pages to folders, they input the folder names in an input box and click a button. We can simply develop a web page that has this input box and a button (and some instructions on how to use it). This is possible by <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/site_development/b2c_customize_business_manager.html">extending BM</a> and building a custom cartridge with this UI and the controller that executes the creation of folders. We won&#8217;t go too much into detail about this custom cartridge; I&#8217;ve added additional links at the bottom to help you build it.</p>
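<p>The input-parsing step of that controller could look like the sketch below; the actual folder creation would then go through OCAPI as discussed above:</p>

```javascript
// Split the form input into clean folder IDs: one per line or
// comma-separated, ignoring blanks and surrounding whitespace.
function parseFolderNames(input) {
    return input
        .split(/[\n,]/)
        .map(function (name) { return name.trim(); })
        .filter(function (name) { return name.length > 0; });
}
```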



<h3 class="wp-block-heading">Alternative solutions to SEO meta tag rules</h3>



<p>In case the content object quotas are hit, we can consider building something entirely custom. What I mean is that we would no longer use the SEO meta tag rules module from BM. We would build our own piece of software to handle this scenario for <strong>all Page Designer page types</strong>. Of course, that means building software that does the same as Salesforce&#8217;s built-in BM module, considering: storage for these rules, meta tag definitions, and so on; and an API that at least exposes a way to get meta tags for a given page, taking into account API design, SLAs (important if you&#8217;ll call this in a middleware of the <code>Page.js</code> controller), etc.</p>



<p>This is a discussion you must have with your business/client, explaining the trade-offs of each scenario.</p>



<p>In my opinion, it&#8217;s generally better to reuse existing functionality or an off-the-shelf solution like a plugin cartridge: something standardized and known in the community, instead of rolling your own… but as always, this depends on the context we&#8217;re in. For SEO meta tag rules in SFCC in particular, we <a href="https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000Hk216SAB">can&#8217;t extend this module</a>. So if we really needed to support this feature and we hit the API quotas of the SFCC platform, we would consider building a cost-effective solution outside SFCC, considering the need for a custom parser for <code>if</code> statements, context variables… again, a discussion to be had with the business and architects.</p>



<h3 class="wp-block-heading">Improvements to this solution</h3>



<p>One important note about this solution is that pages would <strong>only inherit meta tags assigned to the default folder</strong> (primary folder). From my research, a content asset can only have one default folder, and that is the folder the meta tags come from. In my use case, the ideal was to set up a hierarchy where a page could have 3 folders and you could get all meta tags assigned to those folders. You do get some <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/search_engine_optimization/b2c_meta_tag_rules.html">hierarchy</a> (primary folder, then parent folders, up to root), but it&#8217;s not the exact behavior I needed.</p>
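<p>That primary-folder-then-parents lookup can be sketched as a simple walk up the folder chain, with plain objects standing in for <code>dw.content.Folder</code>:</p>

```javascript
// Walk from the primary (default) folder up to the root, collecting the
// folder IDs whose meta tag rules would be considered, in order.
function collectRuleFolders(primaryFolder) {
    var chain = [];
    var current = primaryFolder;
    while (current) {
        chain.push(current.ID);
        current = current.parent || null;
    }
    return chain;
}
```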



<h2 class="wp-block-heading">Conclusion</h2>



<p>In conclusion, you can develop a custom cartridge with this functionality and support it for your business or clients. In the future, this could be supported out of the box by SFCC. The greatest challenges were analyzing ways to extend the SEO meta tag rules module, finding an API to get all Page Designer pages, and understanding the limitations of our solution. Let us know in the comments if this feature is something you&#8217;d like to have, or vote and comment on the <a href="https://ideas.salesforce.com/s/idea/a0B8W00000JJF4rUAH/support-all-page-designer-pages-for-seo-meta-tag-rules">IdeaExchange post</a>. I hope this has been helpful.</p>



<p>Check out my other blog post on <a href="https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/" target="_blank" rel="noreferrer noopener">session management for SFCC</a>.</p>



<h2 class="wp-block-heading">Additional links</h2>



<p>Here are some links to documentation and other resources that can help you build this feature for your business:</p>



<ul class="wp-block-list">
<li><a href="https://ideas.salesforce.com/s/idea/a0B8W00000JJF4rUAH/support-all-page-designer-pages-for-seo-meta-tag-rules">IdeaExchange&#8217;s post</a></li>



<li><a href="https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000IRNXRSA5">How to get a list of pages of Page Designer using code?</a></li>



<li><a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/OCAPI/current/data/Resources/Libraries.html">OCAPI Data Libraries resource</a></li>



<li><a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/content/b2c_commerce/topics/admin/b2c_configuring_a_business_manager_site.html">Configuring the Business Manager Site</a></li>
</ul>
<p>The post <a href="https://blogit.create.pt/davidpereira/2022/09/07/seo-meta-tag-rules-sfcc-page-designer/">How to use SEO meta tag rules for Page Designer in SFCC</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2022/09/07/seo-meta-tag-rules-sfcc-page-designer/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Reactathon 2022 &#8211; A Short Summary</title>
		<link>https://blogit.create.pt/davidpereira/2022/07/01/reactathon-2022-a-short-summary/</link>
					<comments>https://blogit.create.pt/davidpereira/2022/07/01/reactathon-2022-a-short-summary/#respond</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Fri, 01 Jul 2022 13:55:45 +0000</pubDate>
				<category><![CDATA[Web]]></category>
		<category><![CDATA[Misc]]></category>
		<category><![CDATA[React]]></category>
		<category><![CDATA[Reactathon]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12734</guid>

					<description><![CDATA[<p>Let's take a look at the new updates on the React community, specifically Reactathon 2022.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2022/07/01/reactathon-2022-a-short-summary/">Reactathon 2022 &#8211; A Short Summary</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Lately we&#8217;ve been filled with cool stuff in the React community. In case you missed it, <a href="https://www.reactathon.com/" target="_blank" rel="noreferrer noopener">Reactathon </a>happened at the beginning of May, and with it came a lot of interesting talks and discussions between people in the community.</p>



<p>In this post I&#8217;ll reference what I consider to be the most interesting and hot topics on the React community&#8230; and try to make it short.</p>



<p>If you haven&#8217;t seen the conference, you can take a look at this<a href="https://www.youtube.com/playlist?list=PLRvKvw42Rc7O0eWo2m_guXdZsGTEQM_jj" target="_blank" rel="noreferrer noopener"> YT playlist for all sessions</a>.</p>



<h2 class="wp-block-heading">The state of React 2022</h2>



<p>So first of all, what is the state of React currently? With <a href="https://reactjs.org/blog/2022/03/29/react-v18.html">React 18</a>, what was once called concurrent mode is now concurrent features. This changes the approach into an <strong>incremental adoption</strong>, so that you could use <strong>concurrent features</strong> in specific places of your React app.</p>



<p>In this talk, Lee also announces Next.js&#8217;s new routing system, which resembles Remix a lot. They want to take advantage of <strong>nested routes</strong>, which is great! This allows us to provide a better user experience on pages that have one component that blocks rendering.</p>



<p>Last but not least, there are new developments when it comes to <strong>server-side rendering</strong>, and new <strong>client-side rendering APIs</strong>.</p>



<h2 class="wp-block-heading">Edge computing</h2>



<p>If you are unaware of this type of compute, edge computing allows for a better user experience because it reduces the time it takes to get a response with the content you want to visualize. The time is reduced because, to process your request, you don&#8217;t need to &#8220;talk&#8221; to a distant server on the other side of the ocean.</p>



<p>CDN providers like Cloudflare are building new runtimes to let you run code closer to your customers &#8211; on the edge. Cloudflare Workers, for example, don&#8217;t use Node.js or Deno under the hood; it&#8217;s their own JS runtime. Of course, an effort is being made to <a href="https://blog.cloudflare.com/introducing-the-wintercg/" target="_blank" rel="noreferrer noopener">standardize these runtimes</a>.</p>



<p>In this talk, Kent talks about how Remix improves the developer experience for edge computing. First, they use the <a href="https://developer.mozilla.org/pt-BR/docs/Web/API/Fetch_API" target="_blank" rel="noreferrer noopener">Web Fetch API</a>; then, depending on where you want to deploy your function, Remix translates the request/response objects to the respective platform&#8217;s API. They also support <strong>streaming on the edge</strong>, in order to send some content to the user quickly and then send the rest.</p>



<p>With that said, this is all in JS/TS or WASM land. At least I haven&#8217;t seen a lot of support for other languages and runtimes (e.g. C#) on services that provide edge computing.</p>



<h2 class="wp-block-heading">Streaming server components</h2>



<p>With React 18 it&#8217;s now possible to stream changes to the browser, with new APIs like Suspense that allow for asynchronous processing.</p>



<p>Why is this cool? Because we <strong>don&#8217;t want to block rendering with data fetching</strong>. When our component needs to fetch data before rendering what the user wants to see, we need to render a loading spinner&#8230; which ain&#8217;t cool.</p>



<p><strong>How about we initiate <em>fetches</em> before we render? This way the requests run in parallel and don&#8217;t block rendering.</strong> Streaming server rendering enables this, which is why it&#8217;s so awesome! Ryan Florence goes into more detail on how this is done in this talk: <a href="https://www.youtube.com/watch?v=95B8mnhzoCM" target="_blank" rel="noreferrer noopener">When to fetch: Remixing React Router</a>.</p>
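<p>The idea of starting the fetch before rendering can be sketched without any framework. This is a simplified version of the &#8220;resource&#8221; pattern used in Suspense demos, not React&#8217;s actual internals:</p>

```javascript
// Start the request immediately and expose a read() that either returns
// the data or throws the pending promise, which is the contract Suspense
// relies on to suspend and later retry rendering.
function createResource(promise) {
    var status = "pending";
    var result;
    var suspender = promise.then(
        function (value) { status = "done"; result = value; },
        function (error) { status = "error"; result = error; }
    );
    return {
        read: function () {
            if (status === "pending") { throw suspender; }
            if (status === "error") { throw result; }
            return result;
        }
    };
}

// Kick off the fetch before rendering starts, in parallel with other work.
var userResource = createResource(Promise.resolve({ name: "Ada" }));
```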



<p>Bear in mind, this is just one way of rendering. Last year at React Conf 2021 there was an intro session on this topic as well: <a href="https://www.youtube.com/watch?v=pj5N-Khihgc&amp;list=PLNG_1j3cPCaZZ7etkzWA7JfdmKWT0pMsa" target="_blank" rel="noreferrer noopener">Streaming Server Rendering with Suspense</a>. Another great session covers the different rendering patterns: <a href="https://www.youtube.com/watch?v=PN1HgvAOmi8" target="_blank" rel="noreferrer noopener">Advanced Rendering Patterns: Lydia Hallie</a>. It&#8217;s an amazing session to help you visualize the impact on performance and the trade-offs of each pattern.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2022/07/01/reactathon-2022-a-short-summary/">Reactathon 2022 &#8211; A Short Summary</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2022/07/01/reactathon-2022-a-short-summary/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Session management in Salesforce B2C Commerce Cloud</title>
		<link>https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/</link>
					<comments>https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/#comments</comments>
		
		<dc:creator><![CDATA[David Pereira]]></dc:creator>
		<pubDate>Mon, 06 Jun 2022 10:16:14 +0000</pubDate>
				<category><![CDATA[Misc]]></category>
		<category><![CDATA[salesforce]]></category>
		<category><![CDATA[sfcc]]></category>
		<guid isPermaLink="false">https://blogit.create.pt/?p=12711</guid>

					<description><![CDATA[<p>A look into how session management works in Commerce Cloud, from a real use case.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/">Session management in Salesforce B2C Commerce Cloud</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>Recently, I faced a challenge in our production system that was a bit annoying to solve. Who knew code could be deployed to a production environment and not work, even though it works on every other environment&#8230;</p>






<p>I did some digging and asked in forums whether session management differs between environments in Salesforce Commerce Cloud (SFCC). Unfortunately I didn&#8217;t find anything beyond <a href="https://help.salesforce.com/s/articleView?language=en_US&amp;type=1&amp;id=000359741" target="_blank" rel="noreferrer noopener">limits and good practices</a>, which didn&#8217;t help in this case.</p>



<p>Well, that&#8217;s life, right? What do you do? You decide to be pragmatic, because a feature that works and is in the hands of real people is far better than dreams of a perfect world.</p>



<h2 class="wp-block-heading">The session problem</h2>



<p>All you know is that the attributes you save on the <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FDWAPI%2Fscriptapi%2Fhtml%2Fapi%2Fclass_dw_system_Session.html" target="_blank" rel="noreferrer noopener">SFCC platform&#8217;s session object</a> are not available where you need them. For the sake of this example, let&#8217;s say we&#8217;re talking about a flow between the SFCC site and an API.</p>



<p>The code at the moment is something like this:</p>



<pre class="wp-block-code"><code>// apiGateway.js script
const apiResponse = { data: "something" };
session.privacy.myCustomField = JSON.stringify(apiResponse);</code></pre>



<p>The flow starts with the site calling this API and, some time later, receiving a callback from the API on a specific endpoint of the <em>SomeEndpoint.js</em> controller. Then the site makes an API call to get data (specific to a given user) for further processing, some of which is saved in the <strong>privacy</strong> field of the session object. As stated in the <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/topic/com.demandware.dochelp/DWAPI/scriptapi/html/api/class_dw_system_Session.html?resultof=%22%70%72%69%76%61%63%79%22%20%22%70%72%69%76%61%63%69%22%20%22%73%65%73%73%69%6f%6e%22%20" target="_blank" rel="noreferrer noopener">SFCC docs</a>, this field is a dictionary where we can store custom properties. The next step in the flow is to <strong>redirect to an action on the Login controller</strong> and, inside that action, retrieve the information that was stored.</p>



<pre class="wp-block-code"><code>// SomeEndpoint.js controller
res.redirect(URLUtils.https("Login-Show"));

// Login-Show action
var apiResponse = session.privacy.myCustomField ? JSON.parse(session.privacy.myCustomField) : "";
// use apiResponse for further processing</code></pre>



<p>The problem happens when the Login-Show action runs: <em>session.privacy.myCustomField</em> is <em>null</em>, so we can&#8217;t retrieve the information that was saved in the previous controller, before the redirect.</p>



<h2 class="wp-block-heading">Thinking about solutions</h2>



<p>First, you try to keep using the session object provided by SFCC, but save the data in another field called <em><a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2FDWAPI%2Fscriptapi%2Fhtml%2Fapi%2Fclass_dw_system_Session.html&amp;anchor=dw_system_Session_getCustom_DetailAnchor" target="_blank" rel="noreferrer noopener">custom</a></em>. The difference between <em>custom</em> and <em>privacy</em> is that data inside <em>custom</em> won&#8217;t be deleted when the user logs out.</p>



<p>Then we think of another approach, since storing the data in any field of the session might not work. So we try to store this information in <a href="https://documentation.b2c.commercecloud.salesforce.com/DOC1/index.jsp?topic=%2Fcom.demandware.dochelp%2Fcontent%2Fb2c_commerce%2Ftopics%2Fcustom_objects%2Fb2c_custom_objects.html" target="_blank" rel="noreferrer noopener">Custom Objects</a> (CO). It&#8217;s important to note a crucial <strong>security detail with the custom objects approach</strong>: creating and retrieving these custom objects happens in different controller actions, so for the second action to retrieve the correct custom object holding the API response data, it needs the CO&#8217;s ID.</p>
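<p>A rough sketch of the custom-object approach, assuming a CO type named <em>ApiHandoff</em> with a string attribute <em>payload</em> has been defined in Business Manager (the type, attribute, and variable names here are illustrative; this only runs inside the SFCC platform):</p>

<pre class="wp-block-code"><code>// SomeEndpoint.js -- persist the API response in a custom object
var CustomObjectMgr = require("dw/object/CustomObjectMgr");
var Transaction = require("dw/system/Transaction");

// "key" is whatever ID the second action will use to look the CO up
Transaction.wrap(function () {
    var co = CustomObjectMgr.createCustomObject("ApiHandoff", key);
    co.custom.payload = JSON.stringify(apiResponse);
});

// Login-Show -- read it back (the key arrives as a query parameter),
// then delete it so the data doesn't linger
var co = CustomObjectMgr.getCustomObject("ApiHandoff", key);
if (co) {
    var apiResponse = JSON.parse(co.custom.payload);
    Transaction.wrap(function () {
        CustomObjectMgr.remove(co);
    });
}</code></pre>

<p>Note that all writes to custom objects must happen inside a transaction, hence the <em>Transaction.wrap</em> calls.</p>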



<p>Since moving from one action to the other happens through a redirect, we can only pass query parameters. On top of that, the second action is Login-Show, which ultimately renders HTML to the client. This means the <strong>custom object ID is publicly visible</strong> in the user&#8217;s address bar&#8230; not good! An attacker could simply type IDs into the URL and try to find a valid custom object containing some user&#8217;s data. Of course, in the Login-Show action we can retrieve the custom object and delete it right afterwards, but there is still a window where this data is accessible.</p>



<p>To prevent this exploit, we add code to the Login-Show action to validate that the user accessing the login page is in the same session that initiated the login flow&#8230; but wait, did I just say &#8220;same session&#8221;? If the session not being the same is exactly our problem, how can we guarantee that validating this session ID will work? Basically, we have no guarantees.</p>






<p>After a lot of thinking, experimenting, and reading docs, we decided to contact Salesforce by opening a support ticket. We were then able to schedule a meeting and demonstrate our problem. They investigated and got back to us with feedback on what was happening to the session during the flow described earlier.</p>



<h3 class="wp-block-heading">Final solution</h3>



<p>The final solution is&#8230; not to change any code at all. The problem was the <strong>hostname</strong> configured in the API that called the site. We simply configured it with the correct URL, the same domain the site uses when calling this API. To keep the correct session on the login page after the redirect, we made this change:</p>



<p><strong>BEFORE</strong>: https://<strong>production-zone-digitmarket.demandware.net</strong>/on/demandware.store/Sites-Awesome-Site/pt_PT/SomeEndpoint-Action</p>



<p><strong>AFTER</strong>: https://<strong>awesome-domain.com</strong>/on/demandware.store/Sites-Awesome-Site/pt_PT/SomeEndpoint-Action</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>Hopefully this post helped you understand how session management works in Salesforce B2C Commerce Cloud. The moral of the story: when code works everywhere except in one environment, it&#8217;s usually an environment <strong>configuration</strong> problem.</p>



<p>If you enjoyed this post or learned something from it, leave a comment with your thoughts.</p>
<p>The post <a href="https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/">Session management in Salesforce B2C Commerce Cloud</a> appeared first on <a href="https://blogit.create.pt">Blog IT</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://blogit.create.pt/davidpereira/2022/06/06/session-management-in-salesforce-b2c-commerce-cloud/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
	</channel>
</rss>
