﻿<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Tim Heuer</title>
  <id>https://www.timheuer.com/</id>
  <subtitle>The web site and blog of Tim Heuer, Program Manager for .NET and author of Alexa.NET and Callisto (a XAML UI Toolkit). A resource to learn how to develop software with .NET technologies. This blog provides information on how to get started with .NET, ASP.NET, Blazor, and other Microsoft developer technologies.</subtitle>
  <generator uri="https://github.com/madskristensen/miniblogtest" version="1.0">MiniBlogCore</generator>
  <updated>2026-01-28T02:37:10Z</updated>
  <entry>
    <id>https://www.timheuer.com/blog/rest-client-for-vs-code-endpoint-for-vs-code/</id>
    <title>Another selfish tool–Endpoint for VS Code</title>
    <updated>2026-01-28T02:37:10Z</updated>
    <published>2026-01-28T02:37:10Z</published>
    <link href="https://www.timheuer.com/blog/rest-client-for-vs-code-endpoint-for-vs-code/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="copilot" />
    <category term="rest" />
    <category term="api" />
    <category term="vscode" />
    <content type="html">&lt;p&gt;I recently was actually building a personal app that integrated with 5 different third-party APIs, each having different authentication requirements or API keys to navigate the calls. I normally would just use .http files and be done with it, but I’m a GUI person at heart and as much as I was iterating with the app and these services (across sandbox/prod environments too), navigating the single .http file just as raw text was getting frustrating for me honestly. I was already using the Rest Client for VS Code extension which is great and the absolute simplest and likely widely used. I tried a few other extensions in the marketplace but they have really shifted to be ‘enterprise’ and SaaS based and I just didn’t need all those capabilities or want a service.&lt;/p&gt;  &lt;p&gt;So I just decided to have Copilot help me and iterate on a plan and implementation…and decided to call it “&lt;strong&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=TimHeuer.vscode-endpoint"&gt;Endpoint&lt;/a&gt;&lt;/strong&gt;” and here we are.&lt;/p&gt;  &lt;p&gt;&lt;img title="Endpoint for VS Code" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of Endpoint for VS Code" src="https://storage2.timheuer.com/endpoint-1.png" width="1024" height="566" /&gt;&lt;/p&gt;  &lt;p&gt;It’s basically a REST client, but biased hard toward *staying inside the editor* and keeping your requests portable.&lt;/p&gt;  &lt;h2&gt;What problem this is trying to solve (for me)&lt;/h2&gt;  &lt;p&gt;There are already a lot of ways to “test an endpoint.” This isn’t meant to be a hot take about existing tools. 
This is more like: &lt;em&gt;what do I personally want when I’m iterating fast and want a structured way of seeing output?&lt;/em&gt;&lt;/p&gt;  &lt;p&gt;For me the pain points are:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;I want an API client that feels like part of VS Code: same theme, same UX expectations, no “external app” vibe.&lt;/li&gt;    &lt;li&gt;I want requests to roam with me, or be shareable in the repo just like &lt;code&gt;.http&lt;/code&gt; files: if a request is useful, I want it in source control, not in a service.&lt;/li&gt;    &lt;li&gt;I want multi-step flows to be easy: the classic “call login, grab token, call the real endpoint” loop.&lt;/li&gt;    &lt;li&gt;I want &lt;em&gt;some&lt;/em&gt; level of compatibility with the &lt;code&gt;.http&lt;/code&gt; format because I do use multiple tools.&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;Endpoint is my attempt at optimizing those loops &lt;strong&gt;for me&lt;/strong&gt;.&lt;/p&gt;  &lt;h3&gt;But don’t &lt;code&gt;.http&lt;/code&gt; files already do this?&lt;/h3&gt;  &lt;p&gt;Yep — and that’s actually part of the point. I like &lt;code&gt;.http&lt;/code&gt; files because they’re &lt;strong&gt;portable&lt;/strong&gt;, &lt;strong&gt;diffable&lt;/strong&gt;, and &lt;strong&gt;live well in source control&lt;/strong&gt;. If all you need is “a request in a file that I can run,” &lt;code&gt;.http&lt;/code&gt; is a great answer and &lt;a href="https://marketplace.visualstudio.com/items?itemName=humao.rest-client"&gt;REST Client&lt;/a&gt; is a great simple tool! &lt;/p&gt;  &lt;p&gt;Endpoint leans into that by supporting &lt;strong&gt;import/export&lt;/strong&gt; so you can move between the file format and the GUI workflow. In other words: even if you don’t start in &lt;code&gt;.http&lt;/code&gt;, you can end up there (and vice versa). 
Endpoint isn’t trying to replace that model; it’s a selfish tool for GUI lovers who want the same workflow made easier, more intuitive, and graphical:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;&lt;strong&gt;A native GUI for editing&lt;/strong&gt; params/headers/auth/body without living in raw text all day&lt;/li&gt;    &lt;li&gt;&lt;strong&gt;Collections + defaults&lt;/strong&gt; (shared headers/auth/variables) so you don’t repeat yourself&lt;/li&gt;    &lt;li&gt;&lt;strong&gt;Environments&lt;/strong&gt; that are quick to switch (and don’t accidentally leak secrets into git)&lt;/li&gt;    &lt;li&gt;&lt;strong&gt;Chaining + pre-requests&lt;/strong&gt; for the auth/multi-step reality of modern APIs&lt;/li&gt;    &lt;li&gt;&lt;strong&gt;Code snippets&lt;/strong&gt; if you need a bridge from “it works” to “ship it in the app”&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;So if you already have &lt;code&gt;.http&lt;/code&gt; files you love, cool — keep them. Endpoint is me acknowledging that the file format is great, but I personally wanted fewer papercuts while iterating.&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;Why not just persist everything as &lt;code&gt;.http&lt;/code&gt; all the time?&amp;#160; Mostly because the GUI needs a structured model (headers on/off, auth type fields, body mode, collection defaults/inheritance, secret handling, etc.). You &lt;em&gt;can&lt;/em&gt; represent a lot of that in text, but you quickly end up either losing fidelity or inventing extra conventions. I chose to persist a richer model for the day-to-day workflow, and then use import/export as the compatibility layer when you want the portable file representation.&lt;/p&gt; &lt;/blockquote&gt;  &lt;h3&gt;What about other existing GUI tools?&lt;/h3&gt;  &lt;p&gt;Yep, there is a good set of incredibly rich ones out there. Some are mostly freemium models, though, and some may not be usable in certain environments because of organizational policies. 
These are all fantastic tools, but they didn’t fit my every need, so I just selfishly wanted my own flow…which I acknowledge may not work for anyone else’s need :-).&amp;#160; But by all means, the ones out there are super popular, incredibly powerful, and do way more for advanced scenarios.&lt;/p&gt;  &lt;h2&gt;What Endpoint gives me (in practice)&lt;/h2&gt;  &lt;p&gt;At a high level, it’s a request builder + response viewer that stays inside VS Code. The part I care about isn’t the checklist of features — it’s that the whole loop (edit → send → inspect → tweak → repeat) happens without me leaving the editor.&lt;/p&gt;  &lt;p&gt;&lt;img title="Endpoint split panel for request/response" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of the Endpoint split pane" src="https://storage2.timheuer.com/endpoint-2.png" width="943" height="778" /&gt;&lt;/p&gt;  &lt;p&gt;The mental model is simple: &lt;strong&gt;Collections&lt;/strong&gt; for grouped project scopes and saved requests, &lt;strong&gt;Environments&lt;/strong&gt; for variables, and a simple split request/response view with the most important things at the surface.&lt;/p&gt;  &lt;p&gt;The small set of things I reach for most:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;Variables + &lt;code&gt;.env&lt;/code&gt; support so I’m not hardcoding base URLs or keys (values come from .env files or VS Code SecretStorage)&lt;/li&gt;    &lt;li&gt;Repeatable/shared/inherited properties for headers and auth&lt;/li&gt;    &lt;li&gt;Chaining / pre-requests for “login then use token” flows&lt;/li&gt;    &lt;li&gt;Roaming saved collections across machines – even when I don’t want to persist them to a team repo yet&lt;/li&gt;    &lt;li&gt;Export/import for when I want to serialize requests for broader sharing&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;I’m still iterating, but those cover 90% of my day-to-day.&lt;/p&gt;  
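The “code snippets” bridge mentioned above is easiest to picture with an example. Here is a minimal sketch of the kind of C# an API client can hand you once a request works; everything in it (the header name, URLs, and environment variable names) is hypothetical, not Endpoint’s actual output:

```csharp
// Hypothetical sketch of a generated snippet: plain HttpClient with the base
// URL and API key pulled from environment variables rather than hardcoded,
// mirroring how the request was parameterized in the client.
using var client = new HttpClient();
client.DefaultRequestHeaders.Add("X-Api-Key",
    Environment.GetEnvironmentVariable("MY_API_KEY"));

var response = await client.GetAsync(
    Environment.GetEnvironmentVariable("MY_API_BASE") + "/v1/items");
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());
```

The point of the bridge is just that “it works” in the GUI becomes “ship it” with the same variable indirection intact.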
&lt;h2&gt;Summary&lt;/h2&gt;  &lt;p&gt;As I mentioned, this isn’t meant to replace every API tool ever. Nearly every tool I create starts for extremely selfish reasons. It’s optimized for the thing I do the most: tight inner-loop iteration while already living in VS Code.&lt;/p&gt;  &lt;p&gt;If you need heavyweight collaboration features, deep test scripting, or a bunch of external integrations, this isn’t going to be it and you might still prefer something else.&lt;/p&gt;  &lt;p&gt;But if your primary pain is “I just want to hit this endpoint while I’m validating, and I don’t want to leave the editor, and I want the same editor UI,” then this has been a meaningful productivity boost for me, and maybe it will be for you. &lt;/p&gt;  &lt;p&gt;&lt;a href="vscode:extension/TimHeuer.vscode-endpoint"&gt;&lt;img alt="Install in VS Code" src="https://img.shields.io/badge/Install%20in-VS%20Code-0098FF?style=flat-square&amp;amp;logo=visualstudiocode&amp;amp;logoColor=white" /&gt; &lt;/a&gt;&lt;a href="vscode-insiders:extension/TimHeuer.vscode-endpoint"&gt;&lt;img alt="Install in VS Code Insiders" src="https://img.shields.io/badge/Install%20in-VS%20Code%20Insiders-24bfa5?style=flat-square&amp;amp;logo=visualstudiocode&amp;amp;logoColor=white" /&gt; &lt;/a&gt;&lt;/p&gt;  &lt;p&gt;Feel free to try it out and log issues as you face them using “Report Issue” in VS Code!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/my-ai-copilot-developer-workflow-relies-on-planning/</id>
    <title>AI in My Developer Workflow: From Prompting to Planning</title>
    <updated>2026-01-27T17:37:50Z</updated>
    <published>2026-01-27T17:34:51Z</published>
    <link href="https://www.timheuer.com/blog/my-ai-copilot-developer-workflow-relies-on-planning/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="ai" />
    <category term="llm" />
    <category term="copilot" />
    <content type="html">&lt;p&gt;When I first started using AI in my developer workflow, I treated it like a smarter search engine.&lt;/p&gt;  &lt;p&gt;Short prompts. Minimal context. Very atomic asks. And usually in the editor as a comment, triggering the completions flow is where I operated!&lt;/p&gt;  &lt;p&gt;“Write me a regex for X.”    &lt;br /&gt;“Why is this failing?”     &lt;br /&gt;“Convert this to C#.”&lt;/p&gt;  &lt;p&gt;&lt;img title="Comments as Prompt" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Image showing using comments as a prompt" src="https://storage2.timheuer.com/ai-blog-image-1.png" width="1323" height="163" /&gt;&lt;/p&gt;  &lt;p&gt;Sometimes it worked. Often it didn’t. And when it didn’t, my instinct was to tweak a word or two and try again, the same way I would refine a Google query. That mindset held me back longer than I realized.&lt;/p&gt;  &lt;h2&gt;Early Exploration: Terse Prompts, Terse Results&lt;/h2&gt;  &lt;p&gt;Those early days were mostly frustration disguised as curiosity. I was optimizing for speed, not clarity. I would fire off a one-liner, get something half right, then either patch it myself or throw it away.&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;The ‘early days’ is also funny to think about how fast things are moving. I’ll acknowledge that the models I was using as an early adopter also were not as advanced as they are as of this writing in January 2026, nor will be in a month, 2 months, 3 months from now! The breadth of models and their capabilities is one of the rapid accelerations in this space.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;What I didn’t appreciate at the time was that I was giving the model no room to reason. I was asking for outcomes without offering intent. No constraints. No tradeoffs. No plan.&lt;/p&gt;  &lt;p&gt;AI isn’t terrible at this, but it is also not where it shines. 
I was leaving a lot of capability on the table. &lt;/p&gt;  &lt;h2&gt;The Shift: Planning First, Prompting Second&lt;/h2&gt;  &lt;p&gt;The biggest unlock for me wasn’t a new model. It was planning. I saw peers like &lt;a href="https://x.com/pierceboggan"&gt;Pierce Boggan&lt;/a&gt; really leverage a two-phased approach using custom prompts (what we called ‘chat modes’ earlier) around Planning and Implementation.&amp;#160; Pierce shared some early iterations of how he did that, and I found myself really starting to switch to these modes (here is what I used to use: &lt;a href="https://gist.github.com/timheuer/d3a9544b689360784305ebcf94d974f3"&gt;Planning&lt;/a&gt;, &lt;a href="https://gist.github.com/timheuer/0e09a4c37e1e2b8999cb0745560aa768"&gt;Implementation&lt;/a&gt;).&lt;/p&gt;  &lt;p&gt;Once I started explicitly asking the AI to plan before writing code, everything changed. Instead of jumping straight to implementation, I would ask for a high-level approach, assumptions, risks, tradeoffs, and open questions. &lt;/p&gt;  &lt;p&gt;This became useful not just for greenfield projects, but especially for bigger features inside existing systems. The kind of work where you need to think about impact radius, backwards compatibility, and how something will age over time.&lt;/p&gt;  &lt;p&gt;This planning mode is now built into nearly every tool. There are some heavier-weight workflows like SpecKit and things that require some ‘constitution’ setup, and if that’s for you, that’s great. Those can also be reusable &lt;em&gt;inputs&lt;/em&gt; to any planning mode as well. 
For me, an open slate has been fine; I just iterate IN planning mode and ensure that I address any follow-ups.&lt;/p&gt;  &lt;p&gt;&lt;img title="VS Code Copilot Planning Chat" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of a plan chat in VS Code with Copilot" src="https://storage2.timheuer.com/ai-blog-image-2.png" width="1003" height="710" /&gt;&lt;/p&gt;  &lt;p&gt;The real step change came when I started persisting those plans as an artifact and task list (again, before task-tracking was in any of the tools I used). &lt;/p&gt;  &lt;p&gt;I now drop them into a &lt;code&gt;/docs/&lt;/code&gt; folder. Sometimes they are lightweight notes. Sometimes they look more like a product requirements document. Either way, they live alongside the code. That means they are reviewable, shareable, and reusable.&lt;/p&gt;  &lt;p&gt;Treating that conversation as an artifact was not only a context saver, but also a time saver! Those documents also become prompts. I didn’t have to rely on session memory and instead could come back later and start prompting with “Let’s work on part 3 of the plan now,” and Copilot could pick right up with all the context. When I come back days or weeks later, I can feed the plan back into the model and say, “Continue from here.” That continuity has been incredibly valuable. So when offered to ‘start implementation’ or save the plan…always opt to save the plan!&lt;/p&gt;  &lt;h2&gt;Agents and Longer-Lived Context&lt;/h2&gt;  &lt;p&gt;From there, moving into agents felt natural.&lt;/p&gt;  &lt;p&gt;I have been starting many projects using &lt;a href="https://gist.github.com/burkeholland/3be70206469b3e344aa11faf64109c6c"&gt;Burke Holland’s Opus Agent&lt;/a&gt;, and it was a great on-ramp. 
What clicked for me was not just the output, but the structure.&lt;/p&gt;  &lt;p&gt;&lt;img title="VS Code Custom Agent selection" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of choosing a custom agent in VS Code" src="https://storage2.timheuer.com/ai-blog-image-3.png" width="391" height="304" /&gt;&lt;/p&gt;  &lt;p&gt;Sub-agents handling focused tasks. Instructions that evolve as the project evolves. A clearer separation between thinking and doing. Instructions to persist new learnings for the benefit of later sessions. It can’t be stated enough how valuable this part of the prompt is.&amp;#160; Here’s the snippet:&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;&lt;em&gt;Each time you complete a task or learn important information about the project, you should update the `.github/copilot-instructions.md` or any `agent.md` file that might be in the project to reflect any new information that you've learned or changes that require updates to these instructions files.&lt;/em&gt;&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;That structure maps much more closely to how I actually work as a developer. Iterative. Layered. Occasionally opinionated.&lt;/p&gt;  &lt;p&gt;Context helpers are also essential. I’ve found &lt;a href="https://context7.com/"&gt;Context7&lt;/a&gt; (a first mover in the MCP server context race) to be fantastic for my needs so far! It serves as part of the researcher in this custom agent, fetching information about frameworks, documentation about guidelines, and blog posts that might be helpful, and reasoning with all of that to provide me with some well-rounded options. 
Seriously, use it.&lt;/p&gt;  &lt;p&gt;&lt;img title="Context7" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Image of the Context7 web site" src="https://storage2.timheuer.com/ai-blog-image-6.png" width="1024" height="584" /&gt;&lt;/p&gt;  &lt;h3&gt;Sessions, State, and Knowing When to Start Fresh&lt;/h3&gt;  &lt;p&gt;Another habit I have had to learn is session management.&lt;/p&gt;  &lt;p&gt;I now treat a new problem like opening a new terminal window. If I am switching domains, rethinking an approach, or starting a distinct feature, I open a new session.&lt;/p&gt;  &lt;p&gt;That reset matters. It avoids dragging along stale assumptions and accidental context. State is powerful, but only when it is intentional.&lt;/p&gt;  &lt;h3&gt;Different Models for Different Jobs&lt;/h3&gt;  &lt;p&gt;I also no longer believe there is a single best model. This is really where the massive advancements have taken place, in my opinion. GPT was amazing, until Claude Sonnet 3.x came out, until Claude Sonnet 4.5 came out, until Claude Opus 4.5 came out, until Gemini Pro 3, etc., etc.&lt;/p&gt;  &lt;p&gt;Right now, my personal defaults look like this:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;Opus 4.5 for coding and deeper technical reasoning&lt;/li&gt;    &lt;li&gt;Gemini Pro for UI exploration and visual-adjacent thinking&lt;/li&gt;    &lt;li&gt;“-mini” variants at times when I need speed&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;They have different strengths, and leaning into that has made the workflow feel more like a toolbox and less like a magic button. Your own mileage may vary depending on the tech, problem space, and tool you are using. My advice is to try them all; you’ll settle on one that works with your style, your tooling, and your preferred output. 
I focus on finding the one that helps me complete the ‘job to be done’ versus any bias I have on whether it is good for any given framework I may be familiar with.&lt;/p&gt;  &lt;h3&gt;Debug inner-loop&lt;/h3&gt;  &lt;p&gt;I’ve also gotten a lot more comfortable debugging with Copilot in my inner loop. If I have an error that wasn’t caught in the build (where Copilot would normally see and fix it), or a UI that isn’t quite right, or an output that is wrong…I just copy/paste that into the same fix session (or a new one if it’s a new problem) and sometimes just say “fix it.” Quite literally I’ve taken screenshots, pasted, said “fix it” and it does – interpreting the image, the issue, and scoping the fix. Amazing iterative process for most things! Heck, I even pasted Apple App Store rejection notes into a new session – “app got rejected, here are the notes” – and BOOM, with confidence it started to get to work: &lt;em&gt;Ah, I see where &amp;lt;appname&amp;gt; is violating this guideline, I’ll work on a fix…&lt;/em&gt; These moments make me smile every time.&lt;/p&gt;  &lt;h2&gt;An Honest Take: I Still Prefer a GUI&lt;/h2&gt;  &lt;p&gt;One thing worth saying out loud is that I still strongly prefer working in a GUI. Sorry, I’m just old I guess. &lt;/p&gt;  &lt;p&gt;A lot of AI tooling momentum right now is centered around the CLI. Agents that live in terminals. Prompts piped through commands. Workflows that assume you want to live in a shell all day.&lt;/p&gt;  &lt;p&gt;While I can do that, I do not particularly enjoy it. For me, the CLI experience is not intuitive.&lt;/p&gt;  &lt;p&gt;I am far more effective inside environments like VS Code or Visual Studio, where I already live. I can review code visually and contextually. I can leverage other extensions alongside AI. I can navigate files, diffs, tests, logs, and resources in one place. I can reason about the project as a whole, not just a stream of text. That familiarity matters. 
AI works best for me when it is embedded into that environment, not when it pulls me out of it. When I am already thinking about a feature, a refactor, or a bug, I want the AI to meet me there rather than forcing a context switch just to interact with it. It’s easier for me to mentally see other context, the relationship to my repo, quick diff reviews, etc.&lt;/p&gt;  &lt;p&gt;&lt;img title="VS Code Editor" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of a coding session in VS Code" src="https://storage2.timheuer.com/ai-blog-image-5.png" width="1024" height="570" /&gt;&lt;/p&gt;  &lt;p&gt;This also ties back to planning. Having plans, docs, and context living next to the code makes everything easier to review, validate, and evolve. The GUI is not just comfort. It is leverage.&lt;/p&gt;  &lt;p&gt;I know plenty of developers feel the opposite, and that is great. This is just what works best for me. And I acknowledge that, just like my transition to using AI, my methods of development will also evolve, I’m sure. I’m personally just not seeing a huge benefit to moving to a CLI-only flow for the development I do these days – I don’t need 10 terminal instances running at one time.&lt;/p&gt;  &lt;h2&gt;Acknowledging the Privilege&lt;/h2&gt;  &lt;p&gt;One thing I do not want to gloss over is that this workflow is enabled by paid plans. That matters. Not everyone can or should stack subscriptions just to experiment. 
&lt;/p&gt;  &lt;p&gt;&lt;img title="AI Model Selection in Visual Studio" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of model selection in Visual Studio" src="https://storage2.timheuer.com/ai-blog-image-4.png" width="484" height="525" /&gt;&lt;/p&gt;  &lt;p&gt;I am fortunate to be able to explore these tools deeply, and I try to stay conscious of that when talking about what has worked for me. I’m starting to pay closer attention to what feeds into my context window, either on purpose or accidentally, as I know it impacts token-based billing and the efficiency of the LLM as well. &lt;/p&gt;  &lt;h2&gt;Where I Have Landed&lt;/h2&gt;  &lt;p&gt;AI has not replaced how I build software. It has changed how I think while building it.&lt;/p&gt;  &lt;p&gt;I plan more.&amp;#160;&amp;#160; &lt;br /&gt;I am clearer about intent.    &lt;br /&gt;I have found a ‘peer’ to communicate with, not command. The more I can express as I would with a co-worker, the greater success I find.&lt;/p&gt;  &lt;p&gt;And ironically, those improvements would still pay off even if the AI disappeared tomorrow.&lt;/p&gt;  &lt;p&gt;That might be the biggest takeaway for me. The most valuable part of integrating AI into my workflow was not automation. It was becoming a more deliberate developer.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/add-environment-variables-to-aspire-services/</id>
    <title>Adding environment vars to .NET Aspire services</title>
    <updated>2023-11-29T00:49:10Z</updated>
    <published>2023-11-29T00:49:10Z</published>
    <link href="https://www.timheuer.com/blog/add-environment-variables-to-aspire-services/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="aspire" />
    <content type="html">&lt;p&gt;Have you heard about &lt;strong&gt;&lt;a href="https://aka.ms/dotnet-aspire"&gt;.NET Aspire&lt;/a&gt;&lt;/strong&gt; yet? If not, go &lt;a href="https://aka.ms/dotnet-aspire"&gt;read&lt;/a&gt;, then maybe &lt;a href="https://youtu.be/z1M-7Bms1Jg?si=c-uGxOyRZ7eZ7iG9"&gt;watch&lt;/a&gt;. It’s okay I’ll wait.&lt;/p&gt;  &lt;p&gt;Ok, great now that you have some grounding, I’m going to share some tips time-to-time of things that I find delightful that may not be obvious.&amp;#160; In this example I’m using the default .NET Aspire application template and added an ASP.NET Web API with enlisting into the orchestration. What does that mean exactly? Well the AppHost project (orchestrator) now has a reference to the project like so:&lt;/p&gt;  &lt;pre class="brush: csharp; toolbar: false; highlight: [3];"&gt;var builder = DistributedApplication.CreateBuilder(args);

builder.AddProject&amp;lt;Projects.WebApplication1&amp;gt;(&amp;quot;webapplication1&amp;quot;);

builder.Build().Run();
&lt;/pre&gt;

&lt;p&gt;When I run the AppHost it launches all my services, etc. Yes, this is a VERY simple case with only one service…I’m here to make a point, stay with me.&lt;/p&gt;

&lt;p&gt;If I add some Aspire components to my service, they may come with their own configuration information. Things like connection strings or configuration options for the components. A lot of times these will result in environment variables at deploy time that the components will read. You can see this if you run the app and inspect its environment variables:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of .NET Aspire dashboard environment variables" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of .NET Aspire dashboard environment variables" src="https://storage2.timheuer.com/aspireenv1.png" width="1024" height="671" /&gt;&lt;/p&gt;

&lt;p&gt;But what if I have a configuration value/variable that I need to set that isn’t coming from a component? I want that to be part of the application model so that the orchestrator puts things in the right places, but also so deployment tooling is aware of my whole configuration needs. No problem, here’s a quick tip if you haven’t discovered it yet!&lt;/p&gt;

&lt;p&gt;I want a config value in my app named MY_ENV_CONFIG_VAR…a very important variable. It is a value my API needs, as you can see in this super important endpoint:&lt;/p&gt;

&lt;pre class="brush: csharp; toolbar: false;"&gt;app.MapGet(&amp;quot;/somerandomconfigvar&amp;quot;, () =&amp;gt;
{
    var config = builder.Configuration.GetValue&amp;lt;string&amp;gt;(&amp;quot;MY_ENV_CONFIG_VAR&amp;quot;);
    return config;
});
&lt;/pre&gt;
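As a side note, the handler doesn’t have to capture the `builder` variable; ASP.NET Core minimal APIs can inject `IConfiguration` into the handler for you. A sketch of an equivalent endpoint:

```csharp
// Equivalent endpoint using parameter injection: minimal APIs resolve
// IConfiguration from the DI container, so no captured builder is needed.
// The indexer read is equivalent to GetValue for a string value.
app.MapGet("/somerandomconfigvar", (IConfiguration config) =>
    config["MY_ENV_CONFIG_VAR"]);
```

Either style reads the same configuration source, so the environment-variable trick below works identically for both.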

&lt;p&gt;How can I get this into my Aspire environment so the app model is aware, deployment manifests are aware, etc.? Easy. In the AppHost, change your AddProject line to add a WithEnvironment() call specifying the variable/value to set. Like this:&lt;/p&gt;

&lt;pre class="brush: csharp; toolbar: false; highlight: [4];"&gt;var builder = DistributedApplication.CreateBuilder(args);

builder.AddProject&amp;lt;Projects.WebApplication1&amp;gt;(&amp;quot;webapplication1&amp;quot;)
    .WithEnvironment(&amp;quot;MY_ENV_CONFIG_VAR&amp;quot;, &amp;quot;Hello world!&amp;quot;);

builder.Build().Run();
&lt;/pre&gt;
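If the value isn’t a simple literal, Aspire also has a callback-based WithEnvironment overload so the value can be computed when the app model is built. This is a sketch based on the preview-era API shape (the callback context and its EnvironmentVariables dictionary may change between previews), shown here with the name/path AddProject overload:

```csharp
// Sketch: callback overload of WithEnvironment, assuming the preview-era
// EnvironmentCallbackContext API. The value is computed at startup instead
// of being hardcoded into the app model.
var builder = DistributedApplication.CreateBuilder(args);

builder.AddProject("webapplication1", "../WebApplication1/WebApplication1.csproj")
    .WithEnvironment(context =>
    {
        // EnvironmentVariables holds the variables the orchestrator will set
        // for this resource when it launches.
        context.EnvironmentVariables["MY_ENV_CONFIG_VAR"] = Environment.MachineName;
    });

builder.Build().Run();
```

The simple string/string overload in the post is the right default; the callback is only worth reaching for when the value depends on runtime state.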

&lt;p&gt;Now when I launch the AppHost, the orchestrator runs all my services and adds the variable to the environment for that app:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of .NET Aspire dashboard environment variables" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of .NET Aspire dashboard environment variables" src="https://storage2.timheuer.com/aspireenv2.png" width="1024" height="643" /&gt;&lt;/p&gt;

&lt;p&gt;And when I produce a deployment manifest, that information is stamped as well for deployment tools to reason about and set in their own configuration mechanism. &lt;/p&gt;

&lt;pre class="brush: json; toolbar: false; highlight: [9];"&gt;{
  &amp;quot;resources&amp;quot;: {
    &amp;quot;webapplication1&amp;quot;: {
      &amp;quot;type&amp;quot;: &amp;quot;project.v0&amp;quot;,
      &amp;quot;path&amp;quot;: &amp;quot;..\\WebApplication1\\WebApplication1.csproj&amp;quot;,
      &amp;quot;env&amp;quot;: {
        &amp;quot;OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EXCEPTION_LOG_ATTRIBUTES&amp;quot;: &amp;quot;true&amp;quot;,
        &amp;quot;OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EVENT_LOG_ATTRIBUTES&amp;quot;: &amp;quot;true&amp;quot;,
        &amp;quot;MY_ENV_CONFIG_VAR&amp;quot;: &amp;quot;Hello world!&amp;quot;
      },
      &amp;quot;bindings&amp;quot;: {
        &amp;quot;http&amp;quot;: {
          &amp;quot;scheme&amp;quot;: &amp;quot;http&amp;quot;,
          &amp;quot;protocol&amp;quot;: &amp;quot;tcp&amp;quot;,
          &amp;quot;transport&amp;quot;: &amp;quot;http&amp;quot;
        },
        &amp;quot;https&amp;quot;: {
          &amp;quot;scheme&amp;quot;: &amp;quot;https&amp;quot;,
          &amp;quot;protocol&amp;quot;: &amp;quot;tcp&amp;quot;,
          &amp;quot;transport&amp;quot;: &amp;quot;http&amp;quot;
        }
      }
    }
  }
}
&lt;/pre&gt;

&lt;p&gt;Pretty cool, eh? Anyhow, just a small tip to help you on your .NET Aspire journey.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/anatomy-of-a-dotnet-devcontainer/</id>
    <title>Anatomy of a .NET devcontainer</title>
    <updated>2023-10-25T17:47:10Z</updated>
    <published>2023-10-25T17:40:56Z</published>
    <link href="https://www.timheuer.com/blog/anatomy-of-a-dotnet-devcontainer/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="codespaces" />
    <category term="github" />
    <content type="html">&lt;div&gt;Recently, the .NET team released a starter Codespaces definition for .NET.&amp;#160; There is a great narrated overview of this and the benefit, uses, etc. by the great &lt;a href="https://twitter.com/JamesMontemagno"&gt;James Montemagno&lt;/a&gt; you can watch here: &lt;/div&gt;  &lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=bJ8vfsqr4h0"&gt;Unbelievable Instant .NET Development Setup&lt;/a&gt;. It is available when you visit &lt;a href="https://github.com/codespaces"&gt;https://github.com/codespaces&lt;/a&gt; and you can start using it immediately.&lt;/p&gt;  &lt;div&gt;&lt;/div&gt;  &lt;p&gt;&lt;img title="Screenshot of Codespaces Quickstarts" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of Codespaces Quickstarts" src="https://storage2.timheuer.com/cshomepage.png" width="1026" height="661" /&gt;&lt;/p&gt;  &lt;div&gt;&lt;/div&gt;  &lt;p&gt;Codespaces are built off of the devcontainer mechanism, which allows you to define the environment in a bunch of different ways using container images or just a devcontainer image.&amp;#160; I won’t go through all the options you can do with devcontainers, but will share the anatomy of this and what I like about it.&lt;/p&gt;  &lt;div&gt;&lt;/div&gt;  &lt;blockquote&gt;   &lt;p&gt;NOTE: If you don’t know what Development Containers are, you can read about them here &lt;a title="https://containers.dev/" href="https://containers.dev/"&gt;https://containers.dev/&lt;/a&gt;&lt;/p&gt; &lt;/blockquote&gt;  &lt;div&gt;&lt;/div&gt;  &lt;p&gt;Throughout this post I’ll be referring to snippets of the definition but you can find the FULL definition here: &lt;a href="https://github.com/github/dotnet-codespaces/blob/main/.devcontainer/devcontainer.json"&gt;github/dotnet-codespaces&lt;/a&gt;.&lt;/p&gt;  &lt;div&gt;&lt;/div&gt;  &lt;h2&gt;Base Image&lt;/h2&gt;  
&lt;div&gt;&lt;/div&gt;  &lt;p&gt;Let’s start with the base image. This is the starting point of the devcontainer: the OS, the built-in pre-configuration, and so on. You can use a Dockerfile definition or a pre-defined container image. If you have everything bundled nicely in an existing container image in a registry, start there. It just so happens .NET does this and has nice images with the SDK already in them, so let’s use that!&lt;/p&gt;  &lt;div&gt;&lt;/div&gt;  &lt;div&gt;   &lt;pre class="brush: json; toolbar: false; highlight: [3];"&gt;{
    &amp;quot;name&amp;quot;: &amp;quot;.NET in Codespaces&amp;quot;,
    &amp;quot;image&amp;quot;: &amp;quot;mcr.microsoft.com/dotnet/sdk:8.0&amp;quot;,
    ...
}
  &lt;/pre&gt;
&lt;/div&gt;

&lt;div&gt;&lt;/div&gt;

&lt;p&gt;This uses our own container images defined here: &lt;a title="https://hub.docker.com/_/microsoft-dotnet-sdk/" href="https://hub.docker.com/_/microsoft-dotnet-sdk/"&gt;https://hub.docker.com/_/microsoft-dotnet-sdk/&lt;/a&gt;. Again, this gives us a great, simple starting point.&lt;/p&gt;

&lt;div&gt;&lt;/div&gt;

&lt;h2&gt;Features&lt;/h2&gt;

&lt;div&gt;&lt;/div&gt;

&lt;p&gt;In the devcontainer world you can define ‘features’, which are like little extensions someone else has built that are easy to add/inject into the base image. You could install tools yourself through pre/post scripts, but if someone has already created a ‘feature’, it is super easy to delegate that setup to the feature owner. For this image we’ve added a few features:&lt;/p&gt;

&lt;pre class="brush: json; toolbar: false; highlight: [4];"&gt;{
    &amp;quot;name&amp;quot;: &amp;quot;.NET in Codespaces&amp;quot;,
    &amp;quot;image&amp;quot;: &amp;quot;mcr.microsoft.com/dotnet/sdk:8.0&amp;quot;,
    &amp;quot;features&amp;quot;: {
        &amp;quot;ghcr.io/devcontainers/features/docker-in-docker:2&amp;quot;: {},
        &amp;quot;ghcr.io/devcontainers/features/github-cli:1&amp;quot;: {
            &amp;quot;version&amp;quot;: &amp;quot;2&amp;quot;
        },
        &amp;quot;ghcr.io/devcontainers/features/powershell:1&amp;quot;: {
            &amp;quot;version&amp;quot;: &amp;quot;latest&amp;quot;
        },
        &amp;quot;ghcr.io/azure/azure-dev/azd:0&amp;quot;: {
            &amp;quot;version&amp;quot;: &amp;quot;latest&amp;quot;
        },
        &amp;quot;ghcr.io/devcontainers/features/common-utils:2&amp;quot;: {},
        &amp;quot;ghcr.io/devcontainers/features/dotnet:2&amp;quot;: {
            &amp;quot;version&amp;quot;: &amp;quot;none&amp;quot;,
            &amp;quot;dotnetRuntimeVersions&amp;quot;: &amp;quot;7.0&amp;quot;,
            &amp;quot;aspNetCoreRuntimeVersions&amp;quot;: &amp;quot;7.0&amp;quot;
        }
    },
    ...
}
&lt;/pre&gt;

&lt;p&gt;So here we see that the following are added:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Docker in docker – helps us use other docker-based features&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://cli.github.com"&gt;GitHub CLI&lt;/a&gt; – why not, you’re using GitHub so this adds some quick CLI-based commands&lt;/li&gt;

  &lt;li&gt;PowerShell – an alternate shell that .NET developers love&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/overview"&gt;AZD&lt;/a&gt; – the Azure Developer CLI which helps with quick configuration and deployment to Azure&lt;/li&gt;

  &lt;li&gt;Common Utilities – a set of common utilities and shell configuration; check out the feature’s definition for more info&lt;/li&gt;

  &lt;li&gt;.NET feature – even though the base image already includes the .NET 8 SDK, there may be additional runtimes we need, so we can use this to bring in more. Here it brings in the .NET 7 runtime, which one of our extension customizations requires.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These features let you append additional functionality to the base image whenever this devcontainer is used. &lt;/p&gt;

&lt;div&gt;&lt;/div&gt;

&lt;h2&gt;Extras&lt;/h2&gt;

&lt;div&gt;&lt;/div&gt;

&lt;p&gt;You can configure more extras through a few more properties, like customizations (editor-specific settings and extensions) and pre/post commands.&lt;/p&gt;


&lt;div&gt;&lt;/div&gt;

&lt;h3&gt;Customizations&lt;/h3&gt;

&lt;p&gt;The most commonly used configuration in this section is to bring in extensions for VS Code. Since Codespaces uses VS Code by default, this is helpful, and it also carries forward if you use VS Code locally with devcontainers (which you can do!). &lt;/p&gt;

&lt;pre class="brush: json; toolbar: false; highlight: [5,6];"&gt;{
    &amp;quot;name&amp;quot;: &amp;quot;.NET in Codespaces&amp;quot;,
    ...
    &amp;quot;customizations&amp;quot;: {
        &amp;quot;vscode&amp;quot;: {
            &amp;quot;extensions&amp;quot;: [
                &amp;quot;ms-vscode.vscode-node-azure-pack&amp;quot;,
                &amp;quot;github.vscode-github-actions&amp;quot;,
                &amp;quot;GitHub.copilot&amp;quot;,
                &amp;quot;ms-dotnettools.vscode-dotnet-runtime&amp;quot;,
                &amp;quot;ms-dotnettools.csdevkit&amp;quot;,
                &amp;quot;ms-dotnettools.csharp&amp;quot;
            ]
        }
    },
    ...
}
&lt;/pre&gt;

&lt;p&gt;In this snippet we see that several VS Code extensions will be installed for us to get started quickly:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Azure Extensions – a set of Azure extensions to help you quickly work with Azure when ready&lt;/li&gt;

  &lt;li&gt;GitHub Actions – view your repo’s CI/CD activity&lt;/li&gt;

  &lt;li&gt;Copilot – AI-assisted code development&lt;/li&gt;

  &lt;li&gt;.NET Runtime – this helps with any runtime acquisition needed by these or other extensions&lt;/li&gt;

  &lt;li&gt;C#/C# Dev Kit – extensions for C# development to make you more productive in the editor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s a great way to configure your dev environment to be ready to start when you use devcontainers without spending time hunting down extensions again.&lt;/p&gt;


&lt;h3&gt;Commands&lt;/h3&gt;

&lt;p&gt;Additionally, you can run post-create commands to warm up the environment. An example:&lt;/p&gt;

&lt;pre class="brush: json; toolbar: false; highlight: [8];"&gt;{
    &amp;quot;name&amp;quot;: &amp;quot;.NET in Codespaces&amp;quot;,
    ...
    &amp;quot;forwardPorts&amp;quot;: [
        8080,
        8081
    ],
    &amp;quot;postCreateCommand&amp;quot;: &amp;quot;cd ./SampleApp &amp;amp;&amp;amp; dotnet restore&amp;quot;,
    ...
}
&lt;/pre&gt;

&lt;p&gt;Here forwardPorts makes the sample app’s ports (8080 and 8081) reachable, and postCreateCommand gets the sample source ready to use immediately by restoring its dependencies.&lt;/p&gt;
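&lt;p&gt;As an aside, postCreateCommand isn’t limited to a single string: the devcontainer spec also accepts an array form, or an object form whose named commands run in parallel. A sketch of the object form (the command names here are my own, not from the sample repo):&lt;/p&gt;

```json
{
    "postCreateCommand": {
        "restore": "cd ./SampleApp && dotnet restore",
        "tools": "dotnet tool restore"
    }
}
```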

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;div&gt;&lt;/div&gt;

&lt;p&gt;I am loving devcontainers. Every time I work in a new repository, I now look first for a devcontainer to help me get started quickly. For example, I recently explored a Go app/repo without any of the Go dev tools on my local machine, and it didn’t matter. The presence of a devcontainer let me get started with the repo immediately, with the dependencies and tools in place. It’s portable too: I can move from machine to machine with Codespaces and get the same setup every time thanks to devcontainers!&lt;/p&gt;

&lt;p&gt;Hope this little insight helps.&amp;#160; Check out devcontainers and if you are a repo owner, please add one to your Open Source project if possible!&lt;/p&gt;

&lt;div&gt;&lt;/div&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/github-actions-extension-for-visual-studio/</id>
    <title>Monitor your GitHub Actions in Visual Studio</title>
    <updated>2023-08-07T22:59:59Z</updated>
    <published>2023-08-07T22:59:59Z</published>
    <link href="https://www.timheuer.com/blog/github-actions-extension-for-visual-studio/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="github" />
    <category term="devops" />
    <category term="visual studio" />
    <category term="vs" />
    <content type="html">&lt;p&gt;I LOVE GitHub Actions! I’ve written a lot about how I’ve ‘seen the light’ with regard to ensuring CI/CD is a part of any/every project from the start. That said, I’m also a HUGE Visual Studio fan/user.&amp;#160; I like having as much as possible at my fingertips in my IDE, and for most basic things I don’t want to have to leave VS.&amp;#160; Because of this I’ve created the &lt;a href="https://marketplace.visualstudio.com/items?itemName=TimHeuer.GitHubActionsVS"&gt;&lt;strong&gt;GitHub Actions for Visual Studio&lt;/strong&gt;&lt;/a&gt; extension, which installs right into Visual Studio 2022 and gives you immediate insight into your GitHub Actions environment.&amp;#160; &lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of light/dark mode of extension" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of light/dark mode of extension" src="https://storage2.timheuer.com/ghforvs1.png" width="1337" height="890" /&gt;&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;Like nearly every one of my projects, it started for completely selfish reasons and is tailored to my needs. I spend time doing this in some reserved learning time and the occasional time where my family isn’t around and/or it’s raining and I can’t be on my bike LOL. That said, it may not meet your needs, and that’s okay.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;With that said, let me introduce you to this extension…&lt;/p&gt;  &lt;h2&gt;How to launch it&lt;/h2&gt;  &lt;p&gt;First you’ll need to have a project/solution open that is attached to GitHub.com and for which you have the necessary permissions to view this information. The extension looks for GitHub credentials to use by interacting with your Windows Credential Manager. 
From VS Solution Explorer, right-click on a project or solution and navigate to the “GitHub Actions” menu item.&amp;#160; This will open a new tool window and start querying the repository and actions for more information.&amp;#160; There is a progress indicator that will show when activity is happening.&amp;#160; Once complete you’ll have a new tool window you can dock anywhere and it will show a few things for you, let’s take a look at what those are.&lt;/p&gt;  &lt;h3&gt;Categories&lt;/h3&gt;  &lt;p&gt;In the tool window there are 4 primary areas to be aware of:&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of the tool window annotated with 4 numbers" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of the tool window annotated with 4 numbers" src="https://storage2.timheuer.com/ghforvs2.png" width="536" height="837" /&gt;&lt;/p&gt;    &lt;p&gt;First in the area marked #1 is a small toolbar.&amp;#160; The toolbar has two buttons, one to refresh the data should you need to manually do so for any reason. The second is a shortcut to the repository’s Actions section on GitHub.com.&lt;/p&gt;  &lt;p&gt;Next the #2 area is a tree view of the current branch you have open and workflow runs that targeted that.&amp;#160; It will first show executed (or in-progress) workflow runs, and then you can expand it to see the jobs and steps of each job.&amp;#160; At the ‘leaf’ node of the step you can double-click (or right-click for a menu) and it will open the log for that step on GitHub.com directly.&lt;/p&gt;  &lt;p&gt;The #3 area is a list of the Workflows in your repository by named definition. This is helpful just to see a list of them, but also you can right-click on them and “run” a workflow which triggers a dispatch call to that workflow to execute!&lt;/p&gt;  &lt;p&gt;Finally the #4 area is your Environments and Secrets. Right now Environments just shows you a list of any you have, but not much else. 
Secrets are limited to Repository Secrets only right now and show you a list and when the secret was last updated.&amp;#160; You can right-click on the Secrets node to add another or double-click on an existing one to edit.&amp;#160; This will launch a quick modal dialog window to capture the secret name/value and upon saving, write to your repository and refresh this list.&lt;/p&gt;  &lt;h3&gt;Options&lt;/h3&gt;  &lt;p&gt;There are a small set of options you can configure for the extension:&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of extension options" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of extension options" src="https://storage2.timheuer.com/ghforvs3.png" width="890" height="387" /&gt;&lt;/p&gt;  &lt;p&gt;The following can be set:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;Max Runs (Default=10): this is the maximum number of Workflow Runs to retrieve&lt;/li&gt;    &lt;li&gt;Refresh Active Jobs (Default=False): if True, this will refresh the Workflow Runs list when any job is known to be in-progress&lt;/li&gt;    &lt;li&gt;Refresh Interval (Default=5): This is a number in seconds you want to poll for an update on in-progress jobs.&lt;/li&gt; &lt;/ul&gt;  &lt;h2&gt;Managing Workflows&lt;/h2&gt;  &lt;p&gt;Aside from viewing the list there are a few other things you can do using the extension:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;Hover over the Run to see details of the final conclusion state, how it was triggered, the total time for the run, and what GitHub user triggered the run&lt;/li&gt;    &lt;li&gt;If a run is in-progress, right-click on the run and you can choose to Cancel, which will attempt to send a cancellation to stop the run at whatever step it is in&lt;/li&gt;    &lt;li&gt;On the steps nodes you can double-click or right-click and choose to view logs.&amp;#160; This will launch your default browser to the location of the step log for that item&lt;/li&gt;    &lt;li&gt;From the Workflows 
list, you can right-click on the name of a Workflow and choose “Run Workflow”, which will attempt to signal the start of a run for that Workflow&lt;/li&gt; &lt;/ul&gt;  &lt;h2&gt;Managing Secrets&lt;/h2&gt;  &lt;p&gt;Secrets right now are limited to Repository Secrets only.&amp;#160; This is due to a limitation of the Octokit library this extension uses.&amp;#160; If you are using Environment Secrets you will not be able to manage them from here.&amp;#160; &lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of modal dialog for secret editing" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of modal dialog for secret editing" src="https://storage2.timheuer.com/ghforvs4.png" width="734" height="323" /&gt;&lt;/p&gt;  &lt;p&gt;Otherwise:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;From the Repository Secrets node you can right-click and Add Secret, which will launch a modal dialog to supply a name/value for a new secret. Clicking save will persist this to your repo and refresh the list.&lt;/li&gt;    &lt;li&gt;From an existing secret you can double-click it, or right-click and choose ‘Edit’, which will launch the same modal dialog but only enables you to edit the value.&lt;/li&gt;    &lt;li&gt;To delete a secret, right-click and choose delete. 
This is irreversible, so be sure you want to delete!&lt;/li&gt; &lt;/ul&gt;  &lt;h2&gt;Get Started and Log Issues&lt;/h2&gt;  &lt;p&gt;To get started, you can simply navigate to the &lt;a href="https://marketplace.visualstudio.com/items?itemName=TimHeuer.GitHubActionsVS"&gt;link on the marketplace&lt;/a&gt; and click install, or use the Extension Manager in Visual Studio, search for “GitHub Actions”, and install it.&amp;#160; If you find any issues, the source is available on my GitHub at &lt;a href="https://github.com/timheuer/GitHubActionsVS"&gt;&lt;strong&gt;timheuer/GitHubActionsVS&lt;/strong&gt;&lt;/a&gt;, and I would also appreciate it if you logged an Issue when you find it not working for you.&amp;#160; Thanks for trying it out, and I hope it is as helpful for you as it is for me.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/resx-editor-for-visual-studio-code/</id>
    <title>Creating a VS Code editor extension</title>
    <updated>2023-06-30T00:52:27Z</updated>
    <published>2023-06-30T00:52:27Z</published>
    <link href="https://www.timheuer.com/blog/resx-editor-for-visual-studio-code/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="vscode" />
    <category term="dotnet" />
    <category term="tools" />
    <category term="developer" />
    <content type="html">&lt;p&gt;As I’ve been &lt;a href="https://aka.ms/vs/csdevkit-launch"&gt;working more with Visual Studio Code&lt;/a&gt; lately, I wanted to explore more about the developer experience and some of the more challenging areas around customization.&amp;#160; VS Code has a great extensibility model and a TON of UI points for you to integrate.&amp;#160; In the C# Dev Kit we’ve not yet had the need to introduce any custom UI in any views or other experiences that are ‘pixels’ on the screen for the user…pretty awesome extensibility. One area that doesn’t have default UI is the non-text editors. Something that you want to do fully custom in the editor space. For me, I wanted to see what this experience was so I went out to create a small custom editor. I chose to create a ResX editor for the simplest case as ResX is a known-schema based structure that could easily be serialized/de-serialized as needed.&lt;/p&gt;  &lt;p&gt;NOTE: This is not an original idea. There are existing extensions that do ResX editing in different ways. With nearly every project that I set out with, it starts as a learning/selfish reasons…and also selfish scope. Some of the existing ones had expanded features I felt unnecessary and I wanted a simple structure. They are all interesting and you should check them out. 
I’m in no way claiming to be ‘best’ or first-mover here, just sharing my learning path.&lt;/p&gt;  &lt;p&gt;With that said, I’m pleased with what I learned and the result, which is an editor that ‘fits in’ with the VS Code UX and achieves my CRUD goal of editing a ResX file:&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of ResX Editor and Viewer" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of ResX Editor and Viewer" src="https://storage2.timheuer.com/finalscreenshot.png" width="1659" height="683" /&gt;&lt;/p&gt;  &lt;p&gt;With that, here’s a bit of what I’ve learned…&lt;/p&gt;  &lt;h2&gt;Custom Editors and UI&lt;/h2&gt;  &lt;p&gt;There are a lot of warnings in the &lt;a href="https://code.visualstudio.com/api/extension-guides/custom-editors"&gt;Custom Editor API docs&lt;/a&gt; about making sure you &lt;em&gt;really&lt;/em&gt; need a custom editor, but they also point to the value custom editors can provide for previewing/WYSIWYG renderings of documents. They note that you will likely be using a webview and thus be fully responsible for your UI.&amp;#160; In the end you own the UI that you are drawing. For me, I’m not a UI designer, so I rely on others/toolkits to do a lot of heavy lifting. The examples I saw out there (and oddly enough the custom editor sample) don’t match the VS Code UX at all and I didn’t like that. I actually found it odd that the sample took such an extreme approach to the editor (cat paw drawings) rather than show a more realistic data-focused scenario on a known file format.&lt;/p&gt;  &lt;p&gt;Luckily the team provides the &lt;strong&gt;&lt;a href="https://github.com/microsoft/vscode-webview-ui-toolkit"&gt;Webview UI Toolkit for Visual Studio Code&lt;/a&gt;&lt;/strong&gt;, a set of components that match the UX of VS Code and adhere to its theming and interaction models. 
It’s excellent and &lt;strong&gt;anyone doing custom UI in VS Code extensions should start using this immediately&lt;/strong&gt;. Your extension will feel way more professional and at home in the standard VS Code UX.&amp;#160; My needs were fairly simple: I wanted to show the ResX (which is an XML format) in a tabular format. The toolkit has a data-grid that was perfect for the job…mostly. But let’s start with the structure.&lt;/p&gt;  &lt;p&gt;Most of the editor is in a provider (per the docs), which is where you implement a CustomTextEditorProvider that provides register and resolveCustomTextEditor functions. Register does what you think: it registers your editor into the ecosystem, using the metadata from package.json about which file types/languages will trigger your editor.&amp;#160; Resolve is where you start providing your content. It provides you with a Webview panel where you put your initial content. Mine was a simple grid:&lt;/p&gt;  &lt;pre class="brush: ts;"&gt;private _getWebviewContent(webview: vscode.Webview) {
  const webviewUri = webview.asWebviewUri(vscode.Uri.joinPath(this.context.extensionUri, 'out', 'webview.js'));
  const nonce = getNonce();
  const codiconsUri = webview.asWebviewUri(vscode.Uri.joinPath(this.context.extensionUri, 'media', 'codicon.css'));
  const codiconsFont = webview.asWebviewUri(vscode.Uri.joinPath(this.context.extensionUri, 'media', 'codicon.ttf'));

  return /*html*/ `
            &amp;lt;!DOCTYPE html&amp;gt;
            &amp;lt;html lang=&amp;quot;en&amp;quot;&amp;gt;
              &amp;lt;head&amp;gt;
                &amp;lt;meta charset=&amp;quot;UTF-8&amp;quot;&amp;gt;
                &amp;lt;meta name=&amp;quot;viewport&amp;quot; content=&amp;quot;width=device-width, initial-scale=1.0&amp;quot;&amp;gt;
                &amp;lt;meta
                  http-equiv=&amp;quot;Content-Security-Policy&amp;quot;
                  content=&amp;quot;default-src 'none'; img-src ${webview.cspSource} https:; script-src 'nonce-${nonce}'; style-src ${webview.cspSource} 'nonce-${nonce}'; style-src-elem ${webview.cspSource} 'unsafe-inline'; font-src ${webview.cspSource};&amp;quot;
                /&amp;gt;
                &amp;lt;link href=&amp;quot;${codiconsUri}&amp;quot; rel=&amp;quot;stylesheet&amp;quot; nonce=&amp;quot;${nonce}&amp;quot;&amp;gt;
              &amp;lt;/head&amp;gt;
              &amp;lt;body&amp;gt;
                &amp;lt;vscode-data-grid id=&amp;quot;resource-table&amp;quot; generate-header=&amp;quot;sticky&amp;quot; aria-label=&amp;quot;Sticky Header&amp;quot;&amp;gt;&amp;lt;/vscode-data-grid&amp;gt;
                &amp;lt;vscode-button id=&amp;quot;add-resource-button&amp;quot;&amp;gt;
                  Add New Resource
                  &amp;lt;span slot=&amp;quot;start&amp;quot; class=&amp;quot;codicon codicon-add&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;
                &amp;lt;/vscode-button&amp;gt;
                &amp;lt;script type=&amp;quot;module&amp;quot; nonce=&amp;quot;${nonce}&amp;quot; src=&amp;quot;${webviewUri}&amp;quot;&amp;gt;&amp;lt;/script&amp;gt;
              &amp;lt;/body&amp;gt;
            &amp;lt;/html&amp;gt;
          `;
}
&lt;/pre&gt;

&lt;p&gt;This serves as the HTML ‘shell’ and then the actual interaction is via the webview.js you see being included. Some special things here are just how it includes the correct links to the js/css files I need, but also notice the Content-Security-Policy. That was interesting to get right initially, but it’s a solid meta tag to include (otherwise the console will spit out warnings to anyone looking).&amp;#160; The webview.js is basically any JavaScript I needed to interact with my editor. Specifically this uses the registration of the Webview UI Toolkit and converts the resx to json and back (using the &lt;a href="https://www.npmjs.com/package/resx"&gt;npm library resx&lt;/a&gt;). Here’s a snippet of that code in the custom editor provider that basically writes the JSON back into the document as resx when it changes:&lt;/p&gt;

&lt;pre class="brush: ts;"&gt;private async updateTextDocument(document: vscode.TextDocument, json: any) {

  const edit = new vscode.WorkspaceEdit();

  edit.replace(
    document.uri,
    new vscode.Range(0, 0, document.lineCount, 0),
    await resx.js2resx(JSON.parse(json)));
  return vscode.workspace.applyEdit(edit);
}
&lt;/pre&gt;
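&lt;p&gt;For reference, the package.json metadata that ties file types to the editor (what the register step reads) looks roughly like this. This is a sketch of the customEditors contribution point, not the extension’s exact manifest:&lt;/p&gt;

```json
{
    "contributes": {
        "customEditors": [
            {
                "viewType": "resx-editor.editor",
                "displayName": "ResX Editor",
                "selector": [
                    { "filenamePattern": "*.resx" },
                    { "filenamePattern": "*.resw" }
                ]
            }
        ]
    }
}
```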

&lt;p&gt;So that gets the essence of the ‘bones’ of the editor that I needed. Once I had the data, a function in webview.js can ‘bind’ it to the vscode-data-grid, supplying the column names + data and quickly setting the data rows (lines 20,21):&lt;/p&gt;

&lt;pre class="brush: ts; highlight: [20,21];"&gt;function updateContent(/** @type {string} **/ text) {
    if (text) {

        var resxValues = [];

        let json;
        try {
            json = JSON.parse(text);
        }
        catch
        {
            console.log(&amp;quot;error parsing json&amp;quot;);
            return;
        }

        for (const node in json || []) {
            if (node) {
                let res = json[node];
                // eslint-disable-next-line @typescript-eslint/naming-convention
                var item = { Key: node, &amp;quot;Value&amp;quot;: res.value || '', &amp;quot;Comment&amp;quot;: res.comment || '' };
                resxValues.push(item);
            }
            else {
                console.log('node is undefined or null');
            }
        }

        table.rowsData = resxValues;
    }
    else {
        console.log(&amp;quot;text is null&amp;quot;);
        return;
    }
}
&lt;/pre&gt;

&lt;p&gt;And the vscode-data-grid generates the rows, sticky header, handles the scrolling, theming, responding to environment, etc. for me! &lt;/p&gt;

&lt;p&gt;&lt;img title="Grid view" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Grid view" src="https://storage2.timheuer.com/data-grid-only.png" width="1309" height="287" /&gt;&lt;/p&gt;

&lt;p&gt;Now I want to edit…&lt;/p&gt;

&lt;h2&gt;Editing in the vscode-data-grid&lt;/h2&gt;

&lt;p&gt;The default data-grid unfortunately does NOT provide editing capabilities, and I really didn’t want to invent something here and end up not getting the value of the Webview UI Toolkit. Luckily, others in the universe were tackling the same problem.&amp;#160; Thankfully &lt;a href="https://twitter.com/notesofbarry"&gt;Liam Barry&lt;/a&gt; was trying to solve it at the same time and helped contribute what I needed.&amp;#160; It works and provides a simple editing experience:&lt;/p&gt;

&lt;p&gt;&lt;img title="Editing a row" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Editing a row" src="https://storage2.timheuer.com/editing-resx.gif" width="1318" height="318" /&gt;&lt;/p&gt;

&lt;p&gt;Now that I can edit can I delete?&lt;/p&gt;

&lt;h2&gt;Deleting items in the grid&lt;/h2&gt;

&lt;p&gt;Maybe you made an error and you want to delete.&amp;#160; I decided to expose a command that can be invoked from the command palette but also from a context menu. I specifically chose not to put an “X” or delete button per-row…it didn’t feel like the right UX.&amp;#160; The command basically gets the element and then the _rowData from the vscode-data-grid element (yay, the context is set for me!).&amp;#160; Then I just remove it from the items array and update the doc.&amp;#160; The code is okay, but the experience is simple, exposed as a right-click context menu:&lt;/p&gt;

&lt;p&gt;&lt;img title="Deleting an item" style="margin-right: auto; margin-left: auto; float: none; display: block;" alt="Deleting an item" src="https://storage2.timheuer.com/deleting-resx.gif" width="1154" height="372" /&gt;&lt;/p&gt;

&lt;p&gt;This is exposed by enabling the command on the webview context menu via package.json – notice line 2 is where it is exposed on the context menu, along with the conditions under which it appears (a specific config value, and ensuring that my editor is the active one):&lt;/p&gt;

&lt;pre class="brush: json; highlight: [2];"&gt;&amp;quot;menus&amp;quot;: {
  &amp;quot;webview/context&amp;quot;: [
    {
      &amp;quot;command&amp;quot;: &amp;quot;resx-editor.deleteResource&amp;quot;,
      &amp;quot;when&amp;quot;: &amp;quot;config.resx-editor.experimentalDelete == true &amp;amp;&amp;amp; activeCustomEditorId == 'resx-editor.editor'&amp;quot;
    }
  ]
...
}
&lt;/pre&gt;
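&lt;p&gt;The core of that delete handler is simply removing the matching row from the items array before the document is written back. Sketched here as a plain function with illustrative names (not the extension’s actual code):&lt;/p&gt;

```typescript
// Illustrative sketch (names are mine): drop the row targeted by the
// context menu, returning a new array to hand back to the data grid.
type ResxRow = { Key: string; Value: string; Comment: string };

function deleteResource(rows: ResxRow[], keyToDelete: string): ResxRow[] {
  return rows.filter((row) => row.Key !== keyToDelete);
}
```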

&lt;p&gt;Deleting done, now add a new one!&lt;/p&gt;

&lt;h2&gt;Adding a new item&lt;/h2&gt;

&lt;p&gt;Obviously you want to add one! So I want to capture input…but don’t want to do a ‘form’ as that doesn’t feel like the VS Code way. I chose to use a multi-input method using the command area to capture the flow. This can be invoked from the button you see but also from the command palette command itself.&lt;/p&gt;

&lt;p&gt;&lt;img title="Add new resource" style="margin-right: auto; margin-left: auto; float: none; display: block;" alt="Add new resource" src="https://storage2.timheuer.com/addnew-resx.gif" width="1068" height="492" /&gt;&lt;/p&gt;

&lt;p&gt;Simple enough, it captures the inputs and adds a new item to the data array and the document is updated again.&lt;/p&gt;
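&lt;p&gt;Once both inputs are captured, appending the new item is a small operation on the same rows array. A testable sketch of that append step, with illustrative names and a duplicate-key guard of my own (not the extension’s actual code):&lt;/p&gt;

```typescript
// Illustrative sketch (names and the duplicate check are mine): append
// a newly captured resource, keeping keys unique as a resx requires.
type ResxRow = { Key: string; Value: string; Comment: string };

function addResource(rows: ResxRow[], key: string, value: string, comment = ''): ResxRow[] {
  if (rows.some((row) => row.Key === key)) {
    throw new Error('duplicate resource key: ' + key);
  }
  return [...rows, { Key: key, Value: value, Comment: comment }];
}
```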

&lt;h2&gt;Using the default editor&lt;/h2&gt;

&lt;p&gt;While custom editors are great, there may be times you want to use the default editor. This can be done by doing “open with” on the file from Explorer view, but I wanted to provide a quicker method from my custom editor. I added a command that re-opens the active document in the text editor:&lt;/p&gt;

&lt;pre class="brush: ts;"&gt;let openInTextEditorCommand = vscode.commands.registerCommand(AppConstants.openInTextEditorCommand, () =&amp;gt; {
  vscode.commands.executeCommand('workbench.action.reopenTextEditor', document?.uri);
});
&lt;/pre&gt;

&lt;p&gt;and expose that command in the editor title context menu (package.json entry):&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;editor/title&amp;quot;: [
{
  &amp;quot;command&amp;quot;: &amp;quot;resx-editor.openInTextEditor&amp;quot;,
  &amp;quot;when&amp;quot;: &amp;quot;activeCustomEditorId == 'resx-editor.editor' &amp;amp;&amp;amp; activeEditorIsNotPreview == false&amp;quot;,
  &amp;quot;group&amp;quot;: &amp;quot;navigation@1&amp;quot;
}
...
]
&lt;/pre&gt;

&lt;p&gt;Here’s the experience:&lt;/p&gt;

&lt;p&gt;&lt;img title="Toggle resx raw view" style="margin-right: auto; margin-left: auto; float: none; display: block;" alt="Toggle resx raw view" src="https://storage2.timheuer.com/toggle-resx.gif" width="1250" height="524" /&gt;&lt;/p&gt;

&lt;p&gt;Helpful way to toggle back to the ‘raw’ view.&lt;/p&gt;

&lt;h2&gt;Using the custom editor as a previewer&lt;/h2&gt;

&lt;p&gt;But what if you are in the raw view and want to see the formatted one? This may be common for standard formats where users do NOT have your editor set as default. You can expose a preview mode for yours and similarly, expose a button on the editor to preview it. This is what I’ve done here in package.json:&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;editor/title&amp;quot;: [
...
{
  &amp;quot;command&amp;quot;: &amp;quot;resx-editor.openPreview&amp;quot;,
  &amp;quot;when&amp;quot;: &amp;quot;(resourceExtname == '.resx' || resourceExtname == '.resw') &amp;amp;&amp;amp; activeCustomEditorId != 'resx-editor.editor'&amp;quot;,
  &amp;quot;group&amp;quot;: &amp;quot;navigation@1&amp;quot;
}
...
]
&lt;/pre&gt;

&lt;p&gt;And the command that is used to open a document in my specific editor:&lt;/p&gt;



&lt;pre class="brush: ts;"&gt;let openInResxEditor = vscode.commands.registerCommand(AppConstants.openInResxEditorCommand, () =&amp;gt; {

    const editor = vscode.window.activeTextEditor;

    vscode.commands.executeCommand('vscode.openWith',
        editor?.document?.uri,
        AppConstants.viewTypeId,
        {
            preview: false,
            viewColumn: vscode.ViewColumn.Active
        });
});
&lt;/pre&gt;



&lt;p&gt;Now I’ve got different ways to see the raw view, the preview, or the default structured custom view.&amp;#160; &lt;/p&gt;

&lt;p&gt;&lt;img title="Preview mode" style="margin-right: auto; margin-left: auto; float: none; display: block;" alt="Preview mode" src="https://storage2.timheuer.com/preview-resx.gif" width="1344" height="652" /&gt;&lt;/p&gt;

&lt;p&gt;Nice!&lt;/p&gt;

&lt;h2&gt;Check out the codez&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier this is hardly an original idea, but I enjoyed the learning, using a standard UX, and trying to make sure it felt like it fit within the VS Code experience.&amp;#160; So go ahead and give it an install and play around. It is not perfect and comes with the ‘works on my machine’ guarantee.&lt;/p&gt;

&lt;p&gt;&lt;img title="Marketplace listing" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Marketplace listing" src="https://storage2.timheuer.com/marketplacelisting-resx.png" width="1125" height="336" /&gt;&lt;/p&gt;

&lt;p&gt;The code is out there and linked in the &lt;a href="https://marketplace.visualstudio.com/items?itemName=TimHeuer.resx-editor"&gt;Marketplace listing&lt;/a&gt; for you.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/enhance-your-vs-code-extension-listing-easily/</id>
    <title>Make your VS Code extension more helpful</title>
    <updated>2023-06-26T17:14:04Z</updated>
    <published>2023-06-26T17:14:04Z</published>
    <link href="https://www.timheuer.com/blog/enhance-your-vs-code-extension-listing-easily/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="visual studio" />
    <category term="tools" />
    <category term="developer" />
    <category term="vscode" />
    <content type="html">&lt;p&gt;Whenever I work on something new, I like to make sure I try to better understand the tech and the ecosystem around it.&amp;#160; With the launch of the &lt;strong&gt;&lt;a href="https://devblogs.microsoft.com/visualstudio/announcing-csharp-dev-kit-for-visual-studio-code/"&gt;C# Dev Kit&lt;/a&gt;&lt;/strong&gt;, I had to dive deeper into understanding some things about how VS Code extensions work, and get dirtier with TypeScript/JavaScript more than usual that my ‘day job’ required.&amp;#160; As a part of how I learn, I build.&amp;#160; So I went and built some new extensions.&amp;#160; Nearly all of my experiments are public on my repos, and all come with disclaimers that they usually are for completely selfish reasons (meaning they may not help anyone else but me – or just learning) or may not even be original ideas really.&amp;#160; I only say that because you may know a lot of this already.&lt;/p&gt;  &lt;p&gt;As a part of this journey I’ve loved the VS Code extensibility model and documentation.&amp;#160; It really is great for 99% of the use cases (the remaining 1% being esoteric uses of WebView, some of the VS Code WebView UI Toolkit, etc).&amp;#160; And one thing I’ve come to realize is the subtleties of making your VS Code extension a lot more helpful to your consumers in the information, with very little effort – just some simple entries in package.json in fact.&amp;#160; Here were some that I wasn’t totally aware of (they are in the docs) mostly because my `yo code` starting point doesn’t emit them.&amp;#160; Nearly all of these exist to help people understand your extension, discover it, or interact with YOU better.&amp;#160; And they are simple!&lt;/p&gt;  &lt;h2&gt;The Manifest&lt;/h2&gt;  &lt;p&gt;First, make sure you understand the &lt;a href="https://code.visualstudio.com/api/references/extension-manifest#marketplace-presentation-tips"&gt;extension manifest&lt;/a&gt; is not just a package.json for node packages. 
It represents metadata for your Marketplace and some interaction with VS Code itself!&lt;/p&gt;  &lt;p&gt;&lt;img title="VS Code Extension Manifest" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="VS Code Extension Manifest" src="https://storage2.timheuer.com/extension-manifest.png" width="1196" height="758" /&gt;&lt;/p&gt;      &lt;p&gt;It’s just some snippet of text, but powers a few experiences…here are some I’ve noticed provide some added Marketplace and product value.&lt;/p&gt;  &lt;h2&gt;Repository&lt;/h2&gt;  &lt;p&gt;Sounds simple, but can be helpful if your extension is Open Source.&amp;#160; The &lt;a href="https://code.visualstudio.com/api/references/extension-manifest#marketplace-presentation-tips"&gt;repository&lt;/a&gt; just surfaces a specific link in your listing directly to your repository.&lt;/p&gt;  &lt;pre class="brush: json;"&gt;&amp;quot;repository&amp;quot;: {
  &amp;quot;type&amp;quot;: &amp;quot;git&amp;quot;,
  &amp;quot;url&amp;quot;: &amp;quot;https://github.com/timheuer/resx-editor&amp;quot;
}
&lt;/pre&gt;

&lt;p&gt;Simple enough: you specify a type and a URL to the root of your repo.&lt;/p&gt;

&lt;h2&gt;Bugs&lt;/h2&gt;

&lt;p&gt;You want people to log issues right?&amp;#160; This attribute powers the ‘Report Issue’ capability within VS Code itself:&lt;/p&gt;

&lt;p&gt;&lt;img title="VS Code Issue Reporter" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="VS Code Issue Reporter" src="https://storage2.timheuer.com/issue-reporter.png" width="1026" height="462" /&gt;&lt;/p&gt;

&lt;p&gt;To add this you simply put a URL to the issues collector of your project.&amp;#160; Could be anything really, but if it is a GitHub repo then your users will be able to directly log an issue from within VS Code. If it is not, then the URL of your issues collector (e.g., Jira, Azure DevOps) will be displayed here in link format.&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;bugs&amp;quot;: {
  &amp;quot;url&amp;quot;: &amp;quot;https://github.com/timheuer/resx-editor/issues&amp;quot;
}
&lt;/pre&gt;

&lt;p&gt;This is super helpful for your users and I think a requirement!&lt;/p&gt;

&lt;h2&gt;Q&amp;amp;A&lt;/h2&gt;

&lt;p&gt;By default if you publish to the Visual Studio Marketplace, then you get a Q&amp;amp;A tab for your extension. People can come here and start asking questions. I personally think the experience is not great here right now, as the publisher is the sole respondent, the conversation isn’t threaded, etc. But you can change that.&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;qna&amp;quot;: &amp;quot;https://github.com/timheuer/resx-editor/issues&amp;quot;
&lt;/pre&gt;

&lt;p&gt;By adding this, the Q&amp;amp;A link in the marketplace will now direct people to your specific link here rather than bifurcating your Q&amp;amp;A discussions between the Marketplace and your chosen place. This can be GitHub Issues, GitHub Discussions, some other forum software, whatever. It provides a good entry point so that you don’t have to monitor yet another Q&amp;amp;A channel for your product.&lt;/p&gt;

&lt;h2&gt;Keywords&lt;/h2&gt;

&lt;p&gt;Yes you have categories (which are specific words that the Marketplace knows about), but you can also have keywords (up to 5). This is helpful when you want to add some searchable context/content that might not be in your title or brief description.&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;keywords&amp;quot;: [
  &amp;quot;resx&amp;quot;,
  &amp;quot;resw&amp;quot;,
  &amp;quot;resource&amp;quot;,
  &amp;quot;editor&amp;quot;,
  &amp;quot;viewer&amp;quot;
],
&lt;/pre&gt;

&lt;p&gt;You can only have five, so tune them well, but don’t leave these out.&amp;#160; They also display in the marketplace listing.&lt;/p&gt;

&lt;h2&gt;Badges&lt;/h2&gt;

&lt;p&gt;Who doesn’t love a good badge to show build status or versioning! One small delighter for the nerds among us is that the Marketplace/manifest supports badges from a set of &lt;a href="https://code.visualstudio.com/api/references/extension-manifest#approved-badges"&gt;approved badge providers&lt;/a&gt;.&amp;#160; Adding this in your manifest:&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;badges&amp;quot;: [
  {
    &amp;quot;url&amp;quot;: &amp;quot;https://img.shields.io/visual-studio-marketplace/v/timheuer.resx-editor?label=VS%20Code%20Marketplace&amp;amp;color=brightgreen&amp;amp;logo=visualstudiocode&amp;quot;,
    &amp;quot;href&amp;quot;: &amp;quot;https://marketplace.visualstudio.com/items?itemName=TimHeuer.resx-editor&amp;quot;,
    &amp;quot;description&amp;quot;: &amp;quot;Current Version&amp;quot;
  },
  {
    &amp;quot;url&amp;quot;: &amp;quot;https://github.com/timheuer/resx-editor/actions/workflows/build.yaml/badge.svg&amp;quot;,
    &amp;quot;href&amp;quot;: &amp;quot;https://github.com/timheuer/resx-editor/actions/workflows/build.yaml&amp;quot;,
    &amp;quot;description&amp;quot;: &amp;quot;Build Status&amp;quot;
  }
]
&lt;/pre&gt;

&lt;p&gt;now shows these by default in your Marketplace listing:&lt;/p&gt;

&lt;p&gt;&lt;img title="VS Code Marketplace listing with badges" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="VS Code Marketplace listing with badges" src="https://storage2.timheuer.com/badges-marketplace.png" width="539" height="277" /&gt;&lt;/p&gt;

&lt;p&gt;Maybe a bit ‘extra’ as my daughter would say, but I think it adds a nice touch.&lt;/p&gt;

&lt;h2&gt;Snippets&lt;/h2&gt;

&lt;p&gt;If you are a code provider or a custom editor you may want to add some &lt;a href="https://code.visualstudio.com/docs/editor/userdefinedsnippets#_create-your-own-snippets"&gt;snippets&lt;/a&gt;.&amp;#160; Your extension can directly contribute them with your other functionality.&lt;/p&gt;

&lt;pre class="brush: json;"&gt;&amp;quot;snippets&amp;quot;: [
  {
    &amp;quot;language&amp;quot;: &amp;quot;xml&amp;quot;,
    &amp;quot;path&amp;quot;: &amp;quot;./snippet/resx.json&amp;quot;
  }
]
&lt;/pre&gt;

&lt;p&gt;Then when your extension is installed these are just part of it and you don’t need to provide a ‘snippet only’ pack of sorts.&lt;/p&gt;
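
&lt;p&gt;For context, the snippet file itself is just JSON in VS Code’s snippet format. As a hypothetical example (not the actual contents of my extension), a ./snippet/resx.json that inserts a resx data element could look roughly like this:&lt;/p&gt;

&lt;pre class="brush: json;"&gt;{
  &amp;quot;ResX data entry&amp;quot;: {
    &amp;quot;prefix&amp;quot;: &amp;quot;resx-data&amp;quot;,
    &amp;quot;body&amp;quot;: [
      &amp;quot;&amp;lt;data name=\&amp;quot;$1\&amp;quot; xml:space=\&amp;quot;preserve\&amp;quot;&amp;gt;&amp;quot;,
      &amp;quot;  &amp;lt;value&amp;gt;$2&amp;lt;/value&amp;gt;&amp;quot;,
      &amp;quot;&amp;lt;/data&amp;gt;&amp;quot;
    ],
    &amp;quot;description&amp;quot;: &amp;quot;Insert a resx data/value element&amp;quot;
  }
}
&lt;/pre&gt;

&lt;p&gt;The $1/$2 markers are standard snippet tab stops, so the user tabs from the resource name to its value.&lt;/p&gt;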

&lt;h2&gt;Menus&lt;/h2&gt;

&lt;p&gt;If you are doing custom things, you likely already know about contributing menus and commands.&amp;#160; But did you know that commands appear in the command palette by default? Perhaps you don’t want that as your command is context specific: only when a certain file type is open, a specific editor is in view, etc. So you’ll want to hide them by default in the command palette using the ‘when’ clause, as in lines 5 and 11 here. I never want to show one in the command palette (when: false), and the other only under certain conditions when a specific view is open.&lt;/p&gt;



&lt;pre class="brush: json; highlight: [5,11];"&gt;&amp;quot;menus&amp;quot;: {
  &amp;quot;webview/context&amp;quot;: [
    {
      &amp;quot;command&amp;quot;: &amp;quot;resx-editor.deleteResource&amp;quot;,
      &amp;quot;when&amp;quot;: &amp;quot;config.resx-editor.experimentalDelete == true &amp;amp;&amp;amp; webviewId == 'resx-editor.editor'&amp;quot;
    }
  ],
  &amp;quot;commandPalette&amp;quot;: [
    {
      &amp;quot;command&amp;quot;: &amp;quot;resx-editor.deleteResource&amp;quot;,
      &amp;quot;when&amp;quot;: &amp;quot;false&amp;quot;
    }
  ]
}
&lt;/pre&gt;



&lt;p&gt;This enables the commands to be surfaced where you want (custom views, context menus, etc.) without them showing up as ‘anytime’ available commands.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;There is a lot more you can do and of course the most important thing is providing a useful extension (heck, even if only to you). But these are some really simple and subtle changes I noticed in my learning that I think more extension authors should take advantage of!&amp;#160; Hope this helps!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/write-an-open-ai-plugin-for-chatgpt-using-aspnet/</id>
    <title>Writing an OpenAI plugin for ChatGPT using ASP.NET Core</title>
    <updated>2023-06-17T16:50:17Z</updated>
    <published>2023-06-17T16:50:17Z</published>
    <link href="https://www.timheuer.com/blog/write-an-open-ai-plugin-for-chatgpt-using-aspnet/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="aspnet" />
    <category term="ai" />
    <content type="html">&lt;p&gt;Well it was all about AI at Microsoft Build this year for sure…lots of great discussions and demos around GitHub Copilot, OpenAI, Intelligent Apps, etc.&amp;#160; I’ve been heavily relying on GitHub Copilot recently as I’ve been spending more time in writing VS Code extensions and I’m not as familiar with TypeScript.&amp;#160; Having that AI assistant with me *in the editor* has been amazing.&lt;/p&gt;  &lt;p&gt;One of the sessions at Build was the keynote from &lt;strong&gt;Scott Guthrie&lt;/strong&gt; where VP of Product, &lt;strong&gt;Amanda Silver&lt;/strong&gt;, demonstrated building an OpenAI plugin for &lt;strong&gt;&lt;a href="https://chat.openai.com"&gt;ChatGPT&lt;/a&gt;&lt;/strong&gt;.&amp;#160; You can watch that demo starting at this timestamp as it was a part of the “&lt;a href="https://youtu.be/KMOV1Zy8YeM?t=531"&gt;Next generation AI for developers with the Microsoft Cloud&lt;/a&gt;” overall keynote.&amp;#160; It takes a simple API about products from the very famous Contoso outlet and exposes an API about products.&amp;#160; Amanda then created a plugin using Python and showed the workflow of getting this to work in ChatGPT.&amp;#160; So after a little prompting on Twitter and some change of weekend plans, I wanted to see what it would take to do this using ASP.NET Core API development.&amp;#160; Turns out it is pretty simple, so let’s dig in!&lt;/p&gt;  &lt;h1&gt;Working with ChatGPT plugins&lt;/h1&gt;  &lt;p&gt;A plugin in this case help connect the famous ChatGPT experience to third-party applications (APIs).&amp;#160; From the documentation:&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;These plugins enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions. 
For example, here is the &lt;a href="https://savvytrader.com/"&gt;Savvy Trader&lt;/a&gt; ChatGPT plugin in action where I can ask it investment questions and it becomes the responsible source for providing the data/answers to my natural language inquiry:&lt;/p&gt;    &lt;p&gt;&lt;img title="Screenshot of the Savvy Trader ChatGPT plugin" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of the Savvy Trader ChatGPT plugin" src="https://storage2.timheuer.com/savvytrader.png" width="1252" height="767" /&gt;&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;A basic plugin is a definition of a manifest that describes how ChatGPT should interact with the third-party API.&amp;#160; It’s a contract between ChatGPT, the plugin, and the API specification, using OpenAPI.&amp;#160; It’s that simple.&amp;#160; Could your existing APIs ‘just work’ as a plugin API? That’s something you’d have to consider before just randomly exposing your whole API surface area to ChatGPT. It makes more sense to be intentional about it and deliver a set of APIs that are meaningful for the AI model to query and get a response from.&amp;#160; With that said, we’ll keep on the demo/simple path for now.&lt;/p&gt;  &lt;p&gt;For now, ChatGPT plugins require two things: a ChatGPT Plus subscription to use them (plugins are now available to all Plus subscribers), and to develop one you need to be on the approved list, for which you must &lt;a href="https://openai.com/waitlist/plugins"&gt;join the waitlist to develop/deploy a plugin&lt;/a&gt; (as of the date of this writing).&lt;/p&gt;  &lt;h1&gt;Writing the API&lt;/h1&gt;  &lt;p&gt;Now the cool thing for .NET developers, namely ASP.NET Core developers, is that writing your API doesn’t require anything new for you to learn…it’s just your code.&amp;#160; Can it be enhanced with more? 
Absolutely, but as you’ll see here, we are literally keeping it simple.&amp;#160; For ours we’ll start with the simple ASP.NET Core Web API template in Visual Studio (or `dotnet new webapi --use-minimal-apis`).&amp;#160; This gives us a simple starting point for our API.&amp;#160; We’re going to follow the same sample as Amanda’s so you can delete all the weather forecast sample information in Program.cs.&amp;#160; We’re going to add in some sample fake data (products.json) which we’ll load as our ‘data source’ for the API for now.&amp;#160; We’ll load that up first:&lt;/p&gt;  &lt;pre class="brush: csharp;"&gt;// get some fake data
List&amp;lt;Product&amp;gt; products = JsonSerializer.Deserialize&amp;lt;List&amp;lt;Product&amp;gt;&amp;gt;(File.ReadAllText(&amp;quot;./Data/products.json&amp;quot;));
&lt;/pre&gt;

&lt;p&gt;Observe that I have a Product class to deserialize into, which is a pretty simple class that maps to the sample data…not terribly important for this reading.&lt;/p&gt;
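
&lt;p&gt;If you want a mental model of it, a minimal sketch could look like this (Name, Description, and Category are the properties the search endpoint relies on; anything else just depends on your sample data):&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;public class Product
{
    public string Name { get; set; }
    public string Description { get; set; }
    public string Category { get; set; }
    // ...plus whatever other fields your products.json includes
}
&lt;/pre&gt;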

&lt;p&gt;Now we want to have our OpenAPI definition crafted a little, so we’re going to modify the Swagger definition a bit.&amp;#160; The template already includes the Swashbuckle package to help us generate the OpenAPI specification needed…we just need to provide it with a bit of information.&amp;#160; I’m going to modify this to provide a better title/description (otherwise by default it uses a set of project names you probably don’t want).&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;builder.Services.AddSwaggerGen(c =&amp;gt;
{
    c.SwaggerDoc(&amp;quot;v1&amp;quot;, new Microsoft.OpenApi.Models.OpenApiInfo() { Title = &amp;quot;Contoso Product Search&amp;quot;, Version = &amp;quot;v1&amp;quot;, Description = &amp;quot;Search through Contoso's wide range of outdoor and recreational products.&amp;quot; });
});
&lt;/pre&gt;

&lt;p&gt;Now we’ll add an API for products to query our data and expose that to OpenAPI definition:&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [11-13];"&gt;app.MapGet(&amp;quot;/products&amp;quot;, (string? query = null) =&amp;gt;
{
    if (query != null) { 
        return products?.Where(p =&amp;gt; p.Name.Contains(query, StringComparison.OrdinalIgnoreCase) || 
        p.Description.Contains(query, StringComparison.OrdinalIgnoreCase) || 
        p.Category.Contains(query, StringComparison.OrdinalIgnoreCase) ); 
    }

    return products;
})
.WithName(&amp;quot;GetProducts&amp;quot;)
.WithDescription(&amp;quot;Get a list of products&amp;quot;)
.WithOpenApi();
&lt;/pre&gt;

&lt;p&gt;That’s it.&amp;#160; You can see the highlighted lines where we further annotate the endpoint for the OpenAPI specification. Now we have our API working and it will produce an OpenAPI spec by default at {host}/swagger/v1/swagger.yaml for us.&amp;#160; Note that you can further modify this location if you want by providing a different route template in the Swagger config.&lt;/p&gt;
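
&lt;p&gt;As a sketch of that route template change (this uses the RouteTemplate option from Swashbuckle’s SwaggerOptions, and the path here is just an illustrative choice), it looks something like this:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;app.UseSwagger(options =&amp;gt;
{
    // serve the spec at e.g. /openapi/v1/spec.yaml instead of the default location
    options.RouteTemplate = &amp;quot;openapi/{documentName}/spec.{json|yaml}&amp;quot;;
});
&lt;/pre&gt;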

&lt;p&gt;Now let’s move on to exposing this for ChatGPT plugins!&lt;/p&gt;

&lt;h1&gt;Exposing the API to ChatGPT&lt;/h1&gt;

&lt;p&gt;Plugins are enabled in ChatGPT by first providing a manifest that informs ChatGPT about what the plugin is, where the API definitions are, etc.&amp;#160; The manifest is requested from {yourdomain}/.well-known/ai-plugin.json.&amp;#160; This is a well-known location and it is looking for a response that conforms to the schema.&amp;#160; There are some advanced scenarios for plugin authentication, but we’ll keep it simple and expose this to all with no auth needed.&amp;#160; Details about the plugin manifest can be found here: &lt;a href="https://platform.openai.com/docs/plugins/getting-started/plugin-manifest"&gt;ai-plugin.json manifest definition&lt;/a&gt;.&amp;#160; It’s a pretty simple file.&amp;#160; You will probably need a logo for your plugin of course – maybe use AI to generate that for you ;-).&lt;/p&gt;
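
&lt;p&gt;To give you an idea of the shape, a minimal no-auth manifest looks roughly like this (the values are illustrative for the Contoso sample; see the manifest definition for the full schema):&lt;/p&gt;

&lt;pre class="brush: json;"&gt;{
  &amp;quot;schema_version&amp;quot;: &amp;quot;v1&amp;quot;,
  &amp;quot;name_for_human&amp;quot;: &amp;quot;Contoso Product Search&amp;quot;,
  &amp;quot;name_for_model&amp;quot;: &amp;quot;contosoproducts&amp;quot;,
  &amp;quot;description_for_human&amp;quot;: &amp;quot;Search through Contoso's wide range of outdoor and recreational products.&amp;quot;,
  &amp;quot;description_for_model&amp;quot;: &amp;quot;Plugin for searching through Contoso's outdoor and recreational products.&amp;quot;,
  &amp;quot;auth&amp;quot;: { &amp;quot;type&amp;quot;: &amp;quot;none&amp;quot; },
  &amp;quot;api&amp;quot;: { &amp;quot;type&amp;quot;: &amp;quot;openapi&amp;quot;, &amp;quot;url&amp;quot;: &amp;quot;/swagger/v1/swagger.yaml&amp;quot; },
  &amp;quot;logo_url&amp;quot;: &amp;quot;/logo.png&amp;quot;,
  &amp;quot;contact_email&amp;quot;: &amp;quot;noreply@microsoft.com&amp;quot;,
  &amp;quot;legal_info_url&amp;quot;: &amp;quot;https://www.microsoft.com/en-us/legal/&amp;quot;
}
&lt;/pre&gt;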

&lt;p&gt;There are a few ways you can expose this.&amp;#160; You can simply add a wwwroot folder, enable static files and drop the file in wwwroot\.well-known\ai-plugin.json.&amp;#160; To do that in your API project create the wwwroot folder, then create the .well-known folder (with the ‘.’) and put your ai-plugin.json file in that location.&amp;#160; If you go with this approach you’ll want to ensure in your Program.cs you enable static files:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;app.UseStaticFiles();
&lt;/pre&gt;

&lt;p&gt;After you have all this in place you’ll need to enable a CORS policy so that ChatGPT can access your API correctly.&amp;#160; First you will need to enable CORS (line 1 in your builder) and then configure a policy for the ChatGPT domain (line 6 in the app):&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [1,6];"&gt;builder.Services.AddCors();

...


app.UseCors(policy =&amp;gt; policy
    .WithOrigins(&amp;quot;https://chat.openai.com&amp;quot;)
    .AllowAnyMethod()
    .AllowAnyHeader());
&lt;/pre&gt;

&lt;p&gt;Now our API will be callable from the ChatGPT app.&lt;/p&gt;

&lt;h2&gt;Using Middleware to configure the manifest&lt;/h2&gt;

&lt;p&gt;As mentioned the static files approach for exposing the manifest is the simplest…but that’s no fun right?&amp;#160; We are developers!!! As I was looking at this myself, I put together a piece of ASP.NET middleware to help me configure it.&amp;#160; You can use the static files approach (in fact you’ll have to do that with your logo if hosting at the same place as your API) for sure, but just in case here’s a middleware approach that I put together.&amp;#160; First you’ll install the package &lt;a href="https://www.nuget.org/packages/TimHeuer.OpenAIPluginMiddleware"&gt;TimHeuer.OpenAIPluginMiddleware&lt;/a&gt; from NuGet.&amp;#160; Once you’ve done that, you’ll add the service and tell the pipeline to use it.&amp;#160; First add it to the services of the builder (line 1) and then tell the app to use the middleware (line 15):&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [1,15];"&gt;builder.Services.AddAiPluginGen(options =&amp;gt;
{
    options.NameForHuman = &amp;quot;Contoso Product Search&amp;quot;;
    options.NameForModel = &amp;quot;contosoproducts&amp;quot;;
    options.LegalInfoUrl = &amp;quot;https://www.microsoft.com/en-us/legal/&amp;quot;;
    options.ContactEmail = &amp;quot;noreply@microsoft.com&amp;quot;;
    options.LogoUrl = &amp;quot;/logo.png&amp;quot;;
    options.DescriptionForHuman = &amp;quot;Search through Contoso's wide range of outdoor and recreational products.&amp;quot;;
    options.DescriptionForModel = &amp;quot;Plugin for searching through Contoso's outdoor and recreational products. Use it whenever a user asks about products or activities related to camping, hiking, or climbing.&amp;quot;;
    options.ApiDefinition = new Api() { RelativeUrl = &amp;quot;/swagger/v1/swagger.yaml&amp;quot; };
});

...

app.UseAiPluginGen();
&lt;/pre&gt;

&lt;p&gt;This might be overkill, but now your API will respond to /.well-known/ai-plugin.json automatically without having to use the static files manifest approach.&amp;#160; This comes in handy for any dynamic configuration of your manifest (and was the reason I created it).&lt;/p&gt;

&lt;h1&gt;Putting it together&lt;/h1&gt;

&lt;p&gt;With all this in place, now we go to ChatGPT (remember, you need a Plus subscription) and add our plugin.&amp;#160; Since ChatGPT is a public site and we haven’t deployed our app anywhere yet, we need to be able to have ChatGPT call it.&amp;#160; &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/aspnet/core/test/dev-tunnels?view=aspnetcore-7.0"&gt;Visual Studio Dev Tunnels&lt;/a&gt;&lt;/strong&gt; to the rescue!&amp;#160; If you haven’t heard about these yet, it is the fastest and most convenient way to get a public tunnel to your dev machine right from within Visual Studio!&amp;#160; In fact, this scenario is exactly what Dev Tunnels are for!&amp;#160; In our project we’ll create a tunnel first, and make it available to everyone (ChatGPT needs public access).&amp;#160; In VS, first create a tunnel; you can do that easily from the ‘start’ button of your API in the toolbar:&lt;/p&gt;

&lt;p&gt;&lt;img title="Create a Dev Tunnel in Visual Studio" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Create a Dev Tunnel in Visual Studio" src="https://storage2.timheuer.com/devtunnelcreate1.png" width="821" height="600" /&gt;&lt;/p&gt;

&lt;p&gt;and then configure the options:&lt;/p&gt;

&lt;p&gt;&lt;img title="Dev Tunnel configuration screen" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Dev Tunnel configuration screen" src="https://storage2.timheuer.com/devtunnelcreate2.png" width="1127" height="833" /&gt;&lt;/p&gt;

&lt;p&gt;More details on these options are available at the documentation for Dev Tunnels, but these are the options I’m choosing.&amp;#160; Now once I have that the tunnel will be activated and when I run the project from within Visual Studio, it will launch under the Dev Tunnel proxy:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of app running behind a public Dev Tunnel" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of app running behind a public Dev Tunnel" src="https://storage2.timheuer.com/devtunnelcreate5.png" width="1812" height="564" /&gt;&lt;/p&gt;

&lt;p&gt;You can see my app running, responding to the /.well-known/ai-plugin.json request and serving it from a public URL.&amp;#160; Now let’s make it known to ChatGPT…&lt;/p&gt;

&lt;p&gt;First navigate to &lt;a href="https://chat.openai.com"&gt;https://chat.openai.com&lt;/a&gt; and ensure you choose the GPT-4 approach then plugins:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of the GPT-4 option on ChatGPT" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of the GPT-4 option on ChatGPT" src="https://storage2.timheuer.com/gpt4tab.png" width="697" height="559" /&gt;&lt;/p&gt;

&lt;p&gt;Once there you will see the option to specify plugins in the drop-down and then navigate to the plugin store:&lt;/p&gt;

&lt;p&gt;&lt;img title="Plugin Store link" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Plugin Store link" src="https://storage2.timheuer.com/plugin-store.png" width="712" height="597" /&gt;&lt;/p&gt;

&lt;p&gt;Click that and choose ‘Develop your own plugin’ where you will be asked to put in a URL.&amp;#160; This is the URL your manifest will be served from (you just need the root URL).&amp;#160; Again, because this needs to be public, Visual Studio Dev Tunnels will help you! I put in the URL to my dev tunnel and click next through the process (because this is development you’ll see a few warnings, etc.):&lt;/p&gt;

&lt;p&gt;&lt;img title="Develop your own plugin" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Develop your own plugin" src="https://storage2.timheuer.com/develop-plugin.png" width="859" height="241" /&gt;&lt;/p&gt;

&lt;p&gt;After that your plugin will be enabled and now I can issue a query to it and watch it work!&amp;#160; Because I’m using Visual Studio Dev Tunnels I can also set a breakpoint in my C# code and see it happening live, inspect, etc:&lt;/p&gt;

&lt;p&gt;&lt;img title="Breakpoint during debugging hit" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Breakpoint during debugging hit" src="https://storage2.timheuer.com/debug-breakpoint.png" width="2353" height="442" /&gt;&lt;/p&gt;

&lt;p&gt;A very fast way to debug my plugin before I’m fully ready for deployment!&lt;/p&gt;

&lt;h1&gt;Sample code&lt;/h1&gt;

&lt;p&gt;And now you have it.&amp;#160; Now you could actually deploy your plugin to Azure Container Apps for scale and you are ready to let everyone get recommendations on backpacks and hiking shoes from Contoso!&amp;#160; I’ve put all of this together (including some Azure deployment infrastructure scripts) in this sample repo: &lt;a href="https://github.com/timheuer/openai-plugin-aspnetcore"&gt;timheuer/openai-plugin-aspnetcore&lt;/a&gt;.&amp;#160; This uses the middleware that I created for the manifest.&amp;#160; That repo is located at &lt;a href="https://github.com/timheuer/openai-plugin-middleware"&gt;timheuer/openai-plugin-middleware&lt;/a&gt; and I’d love to hear comments on the usefulness here. There is some added code in that repo that dynamically changes some of the routes to handle the Dev Tunnel proxy URL for development.&lt;/p&gt;

&lt;p&gt;Hope this helps see the end to end of a very simple plugin using ASP.NET Core, Visual Studio, and ChatGPT with plugins!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/deploy-dotnet-apps-with-containers-in-visual-studio-fast-and-easy/</id>
    <title>Contain your excitement for ASP.NET on Azure</title>
    <updated>2023-01-27T23:14:15Z</updated>
    <published>2023-01-27T23:14:15Z</published>
    <link href="https://www.timheuer.com/blog/deploy-dotnet-apps-with-containers-in-visual-studio-fast-and-easy/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term=".net" />
    <category term="dotnet" />
    <category term="devops" />
    <category term="cloud" />
    <category term="azure" />
    <content type="html">&lt;p&gt;Okay, so I won’t quit my day job in favor of trying to come up with a witty title for a blog post.&amp;#160; But this is one thing that I’m proud to see our team deliver: one of the fastest ways to get your ASP.NET app to a container service on Azure (or elsewhere) without having to know what containers are or learn new things.&amp;#160; No really!&lt;/p&gt;  &lt;h2&gt;Cloud native&lt;/h2&gt;  &lt;p&gt;Well if you operate in the modern web world you’ve heard this term ‘cloud native’ before. And everyone has an opinion on what it means. I’m not here to pick sides and I think it means a lot of different things. One commonality it seems that most can agree on is that one aspect is of deploying a service to the cloud as ‘cloud native’ is to leverage containers.&amp;#160; If you aren’t familiar with containers, go read here: &lt;a href="https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-a-container/"&gt;What is a container?&lt;/a&gt; It’s a good primer on what they are technically but also some benefits. Once you educate yourself you’ll be able to declare yourself worthy to nod your head in cloud native conversations and every once in a while throw out comments like &lt;em&gt;“Yeah, containers will help here for us.”&lt;/em&gt; or something like that. Instantly you will be seen as smart and an authority and the accolades will start pouring in.&amp;#160; But then you may actually have to do something about it in your job/app. Hey don’t blame me, you brought this on yourself with those arrogant comments! Have no fear, Visual Studio is here to help!&lt;/p&gt;  &lt;h2&gt;Creating and deploying a container&lt;/h2&gt;  &lt;p&gt;If you haven’t spent time working with containers, you will be likely introduced to new concepts like Docker, Dockerfile, compose, and perhaps even YAML. 
In creating a container, you typically need to have a definition of what your container is, and generally this will be a Dockerfile.&amp;#160; A typical Dockerfile for a .NET Web API looks like this:&lt;/p&gt;    &lt;pre class="brush: yaml;"&gt;#See https://aka.ms/customizecontainer to learn how to customize your debug container and how Visual Studio uses this Dockerfile to build your images for faster debugging.

FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY [&amp;quot;CommerceApi.csproj&amp;quot;, &amp;quot;.&amp;quot;]
RUN dotnet restore &amp;quot;./CommerceApi.csproj&amp;quot;
COPY . .
WORKDIR &amp;quot;/src/.&amp;quot;
RUN dotnet build &amp;quot;CommerceApi.csproj&amp;quot; -c Release -o /app/build

FROM build AS publish
RUN dotnet publish &amp;quot;CommerceApi.csproj&amp;quot; -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT [&amp;quot;dotnet&amp;quot;, &amp;quot;CommerceApi.dll&amp;quot;]
&lt;/pre&gt;



&lt;p&gt;You can see a few concepts here that you’d have to understand and that’s not the purpose of this post. You’d then need to use Docker to build this container image and also to ‘push’ it to a container registry like &lt;a href="https://learn.microsoft.com/azure/container-registry/container-registry-intro"&gt;Azure Container Registry&lt;/a&gt; (ACR). For a developer this would mean you’d likely have Docker Desktop installed, which brings this set of tools to you locally to execute within your developer workflow.&amp;#160; As you develop your solution, you’ll have to keep your Dockerfile updated as it involves more projects, version changes, path changes, etc. But what if you just have a simple service, you’ve heard about containers, and you just want to get it to a container service as quickly and simply as possible?&amp;#160; Well, in Visual Studio we have you covered.&lt;/p&gt;
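&lt;p&gt;For context, here is a minimal sketch of that manual flow (hedged – the registry name &lt;em&gt;myregistry&lt;/em&gt; and image name &lt;em&gt;commerceapi&lt;/em&gt; are placeholders, not from a real project):&lt;/p&gt;

```shell
# Sketch of the manual build/push flow described above (placeholder names).
ACR=myregistry.azurecr.io      # hypothetical ACR login server
IMAGE=commerceapi              # hypothetical image name
TAG=v1
FULL_IMAGE="$ACR/$IMAGE:$TAG"
echo "Would publish $FULL_IMAGE"

# The real steps require Docker Desktop and the Azure CLI installed locally:
#   docker build -t "$FULL_IMAGE" .
#   az acr login --name myregistry
#   docker push "$FULL_IMAGE"
```

&lt;p&gt;That’s the kind of ceremony the Publish flow below removes.&lt;/p&gt;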

&lt;h3&gt;Publish&lt;/h3&gt;

&lt;p&gt;Yeah yeah, ‘friends don’t let friends…’ – c’mon, let people be (more on that later). In VS we have a great set of tools to help you rapidly get your code to various deployment endpoints. Since containers are ‘the thing’ lately as of this writing, we want to help you remove concepts and get there fast as well…in partnership with Azure.&amp;#160; Azure has a new service launched last year called &lt;strong&gt;&lt;a href="https://learn.microsoft.com/azure/container-apps/overview"&gt;Azure Container Apps&lt;/a&gt;&lt;/strong&gt; (ACA), a managed container environment that helps you scale your app. It’s a great way to get started with container deployments easily and have manageability and scale.&amp;#160; Let me show you how we help you get to ACA quickly, from your beloved IDE, with no need for a Dockerfile or other tools.&amp;#160; You’ll start with your ASP.NET Web project and start from the Publish flow (yep, right-click publish).&amp;#160; From there choose Azure and notice Azure Container Apps right there for you:&lt;/p&gt;

&lt;p&gt;&lt;img title="Visual Studio Publish dialog" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Visual Studio Publish dialog" src="https://storage2.timheuer.com/publishazablog1.png" width="1024" height="510" /&gt;&lt;/p&gt;

&lt;p&gt;After selecting that Visual Studio (VS) will help you either select existing resources that your infrastructure team helped setup for you or, if you’d like and have access to create them, create new Azure resources all from within VS easily without having to go to the portal.&amp;#160; You can then select your ACA instance:&lt;/p&gt;

&lt;p&gt;&lt;img title="Visual Studio Publish dialog with Azure" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Visual Studio Publish dialog with Azure" src="https://storage2.timheuer.com/publishazablog2.png" width="1024" height="494" /&gt;&lt;/p&gt;

&lt;p&gt;And then the container registry for your image:&lt;/p&gt;

&lt;p&gt;&lt;img title="Visual Studio Publish dialog with Azure" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Visual Studio Publish dialog with Azure" src="https://storage2.timheuer.com/publishazablog3.png" width="1024" height="479" /&gt;&lt;/p&gt;

&lt;p&gt;Now you’ll be presented with an option on how to build the container. Notice two options because we’re nice:&lt;/p&gt;

&lt;p&gt;&lt;img title="Publish with .NET SDK selection" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Publish with .NET SDK selection" src="https://storage2.timheuer.com/publishazablog4.png" width="1024" height="463" /&gt;&lt;/p&gt;

&lt;p&gt;If you still have a Dockerfile and want to go that route (read below) we enable that for you as well. But the first option is leveraging the .NET SDK that you already have (using the publish targets for the SDK). Selecting this option will be the ‘fast path’ to your publishing adventure.&lt;/p&gt;

&lt;p&gt;Then click finish and you’re done: you now have a profile ready to push a container image to a registry (ACR), then to a container app service (ACA), without having to create a Dockerfile, learn a new concept, or install other tools.&amp;#160; Click publish and you’ll see the completed results and you will now be able to strut back into your manager’s office/cube/open space bean bag and say &lt;em&gt;Hey boss, our service is all containerized and in the cloud ready to scale…where’s my promo?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;img title="Publish summary page" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Publish summary page" src="https://storage2.timheuer.com/publishazablog5.png" width="962" height="768" /&gt;&lt;/p&gt;

&lt;p&gt;VS has helped with millions of cloud deployments every month whether they be to VMs, PaaS services, Web Deploy to on-metal cloud-hosted machines, and now easily to container services like ACA.&amp;#160; It’s very helpful and fast, especially for those dev/test scenarios as you iterate on your app with others.&lt;/p&gt;

&lt;h2&gt;Leveraging continuous integration and deployment (CI/CD)&lt;/h2&gt;

&lt;p&gt;But Tim, friends don’t let friends right-click publish! Pfft, again I say, do what makes you happy and productive.&amp;#160; But also, I agree ;-).&amp;#160; Seriously though, I’ve become a believer in CI/CD for EVERYTHING I do now, no matter the size of the project. It raises confidence through repeatable builds and creates a better environment for collaboration. And here’s the good thing: VS is going to help you bootstrap your efforts here easily as well – EVEN WITH CONTAINERS! Remember that step where we selected the SDK to build our container? Well, if your VS project is within a GitHub repository (free for most cases these days, you should use it!), we’ll offer to generate an Actions workflow, which is GitHub’s CI/CD system:&lt;/p&gt;

&lt;p&gt;&lt;img title="Publish using CI/CD" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Publish using CI/CD" src="https://storage2.timheuer.com/publishazablog6.png" width="1024" height="489" /&gt;&lt;/p&gt;

&lt;p&gt;In choosing a CI/CD workflow, the CI system (in this case GitHub Actions) needs to know some more information: where to deploy, some credentials to use for deployment, etc. The cool thing is even in CI, Visual Studio will help you do all of this setup including retrieving and setting these values as secrets on your repo! Selecting this option would result in this summary for you:&lt;/p&gt;

&lt;p&gt;&lt;img title="GitHub Actions summary page" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="GitHub Actions summary page" src="https://storage2.timheuer.com/publishazablog7.png" width="1007" height="768" /&gt;&lt;/p&gt;

&lt;p&gt;And the resulting workflow in an Actions YAML file in your project:&lt;/p&gt;



&lt;pre class="brush: yaml;"&gt;name: Build and deploy .NET application to container app commerceapp
on:
  push:
    branches:
    - main
env:
  CONTAINER_APP_CONTAINER_NAME: commerceapi
  CONTAINER_APP_NAME: commerceapp
  CONTAINER_APP_RESOURCE_GROUP_NAME: container-apps
  CONTAINER_REGISTRY_LOGIN_SERVER: XXXXXXXXXXXX.azurecr.io
  DOTNET_CORE_VERSION: 7.0.x
  PROJECT_NAME_FOR_DOCKER: commerceapi
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout to the branch
      uses: actions/checkout@v3
    - name: Setup .NET SDK
      uses: actions/setup-dotnet@v1.8.0
      with:
        include-prerelease: True
        dotnet-version: ${{ env.DOTNET_CORE_VERSION }}
    - name: Log in to container registry
      uses: azure/docker-login@v1
      with:
        login-server: ${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }}
        username: ${{ secrets.timacregistry_USERNAME_F84D }}
        password: ${{ secrets.timacregistry_PASSWORD_F84D }}
    - name: Build and push container image to registry
      run: dotnet publish -c Release -r linux-x64 -p:PublishProfile=DefaultContainer -p:ContainerImageTag=${{ github.sha }} --no-self-contained -p:ContainerRegistry=${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }} -bl
    - name: Upload binlog for investigation
      uses: actions/upload-artifact@v3
      with:
        if-no-files-found: error
        name: binlog
        path: msbuild.binlog
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - name: Azure Login
      uses: azure/login@v1
      with:
        creds: ${{ secrets.commerceapp_SPN }}
    - name: Deploy to containerapp
      uses: azure/CLI@v1
      with:
        inlineScript: &amp;gt;
          az config set extension.use_dynamic_install=yes_without_prompt

          az containerapp registry set --name ${{ env.CONTAINER_APP_NAME }} --resource-group ${{ env.CONTAINER_APP_RESOURCE_GROUP_NAME }} --server ${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }} --username ${{ secrets.timacregistry_USERNAME_F84D }} --password ${{ secrets.timacregistry_PASSWORD_F84D }}

          az containerapp update --name ${{ env.CONTAINER_APP_NAME }} --container-name ${{ env.CONTAINER_APP_CONTAINER_NAME }} --resource-group ${{ env.CONTAINER_APP_RESOURCE_GROUP_NAME }} --image ${{ env.CONTAINER_REGISTRY_LOGIN_SERVER }}/${{ env.PROJECT_NAME_FOR_DOCKER }}:${{ github.sha }}
    - name: logout
      run: &amp;gt;
        az logout

&lt;/pre&gt;



&lt;p&gt;Boom! So now you CAN use right-click publish and still get started with CI/CD deploying to the cloud!&amp;#160; Strut right back into that office: &lt;em&gt;Hey boss, I took the extra step and setup our initial CI/CD workflow for the container service so the team can just focus on coding and checking it in…gonna take the rest of the week off.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Cool, but I have advanced needs…&lt;/h2&gt;

&lt;p&gt;Now, now, I know there will always be cases where your needs are different, this is too simple, etc. and YOU ARE RIGHT! There are limitations to this approach which we outlined in our &lt;a href="https://devblogs.microsoft.com/dotnet/announcing-builtin-container-support-for-the-dotnet-sdk/"&gt;initial support for the SDK container build capabilities&lt;/a&gt;.&amp;#160; Things like customizing your base container image, tag names, ports, etc. are all easily customizable in your project file as they feed into the build pipeline, so we have you covered on this type of customization. As your solution grows and your full microservices needs get more complex, you may outgrow this simplicity…we hope that means your app is hugely successful and the profits are rolling in! You’ll likely grow into the Dockerfile scenarios and that’s okay…you’ll have identified your needs and will have already set up your starting CI/CD workflow that you can progressively grow as needed. We will continue to listen and see about ways we can improve this capability as developers like you give us feedback!&lt;/p&gt;
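&lt;p&gt;As a hedged example of that project file customization (property and item names per the SDK container support announcement linked above; the values here are made up):&lt;/p&gt;

```xml
&amp;lt;!-- hypothetical values; these feed the SDK container build --&amp;gt;
&amp;lt;PropertyGroup&amp;gt;
  &amp;lt;ContainerBaseImage&amp;gt;mcr.microsoft.com/dotnet/aspnet:7.0&amp;lt;/ContainerBaseImage&amp;gt;
  &amp;lt;ContainerImageName&amp;gt;commerceapi&amp;lt;/ContainerImageName&amp;gt;
  &amp;lt;ContainerImageTag&amp;gt;1.2.3&amp;lt;/ContainerImageTag&amp;gt;
&amp;lt;/PropertyGroup&amp;gt;
&amp;lt;ItemGroup&amp;gt;
  &amp;lt;ContainerPort Include=&amp;quot;8080&amp;quot; Type=&amp;quot;tcp&amp;quot; /&amp;gt;
&amp;lt;/ItemGroup&amp;gt;
```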

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Our goal in Visual Studio is to help you be productive with a range of tasks. Moving to ‘cloud native’ can be another thing your team has to worry about, and as you start your journey (or perhaps look to simplify a bit) VS aims to be your partner there and continue to help you be productive in getting your code to the cloud quickly, with as much friction removed from your normal workflow as possible. Here are a few links to read more, in more corporate speak, about these capabilities:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://devblogs.microsoft.com/dotnet/announcing-builtin-container-support-for-the-dotnet-sdk/"&gt;Announcing built-in container support for the .NET SDK&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/container-apps/deploy-visual-studio"&gt;Tutorial: Deploy to Azure Container Apps using Visual Studio&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://docs.docker.com/engine/reference/builder/"&gt;Dockerfile reference&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/container-apps/overview"&gt;Azure Container Apps&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://twitter.com/VisualStudio"&gt;@VisualStudio team on Twitter&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/use-github-composite-actions-for-templates-in-workflows/</id>
    <title>GitHub Composite Actions are fast way to templatize workflows</title>
    <updated>2021-12-17T18:49:16Z</updated>
    <published>2021-12-17T18:49:16Z</published>
    <link href="https://www.timheuer.com/blog/use-github-composite-actions-for-templates-in-workflows/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="github" />
    <category term="devops" />
    <content type="html">&lt;p&gt;I’ve had a love/hate relationship with CI/CD for a long while ever since I remember it being a thing. In those early days the ‘tools’ were basically everyone’s homegrown scripts, batch files, random daemon hosts, etc. Calling something a workflow was a stretch. It was for that reason I just wasn’t a believer, it was just too ‘hard’ for the average dev. I, like many, would build from my machine and direct deploy or copy over to file shares (NOTE: LOTS of people still do this). Well the tools have gotten WAY better across the board from many different vendors and your options for great tools exist. I’ve been privileged to work with &lt;a href="https://twitter.com/damovisa"&gt;Damian Brady&lt;/a&gt; and Abel Wang to educate me on the ways of CI/CD a bit. I know Damian has a mantra about right-click publish, but that only made me want to make it simpler for devs.&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Did you know that for most projects in .NET working in VS you can use &lt;a href="https://devblogs.microsoft.com/visualstudio/using-github-actions-in-visual-studio-is-as-easy-as-right-click-and-publish/"&gt;right-click Publish to generate a CI/CD workflow&lt;/a&gt; for you, further reducing the complexity?&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;Well, I’m a believer now and I make it part of my mission to improve the tool experience for .NET devs and also look to convince/advocate for .NET developers to use CI/CD even in the smallest of projects. I’ve honed my own workflows to now I truly just worry about development…releases just take care of themselves. It’s glorious and frees so much time. I go out of my way now when I see friend’s projects who are on GitHub but not using Actions, for example. Recently I was working with &lt;a href="https://twitter.com/mkristensen/"&gt;Mads Kristensen&lt;/a&gt; on some things and asked him if he’d consider using Actions. 
And in a few minutes I submitted a first PR to one of his projects showing how simple it was. I started from using my own &lt;a href="https://timheuer.com/blog/generate-github-actions-workflow-from-cli/"&gt;dotnet new workflow&lt;/a&gt; tool, as not all project types are supported by the right-click Publish-&amp;gt;Actions work Visual Studio has done yet. This helps get started with the basics.&lt;/p&gt;  &lt;p&gt;After a few back-and-forths with Mads, he wanted to encapsulate more…the files were too busy for him LOL. Enter &lt;a href="https://github.blog/changelog/2020-08-07-github-actions-composite-run-steps/"&gt;composite Actions&lt;/a&gt; (or technically composite &lt;em&gt;run steps&lt;/em&gt;). This was my chance to look into these as I hadn’t really had a need yet. You should read the docs, but my lay explanation is that composite run steps enable you to basically templatize some of your steps into a single encapsulation…and VERY simply.&amp;#160; &lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of GitHub Action YAML file" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of GitHub Action YAML file" src="https://storage2.timheuer.com/steptemplates.png" width="1324" height="525" /&gt;&lt;/p&gt;  &lt;p&gt;Let’s look at one example with Mads’ desires. Mads’ projects are usually &lt;a href="https://github.com/VsixCommunity/"&gt;Visual Studio extensibility&lt;/a&gt; projects and require a few more things to build than just the .NET SDK. In this particular instance Mads needed the .NET SDK, NuGet, and MSBuild to be set up.&amp;#160; No problem, I started out with this, because duh, why not:&lt;/p&gt;  &lt;pre class="brush: yaml;"&gt;  # prior portion of jobs removed for brevity
  steps:
    - name: Setup dotnet
      uses: actions/setup-dotnet@v1.9.0
      with:
        dotnet-version: 6.0.x

    - name: Setup MSBuild
      uses: microsoft/setup-msbuild@v1.1

    - name: Setup NuGet
      uses: NuGet/setup-nuget@v1.0.5
&lt;/pre&gt;

&lt;p&gt;But wanting less text, we discussed and I encapsulated these three in one single step using a new composite action. &lt;a href="https://docs.github.com/en/actions/creating-actions/creating-a-composite-action"&gt;Creating a composite action&lt;/a&gt; is simple and enables you to deploy it in a few ways. First, you can just keep these in your own repo itself without having to release anything, etc. This is helpful when yours are very repo-specific and nobody is sharing them across orgs/repos. Let’s look at the above and how we might encapsulate it. I still want to enable SDK version input to start, so I need an input parameter for that. So in the repo I’ll create two new folders under the .github/workflows folder, creating a new path called .github/workflows/composite/bootstrap-dotnet, and then place a new action.yaml file in that directory. My action.yaml file looks like this:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [6,27,30];"&gt;# yaml-language-server: $schema=https://json.schemastore.org/github-action.json
name: 'Setup .NET build dependencies'
description: 'Sets up the .NET dependencies of MSBuild, SDK, NuGet'
branding:
  icon: download
  color: purple
inputs:
  dotnet-version:
    description: 'What .NET SDK version to use'
    required: true
    default: 6.0.x
  sdk:
    description: 'Setup .NET SDK'
    required: false
    default: 'true'
  msbuild:
    description: 'Setup MSBuild'
    required: false
    default: 'true'
  nuget:
    description: 'Setup NuGet'
    required: false
    default: 'true'
runs:
  using: &amp;quot;composite&amp;quot;
  steps:
    - name: Setup dotnet
      if: inputs.sdk == 'true'
      uses: actions/setup-dotnet@v1.9.0
      with:
        dotnet-version: ${{ inputs.dotnet-version }}

    - name: Setup MSBuild
      if: inputs.msbuild == 'true' &amp;amp;&amp;amp; runner.os == 'Windows'
      uses: microsoft/setup-msbuild@v1.1

    - name: Setup NuGet
      if: inputs.nuget == 'true'
      uses: NuGet/setup-nuget@v1.0.5
&lt;/pre&gt;

&lt;p&gt;Let’s break it down. Composite actions still have the same setup as other custom actions enabling you to have branding/name/description/etc. as well as inputs as I’ve defined starting at line 6. I can then use these inputs in later steps (line 27/30). As you can see this action basically is a template for other steps that use other actions…simple!!! Now in the primary workflow for the project it looks like this:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [14];"&gt;# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: &amp;quot;PR Build&amp;quot;

on: [pull_request]
      
jobs:
  build:
    name: Build 
    runs-on: windows-2022
      
    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET build dependencies
      uses: ./.github/workflows/composite/bootstrap-dotnet
      with:
        nuget: 'false'
&lt;/pre&gt;

&lt;p&gt;Notice the path to the workflow itself using the new folder structure (line 14). Now when this workflow runs it will bring this composite action in and also run its steps…beautiful. If the action is more generic and you want to move it out of the repo, you can do that. In fact in this one we did just that and you can see it at &lt;a href="https://github.com/timheuer/bootstrap-dotnet"&gt;timheuer/bootstrap-dotnet&lt;/a&gt; and be able to use it just like any other action in your setup. An example of a change like the above is as simple as:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [14];"&gt;# yaml-language-server: $schema=https://json.schemastore.org/github-workflow.json
name: &amp;quot;PR Build&amp;quot;

on: [pull_request]

jobs:
  build:
    name: Build 
    runs-on: windows-2022
      
    steps:
    - uses: actions/checkout@v2

    - name: Setup .NET build dependencies
      uses: timheuer/bootstrap-dotnet@v1
      with:
        nuget: 'false'
&lt;/pre&gt;

&lt;p&gt;Done! What’s also great is that because this still is a legit GitHub Action, you can publish it on the marketplace for others to discover and use (hence the branding). &lt;a href="https://github.com/marketplace/actions/setup-net-build-dependencies"&gt;Here is this one we just demonstrated above in the marketplace&lt;/a&gt;: &lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of GitHub Action marketplace listing" style="margin: 0px auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of GitHub Action marketplace listing" src="https://storage2.timheuer.com/setupmarketplace.png" width="1674" height="967" /&gt;&lt;/p&gt;

&lt;p&gt;So that’s a simple example of truly a template/merge of other existing actions. But can you use this method to create a custom action that just uses script for example, like PowerShell? YES! Let’s take another one of these examples that uploads the VSIX from our project to the &lt;a href="https://www.vsixgallery.com"&gt;Open VSIX gallery&lt;/a&gt;. Mads was using a PowerShell script that does his upload for him, so I’m copying that into a new composite action and making some inputs and then he can use it.&amp;#160; Here’s the full composite action:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [6];"&gt;# yaml-language-server: $schema=https://json.schemastore.org/github-action.json
name: 'Publish to OpenVSIX Gallery'
description: 'Publishes a Visual Studio extension (VSIX) to the OpenVSIX Gallery'
branding:
  icon: upload-cloud
  color: purple
inputs:
  readme:
    description: 'Path to readme file'
    required: false
    default: ''
  vsix-file:
    description: 'Path to VSIX file'
    required: true
runs:
  using: &amp;quot;composite&amp;quot;
  steps:
    - name: Publish to Gallery
      id: publish_gallery
      shell: pwsh
      run: |
        $repo = &amp;quot;&amp;quot;
        $issueTracker = &amp;quot;&amp;quot;

        # If no readme URL was specified, default to &amp;quot;&amp;lt;branch_name&amp;gt;/README.md&amp;quot;
        if (-not &amp;quot;${{ inputs.readme }}&amp;quot;) {
          $readmeUrl = &amp;quot;$Env:GITHUB_REF_NAME/README.md&amp;quot;
        } else {
          $readmeUrl = &amp;quot;${{ inputs.readme }}&amp;quot;
        }

        $repoUrl = &amp;quot;$Env:GITHUB_SERVER_URL/$Env:GITHUB_REPOSITORY/&amp;quot;

        [Reflection.Assembly]::LoadWithPartialName(&amp;quot;System.Web&amp;quot;) | Out-Null
        $repo = [System.Web.HttpUtility]::UrlEncode($repoUrl)
        $issueTracker = [System.Web.HttpUtility]::UrlEncode(($repoUrl + &amp;quot;issues/&amp;quot;))
        $readmeUrl = [System.Web.HttpUtility]::UrlEncode($readmeUrl)

        # $fileNames = (Get-ChildItem $filePath -Recurse -File)
        $vsixFile = &amp;quot;${{ inputs.vsix-file }}&amp;quot;
        $vsixUploadEndpoint = &amp;quot;https://www.vsixgallery.com/api/upload&amp;quot;

        [string]$url = ($vsixUploadEndpoint + &amp;quot;?repo=&amp;quot; + $repo + &amp;quot;&amp;amp;issuetracker=&amp;quot; + $issueTracker + &amp;quot;&amp;amp;readmeUrl=&amp;quot; + $readmeUrl)
        [byte[]]$bytes = [System.IO.File]::ReadAllBytes($vsixFile)
             
        try {
            $webclient = New-Object System.Net.WebClient
            $webclient.UploadFile($url, $vsixFile) | Out-Null
            'OK' | Write-Host -ForegroundColor Green
        }
        catch{
            'FAIL' | Write-Error
            $_.Exception.Response.Headers[&amp;quot;x-error&amp;quot;] | Write-Error
        }

&lt;/pre&gt;

&lt;p&gt;You can see it is mostly a PowerShell script and has the inputs (line 6). And here it is in use in a project:&lt;/p&gt;



&lt;pre class="brush: yaml;"&gt;# other steps removed for brevity in snippet
  publish:
    runs-on: ubuntu-latest
    steps:

      - uses: actions/checkout@v2

      - name: Download Package artifact
        uses: actions/download-artifact@v2
        with:
          name: RestClientVS.vsix

      - name: Upload to Open VSIX
        uses: timheuer/openvsixpublish@v1
        with:
          vsix-file: RestClientVS.vsix
&lt;/pre&gt;



&lt;p&gt;Pretty cool when your custom action is a script like this and you don’t need any funky containers, or a node app that just launches pwsh.exe, or stuff like that. LOVE IT! Here’s the repo for this one to see more: &lt;a href="https://github.com/timheuer/openvsixpublish"&gt;timheuer/openvsixpublish&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This will definitely be the first approach I consider when needing other simple actions for my projects or others. The simplicity and flexibility in ‘templatizing’ some steps is really great!&lt;/p&gt;

&lt;p&gt;Hope this helps!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/dotnet-cycling-kit/</id>
    <title>Limited Edition Custom .NET Cycling Jersey</title>
    <updated>2021-11-12T22:20:17Z</updated>
    <published>2021-11-12T22:20:17Z</published>
    <link href="https://www.timheuer.com/blog/dotnet-cycling-kit/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term=".net" />
    <category term="dotnet" />
    <content type="html">&lt;p&gt;Well with the &lt;a href="https://devblogs.microsoft.com/dotnet/announcing-net-6/"&gt;release of .NET 6&lt;/a&gt;, lots of excitement around the platform and me being the nerd I am with a love for cycling, it’s time to open up for another round of ordering for the highly exclusive limited edition .NET Cycling Kit (jersey and bib shorts).&lt;/p&gt;  &lt;p&gt;&lt;img title=".NET Cycling Kit" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt=".NET Cycling Kit" src="https://storage2.timheuer.com/dotnetkit.png" width="810" height="592" /&gt;&lt;/p&gt;  &lt;p&gt;Last year I had created these using the .NET Foundation assets and in accordance with the brand guidelines for the .NET project (did you know we had a branding guidelines?! Me neither, but now you do).&amp;#160; Well, we’re opening it up for an exclusive another round (this is round 3) and probably the last (famous last words).&lt;/p&gt;  &lt;p&gt;Here’s the details:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;These are high-quality race-fit cycling kits…yes they are not cheap…nor is the quality.&lt;/li&gt;    &lt;li&gt;These are race-fit, not ‘club fit’ so they are meant to fit tighter and in riding position&lt;/li&gt;    &lt;li&gt;Sizing: &lt;a href="https://www.elielcycling.com/pages/custom-size-chart"&gt;Eliel Cycling Sizing Guide&lt;/a&gt; (for context I am 5’9” [175cm] and ~195lbs [88.5kg] and I prefer a Large top and Large bib shorts)&lt;/li&gt;    &lt;li&gt;These are all 100% custom made-to-order – there is no ‘stock’ for immediate ship&lt;/li&gt;    &lt;li&gt;For patience, upon store closing these will take 10-12 weeks of production – must be patient :-)&lt;/li&gt;    &lt;li&gt;They are custom, sales final, no returns&lt;/li&gt;    &lt;li&gt;They look awesome and are comfortable&lt;/li&gt;    &lt;li&gt;You will be the envy of all cyclists who are .NET developers that 
didn’t get one&lt;/li&gt;    &lt;li&gt;I make NO money on this&lt;/li&gt;    &lt;li&gt;Microsoft makes no money on this&lt;/li&gt;    &lt;li&gt;Microsoft has nothing to do with this&lt;/li&gt;    &lt;li&gt;There is no telemetry collected on the kit&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;So how do you get one?&amp;#160; Simple, click here: &lt;a href="https://bit.ly/dotnetkit3"&gt;&lt;strong&gt;.NET Cycling Kit (Round 3)&lt;/strong&gt;&lt;/a&gt; – you order direct from the manufacturer in California and pay them directly.&amp;#160; Please read the site and details carefully, including the notes for EU ordering.&lt;/p&gt;  &lt;p&gt;If you have any questions about these, the best place is to ping me on Twitter &lt;a href="https://twitter.com/timheuer"&gt;@timheuer&lt;/a&gt; with questions I may be able to answer on sizing or otherwise.&amp;#160; I would love to &lt;a href="https://twitter.com/search?q=cyclistsofdotnet&amp;amp;src=typeahead_click"&gt;see more .NET cyclists with their kits in the wild worldwide&lt;/a&gt;.&amp;#160; &lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/manually-force-a-failure-in-github-action-step/</id>
    <title>Forcing a failure in GitHub Actions based on a condition</title>
    <updated>2021-04-30T17:38:58Z</updated>
    <published>2021-04-30T17:38:58Z</published>
    <link href="https://www.timheuer.com/blog/manually-force-a-failure-in-github-action-step/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term=".net" />
    <category term="dotnet" />
    <category term="github" />
    <category term="devops" />
    <content type="html">&lt;p&gt;Last night I got tweeted at asking me how one could halt a CI workflow in GitHub Actions on a condition.&amp;#160; This particular condition was if the code coverage tests failed a certain coverage threshold.&amp;#160; I’m not a regular user of code coverage tools like &lt;a href="https://github.com/coverlet-coverage/coverlet"&gt;Coverlet&lt;/a&gt; but I went Googling for some answers and oddly did not find the obvious answer that was pointed out to me this morning.&amp;#160; Regardless the journey to discover an alternate means was interesting to me so I’ll share what I did that I feel is super hacky, but works and is a similar method I used for passing some version information in other workflows.&lt;/p&gt;  &lt;p&gt;First, the simple solution for if you are using Coverlet and want to fail a build and thus a CI workflow is to use the &lt;a href="https://github.com/coverlet-coverage/coverlet/blob/master/Documentation/MSBuildIntegration.md"&gt;MSBuild integration option&lt;/a&gt; and then you can simply use:&lt;/p&gt;  &lt;pre class="brush: bash;"&gt;dotnet test /p:CollectCoverage=true /p:Threshold=80
&lt;/pre&gt;

&lt;p&gt;I honestly felt embarrassed that I didn’t find this simple option, but oh well, it is there and is definitely the simplest path if you can use it.&amp;#160; But there you have it.&amp;#160; When used in an Actions workflow, if the threshold isn’t met, this will fail that step and you are done.&lt;/p&gt;
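&lt;p&gt;In a workflow, that gate is just the test step itself (a sketch – the project path here is hypothetical):&lt;/p&gt;

```yaml
- name: Test with coverage threshold
  # Fails this step (and the workflow) if line coverage is below 80%
  run: dotnet test MyApp.Tests/MyApp.Tests.csproj /p:CollectCoverage=true /p:Threshold=80
```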

&lt;p&gt;&lt;img title="Picture of failed GitHub Actions step" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Picture of failed GitHub Actions step" src="https://storage2.timheuer.com/failedtest.png" width="1798" height="1035" /&gt;&lt;/p&gt;

&lt;h2&gt;Creating your condition to inspect&lt;/h2&gt;

&lt;p&gt;But let’s say you need to fail for a different reason or, as in this example, you couldn’t use the MSBuild integration and instead are just using the VSTest integration with a collector.&amp;#160; Well, we’ll use this code coverage scenario as an &lt;em&gt;example&lt;/em&gt; but the key step here is focusing on how to fail a step.&amp;#160; Your condition may be anything but I suspect it is usually based on some previous step’s output or value.&amp;#160; Well first, if you are relying on previous steps’ values, be sure you understand the power of using &lt;a href="https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#outputs"&gt;outputs&lt;/a&gt;.&amp;#160; This is, I think, the best way to ‘set state’ for certain things across steps.&amp;#160; A step can set an output value either in the Action itself, or in the workflow YAML using a shell command that calls the &lt;a href="https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-output-parameter"&gt;::set-output method&lt;/a&gt;.&amp;#160; Let’s look at an example…first the initial step (again using our code coverage scenario):&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;- name: Test
  run: dotnet test XUnit.Coverlet.Collector/XUnit.Coverlet.Collector.csproj --collect:&amp;quot;XPlat Code Coverage&amp;quot;
&lt;/pre&gt;

&lt;p&gt;This basically will produce an XML output ‘report’ that contains the values we want to extract.&amp;#160; Namely it’s in this snippet:&lt;/p&gt;



&lt;pre class="brush: xml; highlight: [2];"&gt;&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;utf-8&amp;quot;?&amp;gt;
&amp;lt;coverage line-rate=&amp;quot;0.85999999999&amp;quot; branch-rate=&amp;quot;1&amp;quot; version=&amp;quot;1.9&amp;quot; timestamp=&amp;quot;1619804172&amp;quot; lines-covered=&amp;quot;15&amp;quot; lines-valid=&amp;quot;15&amp;quot; branches-covered=&amp;quot;8&amp;quot; branches-valid=&amp;quot;8&amp;quot;&amp;gt;
  &amp;lt;sources&amp;gt;
    &amp;lt;source&amp;gt;D:\&amp;lt;/source&amp;gt;
  &amp;lt;/sources&amp;gt;
  &amp;lt;packages&amp;gt;
    &amp;lt;package name=&amp;quot;Numbers&amp;quot; line-rate=&amp;quot;1&amp;quot; branch-rate=&amp;quot;1&amp;quot; complexity=&amp;quot;8&amp;quot;&amp;gt;
      &amp;lt;classes&amp;gt;
&lt;/pre&gt;



&lt;p&gt;I want the line-rate value (line 2) in this XML to be my condition…so I’m going to create a new Actions step to extract the value by parsing the XML using a PowerShell cmdlet.&amp;#160; Once I have that I will set the value as the &lt;em&gt;output&lt;/em&gt; of this step for later use:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [2,9];"&gt;- name: Get Line Rate from output
  id: get_line_rate
  shell: pwsh  
  run: |
    $covreport = get-childitem -Filter coverage.cobertura.xml -Recurse | Sort-Object -Descending -Property LastWriteTime -Top 1
    Write-Output $covreport.FullName
    [xml]$covxml = Get-Content -Path $covreport.FullName
    $lineRate = $covxml.coverage.'line-rate'
    Write-Output &amp;quot;::set-output name=lineRate::$lineRate&amp;quot;
&lt;/pre&gt;

&lt;p&gt;As you can see in lines 2 and 9 I have set a specific ID for my step and then used the set-output method to write a value to an output of the step named ‘lineRate’ that can be used later.&amp;#160; So now let’s use it!&lt;/p&gt;
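&lt;p&gt;As an aside, newer runners also support an environment-file mechanism for outputs; the last line of that step could instead be written as follows (a sketch using the same lineRate variable from above):&lt;/p&gt;

&lt;pre class="brush: powershell;"&gt;# equivalent of ::set-output on newer runners
Add-Content -Path $env:GITHUB_OUTPUT -Value &amp;quot;lineRate=$lineRate&amp;quot;
&lt;/pre&gt;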

&lt;h2&gt;Evaluating your condition and failing the step manually&lt;/h2&gt;

&lt;p&gt;Now that we have our condition, we want to fail the run if the condition evaluates a certain way…in our case if the code coverage line rate isn’t meeting our threshold.&amp;#160; To do this we’re going to use a specific GitHub Action called &lt;a href="https://github.com/actions/github-script"&gt;actions/github-script&lt;/a&gt; which allows you to run some of the GitHub API directly in a script.&amp;#160; This is great as it allows us to use the &lt;a href="https://github.com/actions/core"&gt;core library&lt;/a&gt; which has a set of methods for success and failure!&amp;#160; Let’s take a look at how we combine the condition with the setting:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [2,6];"&gt;- name: Check coverage tolerance
  if: ${{ steps.get_line_rate.outputs.lineRate &amp;lt; 0.9 }}
  uses: actions/github-script@v3
  with:
    script: |
        core.setFailed('Coverage test below tolerance')
&lt;/pre&gt;

&lt;p&gt;Okay, so we did a few things here.&amp;#160; First we are defining this step as executing the core.setFailed() method…that’s specifically what this step will do, that’s it…it will fail the run with a message we put in there.&amp;#160; &lt;strong&gt;*BUT*&lt;/strong&gt; we have put a condition on the step itself using an &lt;a href="https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#job-status-check-functions"&gt;if condition&lt;/a&gt;.&amp;#160; In line 6 we are executing the setFailed function with our custom message that will show in the runner log.&amp;#160; On line 2 we have set the condition that determines whether this step runs at all.&amp;#160; Notice we are using the ID of a previous step (get_line_rate) and the output parameter (lineRate) and then doing a quick math check.&amp;#160; If this condition is met, then this step will run.&amp;#160; If the condition is NOT met, this step will not run, but also doesn’t fail and the run can continue.&amp;#160; Observe that if the condition is met, our step will fail and the run fails:&lt;/p&gt;

&lt;p&gt;&lt;img title="Failed run based on condition" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Failed run based on condition" src="https://storage2.timheuer.com/failconditionstep.png" width="829" height="340" /&gt;&lt;/p&gt;

&lt;p&gt;If the condition is NOT met the step is ignored, the run continues:&lt;/p&gt;

&lt;p&gt;&lt;img title="Condition not met" style="margin: 0px auto; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Condition not met" src="https://storage2.timheuer.com/conditionnotmetstep.png" width="835" height="281" /&gt;&lt;/p&gt;

&lt;p&gt;Boom, that’s it!&amp;#160; &lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;This was just one scenario but the key here is if you need to manually control a fail condition or otherwise evaluate conditions, using the &lt;strong&gt;actions/github-script&lt;/strong&gt; Action is a simple way to do a quick insertion to control your run based on a condition.&amp;#160; It’s quick and effective for some scenarios where your steps may not have natural success/fail exit codes that would otherwise fail your CI run.&amp;#160; What do you think? Is there a better/easier way that I missed when you don’t have clear exit codes?&lt;/p&gt;
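&lt;p&gt;For comparison, one alternative I didn’t explore: if you don’t need the GitHub API at all, a plain shell step guarded by the same if condition can fail the run just by exiting non-zero.&amp;#160; A minimal sketch reusing the step and output names from above:&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;- name: Check coverage tolerance
  if: ${{ steps.get_line_rate.outputs.lineRate &amp;lt; 0.9 }}
  run: |
    echo &amp;quot;Coverage test below tolerance&amp;quot;
    exit 1
&lt;/pre&gt;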

&lt;p&gt;Hope this helps someone!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/add-approval-workflow-to-github-actions/</id>
    <title>Adding approval workflow to your GitHub Action</title>
    <updated>2020-12-16T22:37:24Z</updated>
    <published>2020-12-16T22:37:24Z</published>
    <link href="https://www.timheuer.com/blog/add-approval-workflow-to-github-actions/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="github" />
    <category term="azure" />
    <category term="devops" />
    <category term="dotnet" />
    <content type="html">&lt;p&gt;One of the biggest things that I’ve wanted (and have heard others) when adopting GitHub Actions is the use of some type of approval flow.&amp;#160; Until now (roughly the time of this writing) that wasn’t possible easily in Actions.&amp;#160; The concept of how Azure Pipelines does it is so nice and simple to understand in my opinion and a lot of the attempts by others using various Actions stitched together made it tough to adopt.&amp;#160; Well, announced at &lt;a href="https://githubuniverse.com/"&gt;GitHub Universe&lt;/a&gt;, reviewers is now in Beta for Actions customers!!!&amp;#160; Yes!!!&amp;#160; I spent some time setting up a flow with an ASP.NET 5 web app and Azure as my deployment to check it out.&amp;#160; I wanted to share my write-up in hopes it might help others get started quickly as well.&amp;#160; First I’ll acknowledge that this is the simplest getting started you can have and your workflows may be more complex, etc.&amp;#160; If you’d like to have a primer and see some other updates on Actions, be sure to check out Chris Patterson’s session from Universe: &lt;a href="https://githubuniverse.com/Continuous-delivery-with-GitHub-Actions/"&gt;Continuous delivery with GitHub Actions&lt;/a&gt;.&amp;#160; With that let’s get started!&lt;/p&gt;  &lt;h2&gt;&lt;/h2&gt;  &lt;h2&gt;Setting things up&lt;/h2&gt;  &lt;p&gt;First we’ll need a few things to get started.&amp;#160; These are things I’m not going to walk through here but will explain &lt;em&gt;briefly&lt;/em&gt; what/why it is needed for my example.&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;An Azure account – I’m using this sample with Azure as my deployment because that’s where I do most of my work.&amp;#160; You can &lt;strong&gt;&lt;a href="https://azure.com/free"&gt;get a free Azure account&lt;/a&gt;&lt;/strong&gt; as well and do exactly this without any obligation.&lt;/li&gt;    &lt;li&gt;Set up an &lt;a href="https://docs.microsoft.com/azure/app-service/"&gt;Azure 
App Service&lt;/a&gt; resource – I’m using App Service Linux and just created it using basically all the defaults.&amp;#160; This is just a sample so those are fine for me.&amp;#160; I also created these using the portal to have everything set up in advance.&lt;/li&gt;    &lt;li&gt;I added one &lt;a href="https://docs.microsoft.com/azure/app-service/configure-common"&gt;Application Setting&lt;/a&gt; to my App Service called APPSERVICE_ENVIRONMENT so I could just extract a string noting which environment I was in and display it on the home page.&lt;/li&gt;    &lt;li&gt;In your App Service create a &lt;a href="https://docs.microsoft.com/azure/app-service/deploy-staging-slots"&gt;Deployment Slot&lt;/a&gt; and name it “staging” and choose to clone the main service settings (to get the previous app setting I noted).&amp;#160; I then changed the app setting value for this deployment slot.&lt;/li&gt;    &lt;li&gt;&lt;a href="https://docs.microsoft.com/visualstudio/deployment/tutorial-import-publish-settings-azure?view=vs-2019#create-the-publish-settings-file-in-azure-app-service"&gt;Download the publish profile&lt;/a&gt; for each of your production and staging instances individually and save those somewhere for now as we’ll refer back to them in the next step.&lt;/li&gt;    &lt;li&gt;I created an ASP.NET 5 Web App using the default template from Visual Studio 2019.&amp;#160; I made some code changes in the Index.cshtml to pull from app settings, but otherwise it is unchanged.&lt;/li&gt;    &lt;li&gt;I used the new Git features in Visual Studio to quickly get my app to a repository in my GitHub account and enabled Actions on that repo.&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;That’s it!&amp;#160; With those basics set up I can get started with the next steps of building out the workflow.&amp;#160; I should note that the steps I’m outlining here are free for GitHub public repositories.&amp;#160; For private repositories you need to be a GitHub Enterprise Server customer.&amp;#160; 
Since my sample is public I’m ready to go!&lt;/p&gt;  &lt;h2&gt;Environments&lt;/h2&gt;  &lt;p&gt;The first concept is &lt;a href="https://docs.github.com/en/free-pro-team@latest/actions/reference/environments"&gt;Environments&lt;/a&gt;.&amp;#160; These are basically a separate segmented definition of your repo that you can associate secrets and protection rules with.&amp;#160; This is the key to the approval workflow as one of the protection rules is reviewers required (aka approvers).&amp;#160; The first thing we’ll do is set up two environments: staging and production.&amp;#160; Go to your repository settings and you’ll see a new section called Environments in the navigation.&amp;#160; &lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of environment config" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of environment config" src="https://storage2.timheuer.com/approvalpost1.png" width="1913" height="964" /&gt;&lt;/p&gt;  &lt;p&gt;To create an environment, click the New Environment button and give it a name.&amp;#160; I created one called &lt;strong&gt;production&lt;/strong&gt; and one called &lt;strong&gt;staging&lt;/strong&gt;.&amp;#160; In each of these you can do things independently like secrets and reviewers.&amp;#160; Because I’m a team of one person my reviewer will be me, but you could set up others like maybe a build engineer for staging approval deployment and a QA team for production deployment.&amp;#160; Either way&amp;#160; click the Required reviewers checkbox and add yourself at least and save protection rule.&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This area may expand more to further protection rules but for now it is reviewers or a wait delay.&amp;#160; GitHub indicates others may be in the future.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;Now we’ll add some secrets.&amp;#160; With 
Environments, you can have independent secrets for each environment.&amp;#160; Maybe you want to have different deployment variables, etc. for each environment, this is where you could do it.&amp;#160; For us, this is specifically what we’ll use the different publish profiles for.&amp;#160; Remember those profiles you downloaded earlier, now you’ll need them.&amp;#160; In the staging environment create a new secret named AZURE_PUBLISH_PROFILE and paste in the contents of your staging publish profile.&amp;#160; Then go to your production environment settings and do the same &lt;strong&gt;using the same secret name&lt;/strong&gt; and use the production publish profile you downloaded earlier.&amp;#160; This allows our workflow to use environment-specific secret settings when they are called, but still use the same secret name…meaning we don’t need AZURE_PUBLISH_PROFILE_STAGING naming as we’ll be marking the environment in the workflow and it will pick up secrets from that environment only (or the repo if not found there – you can have a hierarchy of secrets effectively).&lt;/p&gt;  &lt;p&gt;Okay we’re done setting up the Environment in the repo…off to set up the workflow!&lt;/p&gt;  &lt;h2&gt;Setting up the workflow&lt;/h2&gt;  &lt;p&gt;To get me quickly started I used my own template so I could `&lt;a href="https://timheuer.com/blog/generate-github-actions-workflow-from-cli"&gt;dotnet new workflow&lt;/a&gt;` in my repo root using the CLI.&amp;#160; This gives me a strawman to work with.&amp;#160; Let’s build out the basics, we’re going to have 3 jobs: build, deploy to staging, deploy to prod.&amp;#160; Let’s get started.&amp;#160; The full workflow is in my repo for this post, but I’ll be extracting snippets to focus on and show relevant pieces here.&lt;/p&gt;  &lt;h3&gt;Build&lt;/h3&gt;  &lt;p&gt;For build I’m using my standard implementation of restore/build/publish/upload artifacts which looks like this (with some environment-specific keys):&lt;/p&gt;  &lt;pre 
class="brush: yaml;"&gt;jobs:
  build:
    name: Build
    if: github.event_name == 'push' &amp;amp;&amp;amp; contains(toJson(github.event.commits), '***NO_CI***') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[ci skip]') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[skip ci]') == false
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Setup .NET Core SDK ${{ env.DOTNET_CORE_VERSION }}
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: ${{ env.DOTNET_CORE_VERSION }}
    - name: Restore packages
      run: dotnet restore &amp;quot;${{ env.PROJECT_PATH }}&amp;quot;
    - name: Build app
      run: dotnet build &amp;quot;${{ env.PROJECT_PATH }}&amp;quot; --configuration ${{ env.CONFIGURATION }} --no-restore
    - name: Test app
      run: dotnet test &amp;quot;${{ env.PROJECT_PATH }}&amp;quot; --no-build
    - name: Publish app for deploy
      run: dotnet publish &amp;quot;${{ env.PROJECT_PATH }}&amp;quot; --configuration ${{ env.CONFIGURATION }} --no-build --output &amp;quot;${{ env.AZURE_WEBAPP_PACKAGE_PATH }}&amp;quot;
    - name: Publish Artifacts
      uses: actions/upload-artifact@v1.0.0
      with:
        name: webapp
        path: ${{ env.AZURE_WEBAPP_PACKAGE_PATH }}
&lt;/pre&gt;

&lt;p&gt;Notice this job is ‘build’ and ends with uploading some artifacts to the job.&amp;#160; That’s it, the core functionality is to build/test this and store the final artifacts.&lt;/p&gt;
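&lt;p&gt;The snippet above references a few workflow-level env values (DOTNET_CORE_VERSION, PROJECT_PATH, etc.) that aren’t shown; they are defined once at the top of the workflow file.&amp;#160; The values below are placeholders for illustration only:&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;env:
  AZURE_WEBAPP_NAME: my-sample-app          # your App Service name
  AZURE_WEBAPP_PACKAGE_PATH: ./published    # where publish output lands
  DOTNET_CORE_VERSION: 5.0.x
  PROJECT_PATH: MyWebApp/MyWebApp.csproj
  CONFIGURATION: Release
&lt;/pre&gt;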

&lt;h3&gt;Deploy to staging&lt;/h3&gt;

&lt;p&gt;Next job we want is to deploy those bits to staging environment, which will be our staging slot in our Azure App Service we set up before.&amp;#160; Here’s the workflow job definition snippet:&lt;/p&gt;

&lt;pre class="brush: yaml; highlight: [2,4,5,6,21,22];"&gt;  staging:
    needs: build
    name: Deploy to staging
    environment:
        name: staging
        url: ${{ steps.deploy_staging.outputs.webapp-url }}
    runs-on: ubuntu-latest
    steps:
    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v2
      with:
        name: webapp

    # Deploy to App Service Linux
    - name: Deploy to Azure WebApp
      uses: azure/webapps-deploy@v2
      id: deploy_staging
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
        slot-name: staging
&lt;/pre&gt;

&lt;p&gt;In this job we download the previously published artifacts to be used as our app to deploy.&amp;#160; Observe a few other things here:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve declared that this job ‘needs’ the ‘build’ job to start.&amp;#160; This ensures a sequence workflow.&amp;#160; If build job fails, this doesn’t start.&lt;/li&gt;

  &lt;li&gt;I’ve declared this job an ‘environment’ and marked it using staging which maps to the Environment name we set up on the repo settings.&lt;/li&gt;

  &lt;li&gt;In the publish phase I specified the slot-name value mapping to the Azure App Service slot name we created on our resource in the portal.&lt;/li&gt;

  &lt;li&gt;Specify getting the AZURE_PUBLISH_PROFILE secret from the repo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll also notice the ‘url’ setting on the environment.&amp;#160; This is a cool little delighter that you should use.&amp;#160; One of the outputs of the Azure web app deploy action is the URL to where it was deployed.&amp;#160; I can extract that from the step and put it in this variable.&amp;#160; GitHub Actions summary will now show this final URL in the visual map of the workflow.&amp;#160; It is a small delighter, but you’ll see why it’s useful a bit later.&amp;#160; Notice I don’t put any approver information in here.&amp;#160; By declaring this in the ‘staging’ environment it will follow the protection rules we previously set up.&amp;#160; So in fact, this job won’t run unless (1) build completes successfully and (2) the protection rules for the environment are satisfied.&amp;#160; &lt;/p&gt;

&lt;h3&gt;Deploy to production&lt;/h3&gt;

&lt;p&gt;Similarly to staging we have a final step to deploy to production.&amp;#160; Here’s the definition snippet:&lt;/p&gt;



&lt;pre class="brush: yaml; highlight: [3,4,5,21];"&gt;  deploy:
    needs: staging
    environment:
      name: production
      url: ${{ steps.deploy_production.outputs.webapp-url }}
    name: Deploy to production
    runs-on: ubuntu-latest
    steps:
    # Download artifacts
    - name: Download artifacts
      uses: actions/download-artifact@v2
      with:
        name: webapp

    # Deploy to App Service Linux
    - name: Deploy to Azure WebApp
      id: deploy_production
      uses: azure/webapps-deploy@v2
      with:
        app-name: ${{ env.AZURE_WEBAPP_NAME }}
        publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
&lt;/pre&gt;



&lt;p&gt;This is almost identical to staging except we changed:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Needs ‘staging’ to complete before this runs&lt;/li&gt;

  &lt;li&gt;Changed the environment to production to follow those protection rules&lt;/li&gt;

  &lt;li&gt;Removed the slot-name for deployment (default is production)&lt;/li&gt;

  &lt;li&gt;Changed the URL output value to the value from this job&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice that we have the same AZURE_PUBLISH_PROFILE secret used here.&amp;#160; Because we are declaring environments we will get the environment-specific secret in these job scopes.&amp;#160; It is helpful to have a common name and just map to different environments rather than many little ones – at least in my opinion.&lt;/p&gt;

&lt;p&gt;That’s it, we now have our full workflow to build –&amp;gt; deploy to staging with approval –&amp;gt; deploy to production with approval.&amp;#160; Let’s see it in action!&lt;/p&gt;

&lt;h2&gt;Trigger the workflow&lt;/h2&gt;

&lt;p&gt;Once we have this workflow, committing/pushing the workflow file itself should trigger a run.&amp;#160; Otherwise you can do a different code change/commit/push to trigger as well.&amp;#160; We get a few things here when the run happens.&lt;/p&gt;

&lt;p&gt;First we get a nicer visualization of the summary of the job:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of summary view" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Screenshot of summary view" src="https://storage2.timheuer.com/approvalpost4.png" width="1439" height="578" /&gt;&lt;/p&gt;

&lt;p&gt;When the protection rules are hit, a few things happen.&amp;#160; Namely, the run stops and waits, and the reviewers are notified.&amp;#160; The notification happens through the standard GitHub notification means.&amp;#160; I have email notifications enabled and so I got an email like this:&lt;/p&gt;

&lt;p&gt;&lt;img title="Picture of email notification" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of email notification" src="https://storage2.timheuer.com/approvalpost2.png" width="1098" height="695" /&gt;&lt;/p&gt;

&lt;p&gt;I can then click through and approve the workflow step and add comments:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of approval step" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Screenshot of approval step" src="https://storage2.timheuer.com/approvalpost5.png" width="1440" height="644" /&gt;&lt;/p&gt;

&lt;p&gt;Once that step is approved, the job runs.&amp;#160; On the environment job it provides a nice little progress indicator of the steps:&lt;/p&gt;

&lt;p&gt;&lt;img title="Picture of progress indicator" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Picture of progress indicator" src="https://storage2.timheuer.com/approvalpost6.png" width="519" height="247" /&gt;&lt;/p&gt;

&lt;p&gt;Remember that URL setting we had?&amp;#160; Once that job finished, you’ll see it surface in that nice summary view to quickly click through and test your staging environment:&lt;/p&gt;

&lt;p&gt;&lt;img title="Picture of the URL shown in summary view in step" style="margin: 0px auto; border: 0px currentcolor; border-image: none; float: none; display: block; background-image: none;" border="0" alt="Picture of the URL shown in summary view in step" src="https://storage2.timheuer.com/approvalpost7.png" width="1416" height="262" /&gt;&lt;/p&gt;

&lt;p&gt;Once we are satisfied with the staging environment we can then approve the next workflow and the same steps happen and we are deployed to production!&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of final approval flow" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of final approval flow" src="https://storage2.timheuer.com/approvalpost3.png" width="1440" height="737" /&gt;&lt;/p&gt;

&lt;p&gt;And we’re done!&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;The concept of approvals in Actions workflows has been a top request I’ve heard and I’m glad it is finally there!&amp;#160; I’m in the process of adding it as an extra protection to all my public repo projects; whether it be for a web app deployment or a NuGet package publish, it is a helpful protection to put in place in your Actions.&amp;#160; It’s rather simple to set up, and if you have a relatively simple workflow it is equally simple to configure and modify to incorporate it.&amp;#160; More complex workflows might require a bit more thought but are still simple to augment.&amp;#160; I’ve posted my full sample in the repo &lt;strong&gt;&lt;a href="https://github.com/timheuer/actions-approval-sample"&gt;timheuer/actions-approval-sample&lt;/a&gt;&lt;/strong&gt; where you can see the &lt;a href="https://github.com/timheuer/actions-approval-sample/blob/main/.github/workflows/build-deploy.yaml"&gt;full workflow file&lt;/a&gt;.&amp;#160; This was fun to walk through and I hope this write-up helps you get started as well!&lt;/p&gt;
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/use-github-actions-for-bulk-resolve-issues/</id>
    <title>Using GitHub Actions for Bulk Resolving</title>
    <updated>2020-12-16T06:27:38Z</updated>
    <published>2020-12-16T06:27:06Z</published>
    <link href="https://www.timheuer.com/blog/use-github-actions-for-bulk-resolve-issues/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="github" />
    <category term="devops" />
    <content type="html">&lt;p&gt;Today I was working on one of our internal GitHub repositories that apparently used to be used for our tooling issue tracking.&amp;#160; I have no idea the history but a quick look at the 68 issues with the latest dating back to 2017 told me that yeah, nobody is looking at these anymore.&amp;#160; After a quick email ack from my dev lead that I could bulk clear these out I immediately went to the repo issues list, and was about to do this:&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of mark all as closed" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of mark all as closed" src="https://storage2.timheuer.com/bulkpost1.png" width="928" height="334" /&gt;&lt;/p&gt;  &lt;p&gt;Then I realized that all that was going to do was close them without any reasoning at all.&amp;#160; I know that closing sends a notification to people on the issue and that wasn’t the right thing to do.&amp;#160; I quickly looked around, did some googling and didn’t find anything in the GitHub docs that would allow me to “bulk resolve and add a message” outside of adding a commit and a bunch of “close #XXX” statements.&amp;#160; That was unrealistic.&amp;#160; I threw it out on Twitter in hopes maybe someone had a tool already.&amp;#160; The other debate in my head was writing some code to iterate through them and close with a message.&amp;#160; This felt heavy for my needs, I’d need to get tokens, blah blah.&amp;#160; I’m lazy.&lt;/p&gt;  &lt;p&gt;Then I thought to myself, &lt;em&gt;Self, I’m pretty sure you should be able to use the ‘labeled’ trigger in GitHub Actions to automate this!&lt;/em&gt; Thinking this way made me think that I could use a trigger to still bulk close them but the action would be able to add a message to each one.&amp;#160; Again, a quick thinking here led me to be writing more code than I thought…but I was on 
the right track.&amp;#160; Some more searching for different terms (adding actions) and I discovered the action &lt;a href="https://github.com/actions/stale"&gt;actions/stale&lt;/a&gt; to the rescue!&amp;#160; This is a workflow designed to run on a schedule, look at ‘stale’ issues (staleness to be defined by you), and label them and/or close them after certain intervals.&amp;#160; The design looks to be something like “run every day and look for things that are X days old, label them stale, then warn that if action isn’t taken in Y days that they would be closed” – perfect for my need except I wanted to close NOW!&amp;#160; No problem.&amp;#160; Looking at the sample it used a schedule trigger and a CRON format for the schedule.&amp;#160; Off to crontab.guru to help me figure out the thing I can never remember.&amp;#160; What’s worse, regex or cron?&amp;#160; Who knows?&lt;/p&gt;  &lt;p&gt;And then it dawned on me!&amp;#160; My favorite GitHub Actions tip is to add &lt;strong&gt;workflow_dispatch&lt;/strong&gt; as one of the triggers to workflows.&amp;#160; This allows you to manually trigger a workflow from your repo:&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of manual workflow trigger" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of manual workflow trigger" src="https://storage2.timheuer.com/bullkpost2.png" width="884" height="406" /&gt;&lt;/p&gt;  &lt;p&gt;I use this ALL the time so that I don’t have to fake a commit or something on certain projects.&amp;#160; This was the perfect thing I needed.&amp;#160; The combination of workflow_dispatch and this stale action would enable me to complete this quickly.&amp;#160; I added the following workflow to our repo:&lt;/p&gt;    &lt;pre class="brush: yaml; highlight: [3,15,16,17];"&gt;name: &amp;quot;Close stale issues&amp;quot;
on:
  workflow_dispatch:
    branches:
    - master
    
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/stale@v3
      with:
        repo-token: ${{ secrets.GITHUB_TOKEN }}
        days-before-stale: 30
        days-before-close: 0
        stale-issue-message: 'This issue is being closed as stale'
        close-issue-message: 'This repo has been made internal and no longer tracking product issues. Closing all open stale issues.'
&lt;/pre&gt;



&lt;p&gt;I just had to set a few parameters for a stale message (required) and I set the close delay (days-before-close) to 0 so it would happen NOW.&amp;#160; Then I triggered the workflow manually.&amp;#160; Boom!&amp;#160; The workflow ran and 2 minutes later all 68 issues were marked closed with a message that serves as the reason, so users won’t be too alarmed by some random bulk closure.&lt;/p&gt;
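&lt;p&gt;As an aside, if you have the GitHub CLI installed, the same manual trigger can be fired from a terminal too (assuming the workflow name above):&lt;/p&gt;

&lt;pre class="brush: bash;"&gt;gh workflow run &amp;quot;Close stale issues&amp;quot;
&lt;/pre&gt;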

&lt;p&gt;&lt;img title="Screenshot of GitHub message" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of GitHub message" src="https://storage2.timheuer.com/bulkpost4.png" width="1540" height="290" /&gt;&lt;/p&gt;

&lt;p&gt;I’m glad I remembered that GitHub Actions aren’t just for CI/CD uses and can be used to quickly automate much more.&amp;#160; In fact I’m writing this blog post maybe to help others, but certainly to serve as a bookmark to myself when I forget about this again.&lt;/p&gt;

&lt;p&gt;Hope this helps!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/building-a-code-analyzer-for-net/</id>
    <title>Building a Code Analyzer for .NET</title>
    <updated>2020-12-12T06:33:15Z</updated>
    <published>2020-12-12T06:31:38Z</published>
    <link href="https://www.timheuer.com/blog/building-a-code-analyzer-for-net/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="visual studio" />
    <category term="code analysis" />
    <category term="roslyn" />
    <content type="html">&lt;p&gt;What the heck is a code analyzer?&amp;nbsp; Well if you are a Visual Studio user you probably have seen the lightbulbs and wrenches from time to time.&amp;nbsp; Put it simply in my own terms, code analyzers keep an eye on your code and find errors, suggest different ways of doing things, help you know what you aren’t using, etc.&amp;nbsp; Usually coupled with a code fix, an analyzer alerts you to an opportunity and a code fix can be applied to remedy that opportunity.&amp;nbsp; Here’s an example:&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of a code file with a code fix" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of a code file with a code fix" src="https://storage2.timheuer.com/capost1.png" width="1142" height="623"&gt;&lt;/p&gt;  &lt;p&gt;These are helpful in your coding workflow (or intended to be!) to be more productive, learn some things along the way, or enforce certain development approaches.&amp;nbsp; This is an area of Visual Studio and .NET that part of my team works on and I wanted to learn more than beyond what they are generally.&amp;nbsp; Admittedly despite being on the team I haven’t had the first-hand experience of creating a code analyzer before so I thought, why not give it a try.&amp;nbsp; I fired up Visual Studio and got started without any help from the team (I’ll note in one step where I was totally stumped and needed a teammates help later).&amp;nbsp; I figured I’d write my experience in that it helps anyone or just serves as a bookmark for me for later when I totally forget all this stuff and have to do it again.&amp;nbsp; I know I’ve made some mistakes and I don’t think it’s fully complete, but it is ‘good enough’ so I’ll carry you on the journey here with me!&lt;/p&gt;  &lt;h2&gt;Defining the analyzer&lt;/h2&gt;  &lt;p&gt;First off we have to decide what we want to 
do.&amp;nbsp; There are a lot of analyzers already in the platform and other places, so you may not need a custom one.&amp;nbsp; For me I had a specific scenario come up at work where I thought &lt;em&gt;hmm, I wonder if I could build this into the dev workflow&lt;/em&gt; and that’s what I’m going to do.&amp;nbsp; The scenario is that we want to make sure that our products don’t contain terms that have been deemed inappropriate for various reasons.&amp;nbsp; These could be overtly recognized profanity, accessibility-related terms, diversity-related terms, cultural considerations, etc. Either way we are starting from a place where someone has defined a list of these per policy.&amp;nbsp; So here are my requirements:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;Provide an analyzer that starts from a specific known ‘database’ of terms in a structured format&lt;/li&gt;    &lt;li&gt;Warn/error on code symbols and comments in the code&lt;/li&gt;    &lt;li&gt;One analyzer code base that can provide different results for different severities&lt;/li&gt;    &lt;li&gt;Provide a code fix that removes the word and fits within the other VS refactor/renaming mechanisms&lt;/li&gt;    &lt;li&gt;Have a predictable build that produces the bits for anyone to easily consume&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;Pretty simple I thought, so let’s get started!&lt;/p&gt;  &lt;h2&gt;Getting started with Code Analyzer development&lt;/h2&gt;  &lt;p&gt;I did what any person would do and searched.&amp;nbsp; I ended up on a blog post from a teammate of mine who is the PM in this area, Mika, titled “&lt;a href="https://devblogs.microsoft.com/dotnet/how-to-write-a-roslyn-analyzer/"&gt;How to write a Roslyn Analyzer&lt;/a&gt;” – sounds like exactly what I was looking for! Yay team! 
Mika’s post helped me understand the basics and get the tools squared away.&amp;nbsp; I knew that I had the VS Extensibility SDK workload installed already but wasn’t seeing the templates, so the post helped me realize that the Roslyn SDK is optional and I needed to go back and install that.&amp;nbsp; Once I did, I was able to start with File…New Project and search for analyzer and choose the C# template:&lt;/p&gt;  &lt;p&gt;&lt;img title="New project dialog from Visual Studio" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="New project dialog from Visual Studio" src="https://storage2.timheuer.com/capost2.png" width="1024" height="679"&gt;&lt;/p&gt;  &lt;p&gt;This gave me a great starting point with 5 projects:&lt;/p&gt;  &lt;ul&gt;   &lt;li&gt;The analyzer library project&lt;/li&gt;    &lt;li&gt;The code fix library project&lt;/li&gt;    &lt;li&gt;A NuGet package project&lt;/li&gt;    &lt;li&gt;A unit test project&lt;/li&gt;    &lt;li&gt;A Visual Studio extension project (VSIX)&lt;/li&gt; &lt;/ul&gt;  &lt;p&gt;Visual Studio opened up the two key code files I’d be working with: the analyzer and code fix provider.&amp;nbsp; These will be the two things I focus on in this post.&amp;nbsp; First I recommend going to each of the projects and updating any/all NuGet packages that have updates available.&lt;/p&gt;  &lt;h2&gt;Analyzer library&lt;/h2&gt;  &lt;p&gt;Let’s look at the key aspects of the analyzer class we want to implement.&amp;nbsp; Here is the full template initially provided:&lt;/p&gt;  &lt;pre class="brush: csharp; highlight: [3,9,13,18];"&gt;public class SimpleAnalyzerAnalyzer : DiagnosticAnalyzer
{
    public const string DiagnosticId = "SimpleAnalyzer";
    private static readonly LocalizableString Title = new LocalizableResourceString(nameof(Resources.AnalyzerTitle), Resources.ResourceManager, typeof(Resources));
    private static readonly LocalizableString MessageFormat = new LocalizableResourceString(nameof(Resources.AnalyzerMessageFormat), Resources.ResourceManager, typeof(Resources));
    private static readonly LocalizableString Description = new LocalizableResourceString(nameof(Resources.AnalyzerDescription), Resources.ResourceManager, typeof(Resources));
    private const string Category = "Naming";

    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(DiagnosticId, Title, MessageFormat, Category, DiagnosticSeverity.Warning, isEnabledByDefault: true, description: Description);

    public override ImmutableArray&amp;lt;DiagnosticDescriptor&amp;gt; SupportedDiagnostics { get { return ImmutableArray.Create(Rule); } }

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        context.RegisterSymbolAction(AnalyzeSymbol, SymbolKind.NamedType);
    }

    private static void AnalyzeSymbol(SymbolAnalysisContext context)
    {
        var namedTypeSymbol = (INamedTypeSymbol)context.Symbol;

        // Find just those named type symbols with names containing lowercase letters.
        if (namedTypeSymbol.Name.ToCharArray().Any(char.IsLower))
        {
            // For all such symbols, produce a diagnostic.
            var diagnostic = Diagnostic.Create(Rule, namedTypeSymbol.Locations[0], namedTypeSymbol.Name);

            context.ReportDiagnostic(diagnostic);
        }
    }
}
&lt;/pre&gt;

&lt;p&gt;A few key things to note here.&amp;nbsp; The DiagnosticId is what is reported in the errors and output.&amp;nbsp; You’ve probably seen a few of these, like “CSC001” or similar.&amp;nbsp; This is basically your identifier.&amp;nbsp; The other key area is the Rule here.&amp;nbsp; Each analyzer basically creates a DiagnosticDescriptor that it will produce and report to the diagnostic engine.&amp;nbsp; As you can see here and in the lines below it, you define it with a certain set of values and then indicate what SupportedDiagnostics this analyzer supports.&amp;nbsp; From this combination you can see that you can have multiple rules, each with some unique characteristics.&amp;nbsp; &lt;/p&gt;

&lt;h3&gt;Custom rules&lt;/h3&gt;

&lt;p&gt;Remember we said we wanted different severities, and that is one of the properties of the descriptor, so we’ll need to change that.&amp;nbsp; I basically wanted 3 types that would have different diagnostic IDs and severities.&amp;nbsp; I’ve modified my code as follows (removing the static DiagnosticId):&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;private const string HtmlHelpUri = "https://github.com/timheuer/SimpleAnalyzer";

private static readonly DiagnosticDescriptor WarningRule = new DiagnosticDescriptor("TERM001", Title, MessageFormat, Category, DiagnosticSeverity.Warning, isEnabledByDefault: true, description: Description, helpLinkUri: HtmlHelpUri);
private static readonly DiagnosticDescriptor ErrorRule = new DiagnosticDescriptor("TERM002", Title, MessageFormat, Category, DiagnosticSeverity.Error, isEnabledByDefault: true, description: Description, helpLinkUri: HtmlHelpUri);
private static readonly DiagnosticDescriptor InfoRule = new DiagnosticDescriptor("TERM003", Title, MessageFormat, Category, DiagnosticSeverity.Info, isEnabledByDefault: true, description: Description, helpLinkUri: HtmlHelpUri);

public override ImmutableArray&amp;lt;DiagnosticDescriptor&amp;gt; SupportedDiagnostics { get { return ImmutableArray.Create(WarningRule, ErrorRule, InfoRule); } }
&lt;/pre&gt;

&lt;p&gt;I’ve created 3 specific DiagnosticDescriptors and ‘registered’ them as supported for my analyzer.&amp;nbsp; I also added the help link, which will show up in the Visual Studio UI; if you don’t supply one you’ll get a default URL that won’t be terribly helpful to your consumers.&amp;nbsp; Notice each rule has a unique diagnostic ID and severity.&amp;nbsp; Now that we’ve got these sorted, it’s time to move on to some of our logic.&lt;/p&gt;

&lt;h3&gt;Initialize and register&lt;/h3&gt;

&lt;p&gt;We have to decide when we want the analyzer to run and what it is analyzing.&amp;nbsp; I learned that once you have the Roslyn SDK installed you have available to you this awesome new tool called the &lt;a href="https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/syntax-visualizer?tabs=csharp#syntax-visualizer"&gt;Syntax Visualizer&lt;/a&gt; (View…Other Windows…Syntax Visualizer).&amp;nbsp; It lets you see the view that Roslyn sees, or what some of the old schoolers might consider the CodeDOM.&amp;nbsp; &lt;/p&gt;

&lt;p&gt;&lt;img title="Syntax Visualizer screenshot" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Syntax Visualizer screenshot" src="https://storage2.timheuer.com/capost3.png" width="1024" height="507"&gt;&lt;/p&gt;

&lt;p&gt;You can see here that with it open, clicking anywhere in your code updates the tree and tells you what you are looking at.&amp;nbsp; In this case my cursor was on Initialize() and I can see this is considered a MethodDeclarationSyntax type and kind.&amp;nbsp; I can navigate the tree on the left and it helps me discover what other code symbols I may be looking for and consider what I need my analyzer to care about.&amp;nbsp; This was very helpful for understanding the code tree that Roslyn sees.&amp;nbsp; From this I was able to determine what I needed to care about.&amp;nbsp; Now I needed to start putting things together.&lt;/p&gt;

&lt;p&gt;The first thing I wanted to do was register the compilation start action (remember I intend to load the data from somewhere, so I want this available sooner).&amp;nbsp; Within that I then have the analyzer context and can ‘register’ actions that I want to participate in.&amp;nbsp; For this sample’s purposes I’m going to use RegisterSymbolAction because I just want specific symbols (as opposed to comments or full method body declarations).&amp;nbsp; I have to specify a callback to use when the symbol is analyzed and what symbols I care about.&amp;nbsp; In its simplest form, here is what the contents of my Initialize() method now look like:&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [6,10,13,14];"&gt;public override void Initialize(AnalysisContext context)
{
    context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
    context.EnableConcurrentExecution();

    context.RegisterCompilationStartAction((ctx) =&amp;gt;
    {
        // TODO: load the terms dictionary

        ctx.RegisterSymbolAction((symbolContext) =&amp;gt;
        {
            // do the work
        }, SymbolKind.NamedType, SymbolKind.Method, SymbolKind.Property, SymbolKind.Field,
                SymbolKind.Event, SymbolKind.Namespace, SymbolKind.Parameter);
    });
}
&lt;/pre&gt;

&lt;p&gt;You can see that I’ve called RegisterCompilationStartAction from the full AnalysisContext and then called RegisterSymbolAction from within that, providing a set of specific symbols I care about.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;NOTE: Not all symbols are available to analyzers.&amp;nbsp; I found that SymbolKind.Local is one that is not…and there was an analyzer warning that told me so!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note that since I’m using a lambda approach here I removed the template code AnalyzeSymbol function from the class.&amp;nbsp; Okay, let’s move on to the next step and load our dictionary.&lt;/p&gt;

&lt;h3&gt;Seeding the analyzer with data&lt;/h3&gt;

&lt;p&gt;I mentioned that we’ll have a dictionary of terms already.&amp;nbsp; This is a JSON file with a specific format that looks like this:&lt;/p&gt;

&lt;pre class="brush: json;"&gt;[
  {
    "TermID": "1",
    "Severity": "1",
    "Term": "verybad",
    "TermClass": "Profanity",
    "Context": "When used pejoratively",
    "ActionRecommendation": "Remove",
    "Why": "No profanity is tolerated in code"
  }
]
&lt;/pre&gt;

&lt;p&gt;So the first thing I want to do is create a class that makes this easier to work with, so I created Term.cs in my analyzer project.&amp;nbsp; The class is basically there to deserialize the file into strong types and looks like this:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;using System.Text.Json.Serialization;

namespace SimpleAnalyzer
{
    class Term
    {
        [JsonPropertyName("TermID")]
        public string Id { get; set; }

        [JsonPropertyName("Term")]
        public string Name { get; set; }

        [JsonPropertyName("Severity")]
        public string Severity { get; set; }

        [JsonPropertyName("TermClass")]
        public string Class { get; set; }

        [JsonPropertyName("Context")]
        public string Context { get; set; }

        [JsonPropertyName("ActionRecommendation")]
        public string Recommendation { get; set; }

        [JsonPropertyName("Why")]
        public string Why { get; set; }
    }
}

&lt;/pre&gt;

&lt;p&gt;So you’ll notice that I’m using JSON and System.Text.Json so I’ve had to add that to my analyzer project.&amp;nbsp; More on the mechanics of that much later.&amp;nbsp; I wanted to use this to make my life easier when working with the terms database.&lt;/p&gt;
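&lt;p&gt;As a quick standalone sanity check of that mapping (outside the analyzer), deserializing a fragment of the terms format with this class would look something like the following sketch – the variable names here are just for illustration:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;using System.Collections.Generic;
using System.Text.Json;

// Deserialize a fragment of the terms format into the strongly typed Term class
var json = @"[{""TermID"":""1"",""Severity"":""1"",""Term"":""verybad"",""TermClass"":""Profanity""}]";
var terms = JsonSerializer.Deserialize&amp;lt;List&amp;lt;Term&amp;gt;&amp;gt;(json);
// terms[0].Name is "verybad" and terms[0].Class is "Profanity" via the JsonPropertyName mappings
&lt;/pre&gt;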

&lt;blockquote&gt;
  &lt;p&gt;NOTE: Using 3rd party libraries (in this case System.Text.Json is considered one by the analyzer) requires more work and there could be issues depending on what you are doing.&amp;nbsp; Remember that analyzers run in the context of Visual Studio (or other tools) and there may be conflicts with other libraries.&amp;nbsp; It’s nuanced, so tread lightly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we have our class, let’s go back and load the dictionary file into our analyzer.&amp;nbsp;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;NOTE:&amp;nbsp; Typically Analyzers and Source Generators use the concept called AdditionalFiles to load information.&amp;nbsp; This relies on the &lt;strong&gt;consuming&lt;/strong&gt; project to have the file though, which is different from my scenario.&amp;nbsp; Working at the lower level in the stack with Roslyn, the compilers need to manage a bit more of the lifetime of files and such and so there is this different method of working with them.&amp;nbsp; You can read more about AdditionalFiles on the Roslyn repo: &lt;a href="https://github.com/dotnet/roslyn/blob/master/docs/analyzers/Using%20Additional%20Files.md"&gt;Using Additional Files (dotnet/roslyn)&lt;/a&gt;.&amp;nbsp; This is generally the recommended way to work with files.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For us we are going to add the dictionary of terms *with* our analyzer so we need to do a few things.&amp;nbsp; First we need to make sure the JSON file is in the analyzer and also in the package output.&amp;nbsp; This requires us to mark the terms.json file in the Analyzer project as Content and to copy to output.&amp;nbsp; Second in the Package project we need to add the following to the csproj file in the _AddAnalyzersToOutput target:&lt;/p&gt;

&lt;pre class="brush: xml;"&gt;&amp;lt;TfmSpecificPackageFile Include="$(OutputPath)\terms.json" PackagePath="analyzers/dotnet/cs" /&amp;gt;
&lt;/pre&gt;

&lt;p&gt;And then in the VSIX project we need to do something similar, specifying the content to include in the VSIX:&lt;/p&gt;

&lt;pre class="brush: xml;"&gt;&amp;lt;Content Include="$(OutputPath)\terms.json"&amp;gt;
    &amp;lt;IncludeInVSIX&amp;gt;true&amp;lt;/IncludeInVSIX&amp;gt;
&amp;lt;/Content&amp;gt;
&lt;/pre&gt;

&lt;p&gt;With both of these in place now we can get access to our term file in our Initialize method and we’ll add a helper function to ensure we get the right location.&amp;nbsp; The resulting modified Initialize portion looks like this:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;context.RegisterCompilationStartAction((ctx) =&amp;gt;
{
    if (terms is null)
    {
        var currentDirectory = GetFolderTypeWasLoadedFrom&amp;lt;SimpleAnalyzerAnalyzer&amp;gt;();
        terms = JsonSerializer.Deserialize&amp;lt;List&amp;lt;Term&amp;gt;&amp;gt;(File.ReadAllBytes(Path.Combine(currentDirectory, "terms.json")));
    }
// other code removed for brevity in blog post
});
&lt;/pre&gt;

&lt;p&gt;The helper function here is a simple one-liner:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;private static string GetFolderTypeWasLoadedFrom&amp;lt;T&amp;gt;() =&amp;gt; new FileInfo(new Uri(typeof(T).Assembly.CodeBase).LocalPath).Directory.FullName;
&lt;/pre&gt;
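
&lt;p&gt;One caveat: Assembly.CodeBase is marked obsolete starting in .NET 5, so if that ever becomes an issue for you, an equivalent sketch using Assembly.Location would be:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;using System.IO;

// Same idea, using Assembly.Location to avoid the obsolete CodeBase property
private static string GetFolderTypeWasLoadedFrom&amp;lt;T&amp;gt;() =&amp;gt; Path.GetDirectoryName(typeof(T).Assembly.Location);
&lt;/pre&gt;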

&lt;p&gt;This now gives us a List&amp;lt;Term&amp;gt; to work with.&amp;nbsp; These lines of code required us to add some using statements to the class, which luckily an analyzer/code fix helped us do! You can see how helpful analyzers/code fixes are in everyday usage, and how much we take them for granted!&amp;nbsp; Now that we have our data and have registered our action, let’s do some analyzing.&amp;nbsp; &lt;/p&gt;

&lt;h3&gt;Analyze the code&lt;/h3&gt;

&lt;p&gt;We basically want to search each symbol to see if it contains a word from our dictionary and, if there is a match, report a diagnostic to the user.&amp;nbsp; Given that we are using RegisterSymbolAction, the context provides us with the Symbol and the name we can examine.&amp;nbsp; We will check that against our dictionary of terms, and if there is a match, create a DiagnosticDescriptor in line with the severity of that match.&amp;nbsp; Here’s how we start:&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [7,9];"&gt;ctx.RegisterSymbolAction((symbolContext) =&amp;gt;
{
    var symbol = symbolContext.Symbol;

    foreach (var term in terms)
    {
        if (ContainsUnsafeWords(symbol.Name, term.Name))
        {
            var diag = Diagnostic.Create(GetRule(term, symbol.Name), symbol.Locations[0], term.Name, symbol.Name, term.Severity, term.Class);
            symbolContext.ReportDiagnostic(diag);
            break;
        }
    }
}, SymbolKind.NamedType, SymbolKind.Method, SymbolKind.Property, SymbolKind.Field,
    SymbolKind.Event, SymbolKind.Namespace, SymbolKind.Parameter);
&lt;/pre&gt;

&lt;p&gt;In this we are looking in our terms dictionary and doing a comparison.&amp;nbsp; We created a simple function for the comparison that looks like this:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;private bool ContainsUnsafeWords(string symbol, string term)
{
    return term.Length &amp;lt; 4 ?
        symbol.Equals(term, StringComparison.InvariantCultureIgnoreCase) :
        symbol.IndexOf(term, StringComparison.InvariantCultureIgnoreCase) &amp;gt;= 0;
}
&lt;/pre&gt;
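
&lt;p&gt;To make the behavior concrete: terms shorter than 4 characters must match the symbol name exactly (to avoid false positives on short substrings), while longer terms match anywhere in the name, case-insensitively.&amp;nbsp; A few illustrative calls (the terms here are made up):&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;// Short term (under 4 characters): exact, case-insensitive match only
ContainsUnsafeWords("bad", "bad");                 // true  - exact match
ContainsUnsafeWords("BadName", "bad");             // false - short terms never match substrings

// Longer term: case-insensitive substring match anywhere in the symbol name
ContainsUnsafeWords("MyVeryBadMethod", "verybad"); // true
&lt;/pre&gt;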

&lt;p&gt;And then we have a function called GetRule that ensures we have the right DiagnosticDescriptor for this violation (based on severity).&amp;nbsp; That function looks like this:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;private DiagnosticDescriptor GetRule(Term term, string identifier)
{
    var warningLevel = DiagnosticSeverity.Info;
    var diagId = "TERM001";
    var description = $"Recommendation: {term.Recommendation}{System.Environment.NewLine}Context: {term.Context}{System.Environment.NewLine}Reason: {term.Why}{System.Environment.NewLine}Term ID: {term.Id}";
    switch (term.Severity)
    {
        case "1":
        case "2":
            warningLevel = DiagnosticSeverity.Error;
            diagId = "TERM002";
            break;
        case "3":
            warningLevel = DiagnosticSeverity.Warning;
            break;
        default:
            warningLevel = DiagnosticSeverity.Info;
            diagId = "TERM003";
            break;
    }

    return new DiagnosticDescriptor(diagId, Title, MessageFormat, term.Class, warningLevel, isEnabledByDefault: true, description: description, helpLinkUri: HtmlHelpUri, term.Name);
}
&lt;/pre&gt;

&lt;p&gt;In this GetRule function you’ll notice a few things.&amp;nbsp; First, we are doing this because we want to set the diagnostic ID and the severity differently based on the term dictionary data.&amp;nbsp; Remember earlier we created different rule definitions (DiagnosticDescriptors) and we need to ensure what we return here matches one of them.&amp;nbsp; This allows us to have one analyzer that tries to be a bit more dynamic.&amp;nbsp; We are also passing in a final parameter (term.Name) in the ctor for the DiagnosticDescriptor.&amp;nbsp; This is passed in the CustomTags parameter of the ctor.&amp;nbsp; We’ll be using this later in the code fix, so we used this as a means to pass some context from the analyzer to the code fix (the term to replace).&amp;nbsp; The other part you’ll notice is that in the Diagnostic.Create method earlier we’re passing in some additional optional parameters.&amp;nbsp; These get passed to the MessageFormat string that you’ve defined in your analyzer.&amp;nbsp; We didn’t mention it in detail earlier but it comes into play now.&amp;nbsp; The template gave us a Resources.resx file with three values:&lt;/p&gt;

&lt;p&gt;&lt;img title="Resource file contents" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Resource file contents" src="https://storage2.timheuer.com/CAPOST4.png" width="1024" height="286"&gt;&lt;/p&gt;

&lt;p&gt;These are resources that are displayed in the outputs and Visual Studio user interface.&amp;nbsp; MessageFormat is one that enables you to provide some content into the formatter and that’s what we are passing here.&amp;nbsp; The result will be a more user-friendly message with the right context.&amp;nbsp; Great I think we have all our analyzer stuff working, let’s move on to the code fix!&lt;/p&gt;

&lt;h2&gt;Code fix library&lt;/h2&gt;

&lt;p&gt;With just the analyzer – which is totally okay to have on its own – we have warnings/squiggles that will be presented to the user (or logged in output).&amp;nbsp; We can optionally provide a code fix to remedy the situation.&amp;nbsp; In our simple sample here we’re going to do that and simply suggest removing the word.&amp;nbsp; Code fixes also provide the user the means to suppress certain rules.&amp;nbsp; This is why we wanted different diagnostic IDs earlier, as you may want to suppress the SEV3 terms but not the others.&amp;nbsp; Without that distinction in DiagnosticDescriptors you cannot do that.&amp;nbsp; Moving over to the SimpleAnalyzer.CodeFixes project we’ll open the code fix provider and make some changes.&amp;nbsp; The default template provides a code fix to make the symbol all uppercase…we don’t want that but it provides a good framework for us to learn and make simple changes.&amp;nbsp; The first thing we need to do is tell the code fix provider what diagnostic IDs are fixable by this provider.&amp;nbsp; We make a change in the override provided by the template to provide our diagnostic IDs:&lt;/p&gt;

&lt;pre class="brush: csharp;"&gt;public sealed override ImmutableArray&amp;lt;string&amp;gt; FixableDiagnosticIds
{
    get { return ImmutableArray.Create("TERM001","TERM002","TERM003"); }
}
&lt;/pre&gt;

&lt;p&gt;Now look in the template for MakeUppercaseAsync and let’s make a few changes.&amp;nbsp; First rename it to RemoveTermAsync.&amp;nbsp; Then change its signature to include an IEnumerable&amp;lt;string&amp;gt; parameter so we can pass in those CustomTags we provided earlier from the analyzer.&amp;nbsp; You’ll also need to pass those custom tags in the call to RemoveTermAsync.&amp;nbsp; Combined, those changes look like this in the template:&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [16,21,25];"&gt;public sealed override async Task RegisterCodeFixesAsync(CodeFixContext context)
{
    var root = await context.Document.GetSyntaxRootAsync(context.CancellationToken).ConfigureAwait(false);

    // TODO: Replace the following code with your own analysis, generating a CodeAction for each fix to suggest
    var diagnostic = context.Diagnostics.First();
    var diagnosticSpan = diagnostic.Location.SourceSpan;

    // Find the type declaration identified by the diagnostic.
    var declaration = root.FindToken(diagnosticSpan.Start).Parent.AncestorsAndSelf().OfType&amp;lt;TypeDeclarationSyntax&amp;gt;().First();

    // Register a code action that will invoke the fix.
    context.RegisterCodeFix(
        CodeAction.Create(
            title: CodeFixResources.CodeFixTitle,
            createChangedSolution: c =&amp;gt; RemoveTermAsync(context.Document, declaration, diagnostic.Descriptor.CustomTags, c),
            equivalenceKey: nameof(CodeFixResources.CodeFixTitle)),
        diagnostic);
}

private async Task&amp;lt;Solution&amp;gt; RemoveTermAsync(Document document, TypeDeclarationSyntax typeDecl, IEnumerable&amp;lt;string&amp;gt; tags, CancellationToken cancellationToken)
{
    // Compute the new name by removing the flagged term.
    var identifierToken = typeDecl.Identifier;
    var newName = identifierToken.Text.Replace(tags.First(), string.Empty);

    // Get the symbol representing the type to be renamed.
    var semanticModel = await document.GetSemanticModelAsync(cancellationToken);
    var typeSymbol = semanticModel.GetDeclaredSymbol(typeDecl, cancellationToken);

    // Produce a new solution that has all references to that type renamed, including the declaration.
    var originalSolution = document.Project.Solution;
    var optionSet = originalSolution.Workspace.Options;
    var newSolution = await Renamer.RenameSymbolAsync(document.Project.Solution, typeSymbol, newName, optionSet, cancellationToken).ConfigureAwait(false);

    // Return the new solution with the now-renamed type name.
    return newSolution;
}
&lt;/pre&gt;

&lt;p&gt;With all these in place we now should be ready to try some things out.&amp;nbsp; Let’s debug.&lt;/p&gt;

&lt;h2&gt;Debugging&lt;/h2&gt;

&lt;p&gt;Before we debug, remember that we are using some extra libraries.&amp;nbsp; In order to make this work, your analyzer needs to ship those alongside.&amp;nbsp; This isn’t easy to figure out and you need to specify this in your csproj files to add additional outputs to your Package and Vsix projects.&amp;nbsp; I’m not going to list them here, but you can look at this sample to see what I did.&amp;nbsp; Without this, the analyzer won’t start.&amp;nbsp; Please note if you are not using any 3rd party libraries then this isn’t required.&amp;nbsp; In my case I added System.Text.Json and so this is required.&lt;/p&gt;

&lt;p&gt;I found the easiest way to debug was to set the VSIX project as the startup and just F5 that project.&amp;nbsp; This launches another instance of Visual Studio and installs your analyzer as an extension.&amp;nbsp; When running analyzers as extensions these do NOT affect the build.&amp;nbsp; So even though you may have analyzer errors, they don’t prevent the build from happening.&amp;nbsp; Installing analyzers as NuGet packages into the project would affect the build and generate build errors during CI, for example.&amp;nbsp; For now we’ll use the VSIX project to debug.&amp;nbsp; When it launches, create a new project or something to test with…I just used a console application.&amp;nbsp; Remember when I mentioned earlier that the &lt;strong&gt;consumer&lt;/strong&gt; project has to provide the terms dictionary?&amp;nbsp; It’s in this project that you’ll want to drop a terms.json file in the format mentioned earlier.&amp;nbsp; This file also must be given the build action of “C# analyzer additional file” in the file properties.&amp;nbsp; Then let’s start writing code that includes method names that violate our rules.&amp;nbsp; When doing that we should now see the analyzer kick in and show the issues:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of analyzer errors and warnings" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of analyzer errors and warnings" src="https://storage2.timheuer.com/capost5.png" width="1259" height="1077"&gt;&lt;/p&gt;

&lt;p&gt;Nice!&amp;nbsp; It worked.&amp;nbsp; One of the nuances of the template and the code fix is that I need to register a code action for each type of declaration that we previously wanted the analyzer to work against (I think…still learning).&amp;nbsp; Without that the proper fix will not actually show/work if it isn’t the right type.&amp;nbsp; The template defaults are for NamedType, so my sample using a method name won’t work with the fix, because it’s not the right declaration (again, I think…comment if you know).&amp;nbsp; I’ll have to enhance this more later, but the general workflow is working, and if the type name contains a bad term you can see the full end-to-end flow working.&lt;/p&gt;

&lt;h2&gt;Building it all in CI&lt;/h2&gt;

&lt;p&gt;Now let’s make sure we can have reproducible builds of our NuGet and VSIX packages.&amp;nbsp; I’m using my quick template that I created for &lt;a href="https://timheuer.com/blog/generate-github-actions-workflow-from-cli/"&gt;creating a simple workflow for GitHub Actions&lt;/a&gt; from the CLI and modifying it a bit.&amp;nbsp; Because this solution includes a VSIX, we need to use a Windows build agent that has Visual Studio on it and thankfully GitHub Actions provides one.&amp;nbsp; Here’s my resulting final CI build definition:&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;name: "Build"

on:
  push:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
  workflow_dispatch:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
      
jobs:
  build:
    if: github.event_name == 'push' &amp;amp;&amp;amp; contains(toJson(github.event.commits), '***NO_CI***') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[ci skip]') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[skip ci]') == false
    name: Build 
    runs-on: windows-latest
    env:
      DOTNET_CLI_TELEMETRY_OPTOUT: 1
      DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
      DOTNET_NOLOGO: true
      DOTNET_GENERATE_ASPNET_CERTIFICATE: false
      DOTNET_ADD_GLOBAL_TOOLS_TO_PATH: false
      DOTNET_MULTILEVEL_LOOKUP: 0
      PACKAGE_PROJECT: src\SimpleAnalyzer\SimpleAnalyzer.Package\
      VSIX_PROJECT: src\SimpleAnalyzer\SimpleAnalyzer.Vsix\

    steps:
    - uses: actions/checkout@v2
      
    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 5.0.x

    - name: Setup MSBuild
      uses: microsoft/setup-msbuild@v1

    - name: Setup NuGet
      uses: NuGet/setup-nuget@v1.0.5

    - name: Add GPR Source
      run: nuget sources Add -Name "GPR" -Source ${{ secrets.GPR_URI }} -UserName ${{ secrets.GPR_USERNAME }} -Password ${{ secrets.GITHUB_TOKEN }}

    - name: Build NuGet Package
      run: |
        msbuild /restore ${{ env.PACKAGE_PROJECT }} /p:Configuration=Release /p:PackageOutputPath=${{ github.workspace }}\artifacts

    - name: Build VSIX Package
      run: |
        msbuild /restore ${{ env.VSIX_PROJECT }} /p:Configuration=Release /p:OutDir=${{ github.workspace }}\artifacts

    - name: Push to GitHub Packages
      run: nuget push ${{ github.workspace }}\artifacts\*.nupkg -Source "GPR"

    # upload artifacts
    - name: Upload artifacts
      uses: actions/upload-artifact@v2
      with:
        name: release-packages
        path: |
            ${{ github.workspace }}\artifacts\**\*.vsix
            ${{ github.workspace }}\artifacts\**\*.nupkg
&lt;/pre&gt;

&lt;p&gt;I’ve done some extra work to publish this in GitHub Packages, but that’s an optional step and can be removed (ideally you’d publish this in the NuGet repository, and you can learn about that by reading my blog post on that topic).&amp;nbsp; I’ve now got my CI set up and every commit builds the packages that can be consumed!&amp;nbsp; You might be asking what the difference is between the NuGet and VSIX packages.&amp;nbsp; The simple explanation is that you’d want people consuming your analyzer via the NuGet package because that is per-project and travels with the project, so everyone building the project gets the benefit of the analyzer.&amp;nbsp; The VSIX is per-machine and doesn’t affect builds, so while that may be fine for your own scenario, it wouldn’t be consistent for everyone consuming the project who actually wants the analyzer.&lt;/p&gt;
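
&lt;p&gt;As a quick illustration of the per-project NuGet option, consuming an analyzer is just a package reference in the project file.&amp;nbsp; This is a sketch (the package id and version here are placeholders); setting PrivateAssets to ‘all’ is the idiomatic way to keep the analyzer from flowing transitively to consumers of your own package:&lt;/p&gt;

&lt;pre class="brush: xml;"&gt;&amp;lt;ItemGroup&amp;gt;
  &amp;lt;!-- Analyzer package: per-project, travels with the repo/build --&amp;gt;
  &amp;lt;PackageReference Include=&amp;quot;SimpleAnalyzer&amp;quot; Version=&amp;quot;1.0.0&amp;quot; PrivateAssets=&amp;quot;all&amp;quot; /&amp;gt;
&amp;lt;/ItemGroup&amp;gt;
&lt;/pre&gt;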

&lt;h2&gt;Summary and resources&lt;/h2&gt;

&lt;p&gt;For me this was a fun exercise and distraction.&amp;nbsp; With a very much needed assist from &lt;a href="https://twitter.com/JonathanMarolf"&gt;Jonathan Marolf&lt;/a&gt; on the team, I learned a bunch and got the help I needed on the 3rd-party library issue mentioned earlier.&amp;nbsp; I’ve got a few TODO items to accomplish as I didn’t fully realize my goals.&amp;nbsp; The code fix isn’t working exactly how I wanted and would have thought, so I’m still working my way through this.&amp;nbsp; This whole sample, by the way, is on GitHub at &lt;a href="https://github.com/timheuer/SimpleAnalyzer"&gt;timheuer/SimpleAnalyzer&lt;/a&gt; for you to use and to point out my numerous mistakes in everything analyzer-related, and probably C# too…please do!&amp;nbsp; In fact, as I started this and conversed with a few folks on Twitter, a group in Australia created something very similar that is already published on NuGet.&amp;nbsp; Check out &lt;strong&gt;&lt;a href="https://github.com/merill/InclusivenessAnalyzer/"&gt;merill/InclusivenessAnalyzer&lt;/a&gt;&lt;/strong&gt;, which aims to improve inclusivity in code.&amp;nbsp; The source, like mine, is up on GitHub and they have it published in the NuGet Gallery so you can add it to your project today!&lt;/p&gt;

&lt;p&gt;There are a few resources you should look at if you want to take this journey yourself:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://devblogs.microsoft.com/dotnet/how-to-write-a-roslyn-analyzer/"&gt;How to write a Roslyn analyzer&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;Learn Roslyn Now from Josh Varty (&lt;a href="https://joshvarty.com/learn-roslyn-now/"&gt;blog&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=wXXHd8gYqVg&amp;amp;list=PLxk7xaZWBdUT23QfaQTCJDG6Q1xx6uHdG"&gt;videos&lt;/a&gt;)&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://github.com/dotnet/roslyn/blob/master/docs/analyzers/Analyzer%20Samples.md"&gt;Analyzer samples in the Roslyn repo&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/"&gt;Roslyn SDK&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/tutorials/how-to-write-csharp-analyzer-code-fix"&gt;Tutorial: Write your first analyzer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/generate-github-actions-workflow-from-cli/</id>
    <title>Generate a GitHub Actions workflow file from dotnet CLI</title>
    <updated>2020-11-03T18:54:33Z</updated>
    <published>2020-11-03T18:54:33Z</published>
    <link href="https://www.timheuer.com/blog/generate-github-actions-workflow-from-cli/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="dotnet" />
    <category term="github" />
    <category term="devops" />
    <category term="workflow" />
    <content type="html">&lt;p&gt;I’ve become a huge fan of DevOps and spending more time ensuring my own projects have a good CI/CD automation using GitHub Actions.&amp;#160; The team I work on in Visual Studio for .NET develops the “right click publish” feature that has become a tag line for DevOps folks (okay, maybe not in the post flattering way!).&amp;#160; We know that a LOT of developers use the Publish workflow in Visual Studio for their .NET applications for various reasons.&amp;#160; In reaching out to a sampling and discussing CI/CD we heard a lot of folks talking about they didn’t have the time to figure it out, it was too confusing, there was no simple way to get started, etc.&amp;#160; In this past release we aimed to improve that experience for those users of Publish to help them very quickly get started with CI/CD for their apps deploying to Azure.&amp;#160; Our new feature enables you to &lt;a href="https://devblogs.microsoft.com/visualstudio/using-github-actions-in-visual-studio-is-as-easy-as-right-click-and-publish/" target="_blank"&gt;generate a GitHub Actions workflow file using the Publish wizard&lt;/a&gt; to walk you through it.&amp;#160; In the end you have a good getting started workflow.&amp;#160; I did a quick video on it to demonstrate how easy it is:&lt;/p&gt;  &lt;blockquote class="twitter-tweet"&gt;   &lt;p lang="en" dir="ltr"&gt;One of my favorite features we've been working on in &lt;a href="https://twitter.com/VisualStudio?ref_src=twsrc%5Etfw"&gt;@VisualStudio&lt;/a&gt;. Yes, Right Click Publish!!!! 
&lt;a href="https://twitter.com/hashtag/devops?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#devops&lt;/a&gt; &lt;a href="https://twitter.com/hashtag/dotnet?src=hash&amp;amp;ref_src=twsrc%5Etfw"&gt;#dotnet&lt;/a&gt; &lt;a href="https://t.co/Jy2jSWplam"&gt;pic.twitter.com/Jy2jSWplam&lt;/a&gt;&lt;/p&gt; — Tim Heuer (@timheuer) &lt;a href="https://twitter.com/timheuer/status/1323678182403313664?ref_src=twsrc%5Etfw"&gt;November 3, 2020&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src="https://platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;  &lt;p&gt;It really is that simple!&amp;#160; &lt;/p&gt;  &lt;h2&gt;Making it simple from the start&lt;/h2&gt;  &lt;p&gt;I have to admit though, as much as I have been doing this the YAML still is not sticking in my memory enough to type from scratch (dang you IntelliSense for making me lazy!).&amp;#160; There are also times where I’m not using Azure as my deployment but still want CI/CD to something like NuGet for my packages.&amp;#160; I still want that flexibility to get started quickly and ensure as my project grows I’m not waiting to the last minute to add more to my workflow.&amp;#160; I just saw &lt;a href="https://twitter.com/damovisa" target="_blank"&gt;Damian&lt;/a&gt; comment on this recently as well:&lt;/p&gt;  &lt;blockquote class="twitter-tweet"&gt;   &lt;p lang="en" dir="ltr"&gt;100% this.      &lt;br /&gt;Create a pipeline to deploy Hello World, then build on that. You'll only have to tweak the pipeline as you go instead of trying to figure out how to deploy this big complex thing at the end. 
&lt;a href="https://t.co/DV8J1IXtQW"&gt;https://t.co/DV8J1IXtQW&lt;/a&gt;&lt;/p&gt; — Damian Brady  #BLM (@damovisa) &lt;a href="https://twitter.com/damovisa/status/1323242702457073664?ref_src=twsrc%5Etfw"&gt;November 2, 2020&lt;/a&gt;&lt;/blockquote&gt; &lt;script async src="https://platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;  &lt;p&gt;I totally agree!&amp;#160; Recently I found myself continuing to go to older repos to copy/paste from existing workflows I had.&amp;#160; Sure, I can do that because I’m good at copying/pasting, but it was just frustrating to switch context for even that little bit.&amp;#160; My searching may suck but I also didn’t see a quick solution to this either (please point out in my comments below if I missed a better solution!!!).&amp;#160; So I created a quick `dotnet new` way of doing this for my projects from the CLI.&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of terminal window with commands" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of terminal window with commands" src="https://storage2.timheuer.com/dotnetnewworkflow.png" width="1466" height="780" /&gt;&lt;/p&gt;  &lt;p&gt;I created a simple item template that can be called using the `dotnet new` command from the CLI.&amp;#160; Calling this in simplest form:&lt;/p&gt;  &lt;pre class="brush: ps;"&gt;dotnet new workflow
&lt;/pre&gt;

&lt;p&gt;will create a .github\workflows\foo.yaml file where you call it from (where ‘foo’ is the name of your folder) with the default content of a workflow for .NET Core that restores/builds/tests your project (using a default SDK version and ‘main’ as the branch).&amp;#160; You can customize the output a bit more with a command like:&lt;/p&gt;

&lt;pre class="brush: ps;"&gt;dotnet new workflow --sdk-version 3.1.403 -n build -b your_branch_name
&lt;/pre&gt;

&lt;p&gt;This will enable you to specify a specific SDK version, a specific name for the .yaml file, and the branch to monitor to trigger the workflow.&amp;#160; An example of the output is here:&lt;/p&gt;

&lt;pre class="brush: ps; highlight: [6,13,38];"&gt;name: &amp;quot;Build&amp;quot;

on:
  push:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
  workflow_dispatch:
    branches:
      - main
    paths-ignore:
      - '**/*.md'
      - '**/*.gitignore'
      - '**/*.gitattributes'
      
jobs:
  build:
    if: github.event_name == 'push' &amp;amp;&amp;amp; contains(toJson(github.event.commits), '***NO_CI***') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[ci skip]') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[skip ci]') == false
    name: Build 
    runs-on: ubuntu-latest
    env:
      DOTNET_CLI_TELEMETRY_OPTOUT: 1
      DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
      DOTNET_NOLOGO: true
      DOTNET_GENERATE_ASPNET_CERTIFICATE: false
      DOTNET_ADD_GLOBAL_TOOLS_TO_PATH: false
      DOTNET_MULTILEVEL_LOOKUP: 0

    steps:
    - uses: actions/checkout@v2
      
    - name: Setup .NET Core SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.x

    - name: Restore
      run: dotnet restore

    - name: Build
      run: dotnet build --configuration Release --no-restore

    - name: Test
      run: dotnet test
&lt;/pre&gt;

&lt;p&gt;You can see the areas that would be replaced by the input parameters on lines 6, 13, and 38 in this example (these are the defaults).&amp;#160; This isn’t meant to be your final workflow, but as Damian suggests, it is a good practice to start immediately from your “File…New Project” moment and build up the workflow as you go along, rather than wait until the end to cobble everything together.&amp;#160; For me, now I just need to add my specific NuGet deployment steps when I’m ready to do so.&lt;/p&gt;

&lt;h2&gt;Installing and feature wishes&lt;/h2&gt;

&lt;p&gt;If you find this helpful feel free to install this template from NuGet using:&lt;/p&gt;



&lt;pre class="brush: ps;"&gt;dotnet new --install TimHeuer.GitHubActions.Templates
&lt;/pre&gt;



&lt;p&gt;You can find the package at &lt;a href="https://www.nuget.org/packages/TimHeuer.GitHubActions.Templates/" target="_blank"&gt;TimHeuer.GitHubActions.Templates&lt;/a&gt;, which also has the link to the repo if you see awesome changes or horrible bugs.&amp;#160; This is a simple item template, so there are some limitations and things I wish it would do automatically.&amp;#160; Honestly, I started out making a global tool that would solve some of these but it felt a bit overkill.&amp;#160; For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It adds the template from wherever you are executing it.&amp;#160; Actions need to be in the root of your repo, so you need to execute this in the root of your repo locally.&amp;#160; Otherwise it is just going to add some folders in random places that won’t work.&lt;/li&gt;

  &lt;li&gt;It won’t auto-detect the SDK you are using.&amp;#160; Not horrible, but it would be nice to say “oh, you are a .NET 5 app, then this is the SDK you need”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both of these could be solved with more access to the project system in a global tool, but again, they are minor in my eyes.&amp;#160; Maybe I’ll get around to solving them, but selfishly I’m good for now!&lt;/p&gt;

&lt;p&gt;I just wanted to share this little tool that has become helpful for me, hope it helps you a bit!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/filtering-data-table-with-blazor/</id>
    <title>Filtering a Bootstrap table in C# and Blazor</title>
    <updated>2020-10-20T00:37:04Z</updated>
    <published>2020-10-20T00:35:59Z</published>
    <link href="https://www.timheuer.com/blog/filtering-data-table-with-blazor/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term=".net" />
    <category term="blazor" />
    <category term="dotnet" />
    <category term="aspnet" />
    <category term="visual studio" />
    <content type="html">&lt;p&gt;I was finally getting around to updating a little internal app I had that showed some various data that some groups use to triage bugs.&amp;#160; As you can imagine it is a classic “table of stuff” type dataset with various titles, numbers, IDs, etc. as visible columns.&amp;#160; I had built it using Blazor server and wanted to update it a bit.&amp;#160; In doing some of the updates I came across a preferred visual I liked for the grid view and applied the CASE methodology to implement that.&amp;#160; Oh you don’t know what CASE methodology is?&amp;#160; &lt;strong&gt;C&lt;/strong&gt;opy &lt;strong&gt;A&lt;/strong&gt;lways, &lt;strong&gt;S&lt;/strong&gt;teal &lt;strong&gt;E&lt;/strong&gt;verything.&amp;#160; In this case the culprit was &lt;a href="https://twitter.com/terrajobst"&gt;Immo&lt;/a&gt; on my team.&amp;#160; I know right? I couldn’t believe it either that he had something I wanted to take from a UI standpoint.&amp;#160; I digress…&lt;/p&gt;  &lt;p&gt;In the end I wanted to provide a rendered table UI quickly and provide a global filter:&lt;/p&gt;  &lt;p&gt;&lt;img title="Picture of a filtered data table" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of a filtered data table" src="https://storage2.timheuer.com/blazorfilterpreview2.png" width="1803" height="798" /&gt;&lt;/p&gt;  &lt;h2&gt;Styling the table&lt;/h2&gt;  &lt;p&gt;I copied what I needed and realized I could be using the &lt;a href="https://getbootsrap.com"&gt;Bootstrap&lt;/a&gt; styles/tables in my use case.&amp;#160; Immo was using just &amp;lt;divs&amp;gt; but I own this t-shirt, so I went with &amp;lt;table&amp;gt; and plus, I like that Bootsrap had a &lt;a href="https://getbootstrap.com/docs/4.0/content/tables/"&gt;nice example&lt;/a&gt; for me.&amp;#160; Off I went and changed my iteration loop. 
to a nice beautiful striped table.&amp;#160; Here’s what it looked like in the styling initially:&lt;/p&gt;  &lt;pre class="brush: xml;"&gt;&amp;lt;table class=&amp;quot;table table-striped&amp;quot;&amp;gt;
    &amp;lt;thead class=&amp;quot;thead-light&amp;quot;&amp;gt;
        &amp;lt;tr&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Date&amp;lt;/th&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Temp. (C)&amp;lt;/th&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Temp. (F)&amp;lt;/th&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Summary&amp;lt;/th&amp;gt;
        &amp;lt;/tr&amp;gt;
    &amp;lt;/thead&amp;gt;
    &amp;lt;tbody&amp;gt;
        @foreach (var forecast in forecasts)
        {
            &amp;lt;tr&amp;gt;
                &amp;lt;td&amp;gt;@forecast.Date.ToShortDateString()&amp;lt;/td&amp;gt;
                &amp;lt;td&amp;gt;@forecast.TemperatureC&amp;lt;/td&amp;gt;
                &amp;lt;td&amp;gt;@forecast.TemperatureF&amp;lt;/td&amp;gt;
                &amp;lt;td&amp;gt;@forecast.Summary&amp;lt;/td&amp;gt;
            &amp;lt;/tr&amp;gt;
        }
    &amp;lt;/tbody&amp;gt;
&amp;lt;/table&amp;gt;
&lt;/pre&gt;

&lt;h2&gt;Adding a filter&lt;/h2&gt;

&lt;p&gt;Now I wanted to add some filtering capabilities more globally.&amp;#160; Off to some “bootstrap filtering” searching I went, and I landed on &lt;a href="https://www.w3schools.com/Bootstrap/bootstrap_filters.asp"&gt;this simple tutorial&lt;/a&gt;.&amp;#160; Wow! A few lines of JavaScript, sweet, done.&amp;#160; Or so I thought.&amp;#160; As someone who hasn’t done a lot of SPA web app development, I was quickly hit with the reality that once you choose a SPA framework (like Angular, React, Vue, or Blazor) you are essentially buying in to the whole philosophy, and for the most part jQuery-style DOM manipulations will no longer be at your fingertips as easily.&amp;#160; Sigh, off to some teammates I went to complain and look for their sympathy.&amp;#160; Narrator: they had no sympathy.&lt;/p&gt;

&lt;p&gt;After another quick chat with Immo, who had implemented the same thing, he smacked me around and said in the most polite German accent “Why don’t you just use C#, idiot?”&amp;#160; Okay, I added the idiot part, but I felt like he was typing it and then deleted that part before hitting send.&amp;#160; Knowing that Blazor renders everything and then re-renders when things change, I just had to implement some checking logic in the foreach loop.&amp;#160; First I needed to add the filter input field:&lt;/p&gt;

&lt;pre class="brush: xml; highlight: [3,4];"&gt;&amp;lt;div class=&amp;quot;form-group&amp;quot;&amp;gt;
    &amp;lt;input class=&amp;quot;form-control&amp;quot; type=&amp;quot;text&amp;quot; placeholder=&amp;quot;Filter...&amp;quot; 
           @bind=&amp;quot;Filter&amp;quot; 
           @bind:event=&amp;quot;oninput&amp;quot;&amp;gt;
&amp;lt;/div&amp;gt;
&amp;lt;table class=&amp;quot;table table-striped&amp;quot;&amp;gt;
...
&amp;lt;/table&amp;gt;
&lt;/pre&gt;

&lt;p&gt;Observe that I added the @bind and @bind:event attributes, which enable me to wire these up to properties and client-side events.&amp;#160; So I’m telling it to bind the input to my ‘Filter’ property and to do this on ‘oninput’ (basically whenever keys are typed in the input box).&amp;#160; Now off to implement the property.&amp;#160; I’m doing this simply in the @code block of the page itself:&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [9];"&gt;@code {
    private WeatherForecast[] forecasts;

    protected override async Task OnInitializedAsync()
    {
        forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }

    public string Filter { get; set; }
}
&lt;/pre&gt;

&lt;p&gt;And then I needed to implement the logic for filtering.&amp;#160; I’m doing a global filter so that I can control whatever fields I want searched/filtered.&amp;#160; I basically have the IsVisible function called on each iteration to decide if the row should be rendered.&amp;#160; For this sample I’m looking at whether the summary contains the filter text, or whether the Celsius or Fahrenheit temperatures start with the digits being entered.&amp;#160; I actually have access to the item model, so I could even filter off of something not visible if I wanted (which would be weird for your users, so you probably shouldn’t do that).&amp;#160; Here’s what I implemented:&lt;/p&gt;

&lt;pre class="brush: csharp; highlight: [6,9];"&gt;public bool IsVisible(WeatherForecast forecast)
{
    if (string.IsNullOrEmpty(Filter))
        return true;

    if (forecast.Summary.Contains(Filter, StringComparison.OrdinalIgnoreCase))
        return true;

    if (forecast.TemperatureC.ToString().StartsWith(Filter) || forecast.TemperatureF.ToString().StartsWith(Filter))
        return true;

    return false;
}
&lt;/pre&gt;

&lt;h2&gt;Implementing the filter&lt;/h2&gt;

&lt;p&gt;Once I had the property, the input event, and the logic, I just needed to use them in my loop.&amp;#160; A simple change to the foreach loop does the trick:&lt;/p&gt;

&lt;pre class="brush: xml; highlight: [13,14];"&gt;&amp;lt;table class=&amp;quot;table table-striped&amp;quot;&amp;gt;
    &amp;lt;thead class=&amp;quot;thead-light&amp;quot;&amp;gt;
        &amp;lt;tr&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Date&amp;lt;/th&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Temp. (C)&amp;lt;/th&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Temp. (F)&amp;lt;/th&amp;gt;
            &amp;lt;th scope=&amp;quot;col&amp;quot;&amp;gt;Summary&amp;lt;/th&amp;gt;
        &amp;lt;/tr&amp;gt;
    &amp;lt;/thead&amp;gt;
    &amp;lt;tbody&amp;gt;
        @foreach (var forecast in forecasts)
        {
            if (!IsVisible(forecast))
                continue;
            &amp;lt;tr&amp;gt;
                &amp;lt;td&amp;gt;@forecast.Date.ToShortDateString()&amp;lt;/td&amp;gt;
                &amp;lt;td&amp;gt;@forecast.TemperatureC&amp;lt;/td&amp;gt;
                &amp;lt;td&amp;gt;@forecast.TemperatureF&amp;lt;/td&amp;gt;
                &amp;lt;td&amp;gt;@forecast.Summary&amp;lt;/td&amp;gt;
            &amp;lt;/tr&amp;gt;
        }
    &amp;lt;/tbody&amp;gt;
&amp;lt;/table&amp;gt;
&lt;/pre&gt;

&lt;p&gt;Now when I type, it automatically filters the view based on input.&amp;#160; Like a thing of beauty.&amp;#160; Here it is in action:&lt;/p&gt;

&lt;p&gt;&lt;img title="Animation of table being filtered" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Animation of table being filtered" src="https://storage2.timheuer.com/blazorfilteringtable.gif" width="1280" height="720" /&gt;&lt;/p&gt;

&lt;p&gt;Pretty awesome.&amp;#160; While I’ve used the default template here to show this example, this technique can of course be applied to your own logic.&amp;#160; I’ve put this in a repo to look at in more detail (this is running .NET 5 RC2 bits) at &lt;a href="https://github.com/timheuer/BlazorFilteringWithBootstrap" target="_blank"&gt;timheuer/BlazorFilteringWithBootstrap&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;More advanced filtering&lt;/h2&gt;

&lt;p&gt;This was a simple use case and worked fine for me.&amp;#160; But there are more advanced use cases and better user experiences that provide more logic in the filter (e.g., defining your own contains-versus-equals matching), and that’s where 3rd-party components come in.&amp;#160; There are a lot that provide built-in grids with this capability.&amp;#160; Here are just a few:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://www.telerik.com/blazor-ui/grid" target="_blank"&gt;Telerik UI for Blazor – Grid&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://www.devexpress.com/blazor/data-grid/" target="_blank"&gt;DevExpress Blazor DataGrid&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://www.infragistics.com/products/ignite-ui-blazor" target="_blank"&gt;Infragistics Ignite UI Blazor Data Grid&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://blazor.radzen.com/datagrid" target="_blank"&gt;Radzen DataGrid&lt;/a&gt;&lt;/li&gt;

  &lt;li&gt;&lt;a href="https://www.syncfusion.com/blazor-components/blazor-datagrid" target="_blank"&gt;Syncfusion Blazor DataGrid&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just to name a few popular ones.&amp;#160; These are all great components authored by proven vendors in the .NET component space.&amp;#160; They are way richer than simple filtering and provide a plethora of capabilities on top of grid-based rendering of large sets of data.&amp;#160; I recommend you check them out if you have those needs.&lt;/p&gt;

&lt;p&gt;I’m enjoying my own journey writing Blazor apps and hope you found this dumb little sample useful.&amp;#160; If not, that’s cool.&amp;#160; I’m mainly bookmarking it here for my own use later when I forget and need to search…maybe I’ll find it back on my own site.&lt;/p&gt;

&lt;p&gt;Hope this helps!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/hosting-blazor-in-azure-static-web-apps/</id>
    <title>Hosting Blazor WebAssembly in Azure Static Web Apps (Preview)</title>
    <updated>2020-05-20T21:18:57Z</updated>
    <published>2020-05-19T17:37:57Z</published>
    <link href="https://www.timheuer.com/blog/hosting-blazor-in-azure-static-web-apps/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="blazor" />
    <category term="aspnet" />
    <category term="github" />
    <category term="devops" />
    <category term="azure" />
    <content type="html">&lt;p&gt;At Build the Azure team launched a new service called Azure Static Web Apps in preview. This service is tailored for scenarios that really work well when you have a static web site front-end and using things like serverless APIs for your communication to services/data/etc. You should read more about it here: &lt;a href="https://aka.ms/swapreview"&gt;&lt;strong&gt;Azure Static Web Apps&lt;/strong&gt;&lt;/a&gt; .&lt;/p&gt;  &lt;p&gt;Awesome, so &lt;strong&gt;Blazor WebAssembly (Wasm) &lt;/strong&gt;is a static site right? Can you use this new service to host your Blazor Wasm app? Let’s find out!&lt;/p&gt;  &lt;blockquote&gt;   &lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This is just an experiment for me.&amp;#160; This isn’t any official stance of what may come with the service, but only what we can do now with Blazor apps.&amp;#160; As you see with Azure Static Web Apps there is a big end-to-end there with functions, debug experience, etc.&amp;#160; I just wanted to see if Blazor Wasm (as a static web app) could be pushed to the service.&lt;/p&gt; &lt;/blockquote&gt;  &lt;p&gt;As of this post the service is tailored toward JavaScript app development and works seamlessly in that setup. However, with a few tweaks (for now) we can get our Blazor app working. First we’ll need to get things setup!&lt;/p&gt;  &lt;h2&gt;Setting up your repo&lt;/h2&gt;  &lt;p&gt;The fundamental aspects of the service are deployment from your source in GitHub using GitHub Actions. So first you’ll need to make sure you have a repository on GitHub.com for your repository. I’m going to continue to use my &lt;a href="https://timheuer.com/blog/deploy-blazor-webassembly-applications-on-azure-using-github-actions-wasm/"&gt;Blazor Wasm Hosting Sample repo&lt;/a&gt; (which has different options as well to host Wasm apps) for this example. My app is the basic Blazor Wasm template, nothing fancy at all. 
Okay, we’ve got the repo set up, now let’s get the service setup.&lt;/p&gt;  &lt;h2&gt;Create the Azure Static Web App resource&lt;/h2&gt;  &lt;p&gt;You’ll need an Azure account of course and if you don’t have one, you can &lt;a href="https://azure.com/free"&gt;create an Azure account for free&lt;/a&gt;. Go ahead and do that and then come back here to make it easier to follow along. Once you have the account you’ll log in to the Azure portal and create a new resource using the Static Web App (Preview) resource type. You’ll see a simple form to fill out a few things like your resource group and a name for your app and the region.&lt;/p&gt;  &lt;p&gt;&lt;img title="Screenshot of Azure Portal configuration" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of Azure Portal configuration" src="https://storage2.timheuer.com/bwasmstatic1.png" width="1157" height="925" /&gt;&lt;/p&gt;  &lt;p&gt;The last thing there is where you’ll connect to your GitHub repo and make selections for what repo to use. 
It will launch you to authorize Azure Static Web Apps to make changes to your repo (for workflow and adding secrets):&lt;/p&gt;  &lt;p&gt;&lt;img title="Picture of GitHub permission prompt" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of GitHub permission prompt" src="https://storage2.timheuer.com/bwasmstatic2.png" width="839" height="663" /&gt;&lt;/p&gt;  &lt;p&gt;Once authorized then more options show for the resource creation and just choose your org/repo/branch:&lt;/p&gt;  &lt;p&gt;&lt;img title="Picture of GitHub repo choices" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of GitHub repo choices" src="https://storage2.timheuer.com/bwasmstatic3.png" width="1114" height="374" /&gt;&lt;/p&gt;  &lt;p&gt;Once you complete these selections, click Review+Create and the resource will create! The process will take a few minutes, but when complete you’ll have a resource with a few key bits of information:&lt;/p&gt;  &lt;p&gt;&lt;img title="Picture of finished Azure resource config" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of finished Azure resource config" src="https://storage2.timheuer.com/bwasmstatic4.png" width="1883" height="436" /&gt;&lt;/p&gt;  &lt;p&gt;The URL of your app is auto-generated with probably a name that will make you chuckle a bit. Hey, it’s random, don’t try to make sense of it, just let the names like “icy cliff” inspire you. Additionally you’ll see the “Workflow file” YAML file and link. If you click it (go ahead and do that) it will take us over to your repo and the GitHub Actions workflow file that was created. 
We’ll take a look at the details next, but for now if you navigate to the Actions tab of your repo, you’ll see a failure. This is expected for us right now at this step…more on that later.&lt;/p&gt;  &lt;p&gt;&lt;img title="Picture of workflows in Actions" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of workflows in Actions" src="https://storage2.timheuer.com/bwasmstatic5.png" width="1554" height="340" /&gt;&lt;/p&gt;  &lt;p&gt;In addition to the Actions workflow, navigate to the Settings tab of your repo and choose Secrets. You’ll see a new secret (with that random name) was added to your repo.&lt;/p&gt;  &lt;p&gt;&lt;img title="Picture of GitHub secrets" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of GitHub secrets" src="https://storage2.timheuer.com/bwasmstatic6.png" width="1182" height="718" /&gt;&lt;/p&gt;  &lt;p&gt;This is the API token needed to communicate with the service.&lt;/p&gt;  &lt;p&gt;Why can’t you see the token itself and give the secret a different name? Great question. For now just know that you can’t. Maybe this will change, but this is the secret name you’ll have to use. It’s cool though, the only place it is used is in your workflow file. Speaking of that file, let’s take a look at it in more detail now!&lt;/p&gt;  &lt;h2&gt;Understanding and modifying the Action&lt;/h2&gt;  &lt;p&gt;So the initial workflow file was created and added to your repo with all the defaults. Namely we’re going to focus on the “jobs” node of the workflow, which should start about line 12. The earlier portions of the workflow define the triggers, which you can modify if you’d like, but they are intended to be a part of your overall CI/CD flow with the static site (automatic PR closure, etc.). 
Let’s look at the jobs as-is:&lt;/p&gt;  &lt;pre class="brush: yaml; first-line: 12; highlight: [18,23];"&gt;jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' &amp;amp;&amp;amp; github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
    - uses: actions/checkout@v2
    - name: Build And Deploy
      id: builddeploy
      uses: Azure/static-web-apps-deploy@v0.0.1-preview
      with:
        azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_ICY_CLIFF_XXXXXXXXX }}
        repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
        action: 'upload'
        ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
        app_location: '/' # App source code path
        api_location: 'api' # Api source code path - optional
        app_artifact_location: '' # Built app content directory - optional
        ###### End of Repository/Build Configurations ######
&lt;/pre&gt;

&lt;p&gt;Before we make changes, let’s just look. See that parameter for the api token? It’s using the secret that was added to your repo. GitHub Actions has a built-in ‘secrets’ context that can reference those secrets, and this is where it gets used. It is required for proper deployment, so that is the relationship between the secret and the workflow!&lt;/p&gt;

&lt;p&gt;This is great, but it was failing for our Blazor Wasm app. Why? Because the service is trying to build the app and doesn’t quite know how yet. That’s fine, we can nudge it along! I’m going to make some changes here. First, make sure the checkout step on Line 18 uses @v2 (the generated file may still specify @v1). This is faster.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: I suspect this will change to be the default soon, but you can change it now to use v2&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now we need to get .NET SDK set up to build our Blazor app. So after the checkout step, let’s add another to first set up the .NET SDK we want to use. It will look like this, using the setup-dotnet action:&lt;/p&gt;

&lt;pre class="brush: yaml; first-line: 18; highlight: [20-23];"&gt;    - uses: actions/checkout@v2
    
    - name: Setup .NET SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.201
&lt;/pre&gt;

&lt;p&gt;Now that we are set up, we need to build the Blazor app. So let’s add another step that explicitly builds the app and publishes it to a specific output location for easy reference in a later step!&lt;/p&gt;

&lt;pre class="brush: yaml; first-line: 18; highlight: [25-26];"&gt;    - uses: actions/checkout@v2
    
    - name: Setup .NET SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.201

    - name: Build App
      run: dotnet publish -c Release -o published
&lt;/pre&gt;

&lt;p&gt;There, now we’ve got it building!&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: I’m taking a bit of a shortcut in this tutorial and I’d recommend the actual best practice of Restore, Build, Test, Publish as separate steps. This allows you to more precisely see what is going on in your CI and clearly see what steps may fail, etc.&lt;/p&gt;
&lt;/blockquote&gt;
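&lt;p&gt;If you do want to follow that practice, the single publish step above could be split into separate steps. Here is a rough sketch (assuming the default project discovery works for your single-project repo):&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;    - name: Restore
      run: dotnet restore

    - name: Build
      run: dotnet build -c Release --no-restore

    - name: Test
      run: dotnet test -c Release --no-build

    - name: Publish
      run: dotnet publish -c Release -o published --no-build
&lt;/pre&gt;

&lt;p&gt;Each step then succeeds or fails independently in the Actions log, making it easier to spot where a problem occurred.&lt;/p&gt;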

&lt;p&gt;Our Blazor app is now built and prepared for static deployment in the ‘published’ location referenced by the ‘-o’ parameter during publish. All the files we need start at the root of that folder. A typical published Blazor Wasm app will have a web.config and a wwwroot folder at the published location.&lt;/p&gt;

&lt;p&gt;&lt;img title="Picture of Windows explorer folders" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Picture of Windows explorer folders" src="https://storage2.timheuer.com/bwasmstatic7.png" width="518" height="245" /&gt;&lt;/p&gt;

&lt;p&gt;Let’s get back to the action defaults. Head back to the YAML file and look for the ‘app_location’ parameter in the action. We now want to change that to our published folder location, and specifically the wwwroot location as the root (for now the web.config won’t be helpful). So you’d change it to look like this (a snippet of the YAML file):&lt;/p&gt;

&lt;pre class="brush: yaml; first-line: 18; highlight: [36-38];"&gt;    - uses: actions/checkout@v2
    
    - name: Setup .NET SDK
      uses: actions/setup-dotnet@v1
      with:
        dotnet-version: 3.1.201

    - name: Build App
      run: dotnet publish -c Release -o published

    - name: Build And Deploy
      id: builddeploy
      uses: Azure/static-web-apps-deploy@v0.0.1-preview
      with:
        azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_ICY_CLIFF_XXXXXXXXX }}
        repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
        action: 'upload'
        ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
        app_location: 'published/wwwroot' # App source code path
        api_location: '' # Api source code path - optional
        app_artifact_location: 'published/wwwroot' # Built app content directory - optional
        ###### End of Repository/Build Configurations ######
&lt;/pre&gt;

&lt;p&gt;This tells the Static Web App deployment steps to push our files from here. Go ahead and commit the workflow file back to your repository and the Action will trigger and you will see it complete:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of completed workflow steps" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of completed workflow steps" src="https://storage2.timheuer.com/bwasmstatic8.png" width="1361" height="1112" /&gt;&lt;/p&gt;

&lt;p&gt;We have now successfully deployed our Blazor Wasm app to the Static Web App Preview service! Now you’ll note that there is a lot of output in the Deploy step, including build warnings. For now this is okay, as we are not relying on the service to build our app (yet). You’ll also see the note about Functions not being found (reminder: we changed our api_location parameter to be empty). Let’s talk about that.&lt;/p&gt;

&lt;h2&gt;What about the Functions?&lt;/h2&gt;

&lt;p&gt;For now the service will automatically build a JavaScript app, including serverless functions written in JavaScript, in this one step. If you are a .NET developer you’ll most likely be building your functions in C# along with your Blazor front-end. Right now the service doesn’t allow you to specify an API location in your project for C# function classes and automatically build them. Hopefully in the future we will see that enabled. Until then you’ll have to deploy your functions app separately. You can do it in the same workflow, though, if it is part of the same repo. You’ll just leverage the other &lt;a href="https://github.com/Azure/functions-action"&gt;Azure Functions GitHub Action&lt;/a&gt; to accomplish that. Maybe I should update my sample repo to also include that?&lt;/p&gt;
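&lt;p&gt;If your C# Functions project lives in the same repo, a separate set of steps using that action might look roughly like this sketch. Note the project path, app name, and publish profile secret here are placeholders you would replace with your own values:&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;    - name: Build Functions
      run: dotnet publish api/MyFunctions.csproj -c Release -o functions-published

    - name: Deploy Functions
      uses: Azure/functions-action@v1
      with:
        app-name: my-functions-app
        package: functions-published
        publish-profile: ${{ secrets.FUNCTIONS_PUBLISH_PROFILE }}
&lt;/pre&gt;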

&lt;h2&gt;But wait, it is broken!&lt;/h2&gt;

&lt;p&gt;Well maybe you find out that the routing of URLs may not work all the time.&amp;#160; You’re right!&amp;#160; You need to supply a routes.json file located in your app’s wwwroot directory to provide the global rewrite rule so that URLs will always work.&amp;#160; The routes.json file should look like this:&lt;/p&gt;



&lt;pre class="brush: json;"&gt;{
  &amp;quot;routes&amp;quot;: [
    {
      &amp;quot;route&amp;quot;: &amp;quot;/*&amp;quot;,
      &amp;quot;serve&amp;quot;: &amp;quot;/index.html&amp;quot;,
      &amp;quot;statusCode&amp;quot;: 200
    }
  ]
}
&lt;/pre&gt;



&lt;p&gt;Put this file in your source project’s wwwroot folder.&amp;#160; It will be picked up by the service and interpreted so routes work!&lt;/p&gt;

&lt;h2&gt;Considerations and Summary&lt;/h2&gt;

&lt;p&gt;So you’ve now seen it’s possible, but you should also know the constraints. I’ve already noted that you’ll need to deploy your Functions app separately and you have to build your Blazor app in a pre-step (which I think is a good thing personally), so you may be wondering why you might use this service. I’ll leave that answer to you, as I think there are scenarios where it will be helpful, and I do believe this is just a point in time for the preview; more frameworks hopefully will be supported. I know those of us on the .NET team are working with the service to better support Blazor Wasm, for example.&lt;/p&gt;

&lt;p&gt;Another thing that Blazor build does for you is produce pre-compressed files for Brotli and Gzip compression delivered from the server. When you host Blazor Wasm using ASP.NET Core, we deliver these files to the client automatically (via middleware). When you host using Windows App Service you can supply a web.config to have rewrite rules that will solve this for you as well (you can in Linux as well). For the preview of the Static Web App service and Blazor Wasm, you won’t automatically get this, so your app size will be the default uncompressed sizes of the assemblies and static assets.&lt;/p&gt;

&lt;p&gt;I hope that you can give the service a try with your apps, regardless of whether they are Blazor or not. I just wanted to demonstrate how you would get started using the preview and make it work with your Blazor Wasm app. I’ve added this specific workflow to my Blazor Wasm Deployment Samples repository, where you can see other ways to deploy the client app on Azure as well.&lt;/p&gt;

&lt;p&gt;I hope this helps you see what’s possible in the preview today!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/deploy-blazor-webassembly-applications-on-azure-using-github-actions-wasm/</id>
    <title>Different ways to host Blazor WebAssembly (Wasm)</title>
    <updated>2020-05-12T00:01:36Z</updated>
    <published>2020-05-12T00:01:36Z</published>
    <link href="https://www.timheuer.com/blog/deploy-blazor-webassembly-applications-on-azure-using-github-actions-wasm/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="blazor" />
    <category term="aspnet" />
    <category term="dotnet" />
    <category term="github" />
    <category term="devops" />
    <content type="html">&lt;p&gt;Everyone!&amp;#160; As a part of my responsibilities on the Visual Studio team for .NET tools I try to spend the time using our products in various different ways, learning what pitfalls customers may face and ways to solve them.&amp;#160; I work a lot with Azure services and how to deploy apps and I’m a fan of GitHub Actions so I thought I’d share some of my latest experiments.&amp;#160; This post will outline the various ways as of this writing you can host Blazor WebAssembly (Wasm) applications.&amp;#160; We actually have some great documentation on this topic in the &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/blazor/webassembly?view=aspnetcore-3.1#standalone-deployment"&gt;Standalone Deployment&lt;/a&gt; section of our docs but I wanted to take it a bit further and demonstrate the GitHub Actions deployment of those options to Azure using the &lt;a href="https://github.com/Azure/actions"&gt;Azure GitHub Actions&lt;/a&gt;.&lt;/p&gt;  &lt;p&gt;Let’s get started!&lt;/p&gt;  &lt;p&gt;If you don’t know what Blazor Wasm is then you should read a bit about &lt;a href="https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor"&gt;What is Blazor&lt;/a&gt; on our docs.&amp;#160; Blazor Wasm enables you to write your web application front-end using C# with .NET running in the browser.&amp;#160; This is different than previous models that enabled you to write C# in the browser like Silverlight where a separate plug-in was required to enable this.&amp;#160; With modern web standards and browser, &lt;a href="https://webassembly.org/"&gt;WebAssembly&lt;/a&gt; has emerged as a standard to enable compilation of high-level languages for web deployment via browsers.&amp;#160; Blazor enables you to use C# and create your web app from front-end to back-end using a single language and .NET.&amp;#160; It’s great.&amp;#160; When you create a Blazor Wasm project and publish the output you are essentially creating a static 
site with assets that can be deployed to various places as there is no hard server requirement (other than being able to serve the content and mime types).&amp;#160; Let’s explore these options…&lt;/p&gt;  &lt;h2&gt;ASP.NET Core-hosted&lt;/h2&gt;  &lt;p&gt;For sure the simplest way to host Blazor Wasm would be to use an ASP.NET Core web app to serve it.&amp;#160; ASP.NET Core is cross-platform and can run pretty much anywhere.&amp;#160; If you are using C# for all your development, this is likely the scenario you’d be using anyway, and you can deploy your web app, which would contain your Blazor Wasm assets as well, to the same location (Linux, Windows, containers).&amp;#160; When creating a Blazor Wasm site you can choose this option in Visual Studio by selecting these options:&lt;/p&gt;  &lt;p&gt;&lt;img title="Blazor Wasm creation in Visual Studio" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Blazor Wasm creation in Visual Studio" src="https://storage2.timheuer.com/bwasm-hosting-vs1.png" width="1458" height="775" /&gt;&lt;/p&gt;  &lt;p&gt;or using the dotnet CLI with this command:&lt;/p&gt;  &lt;pre class="brush: bash;"&gt;dotnet new blazorwasm --hosted -o YourProjectName
&lt;/pre&gt;

&lt;p&gt;Both of these create a solution with a Blazor Wasm client app, ASP.NET Core Server app, and a shared (optional) library project for sharing code between the two (like models or things like that).&amp;#160; This is an awesome option and your deployment method would follow the same method of deploying the ASP.NET Core app you’d already be using.&amp;#160; I won’t focus on that here as it isn’t particularly unique.&amp;#160; One advantage of using this method is ASP.NET Core already has middleware to properly serve the pre-compressed Brotli/gzip formats of your Blazor Wasm assets from the server, reducing the payload across the wire.&amp;#160; You’ll see more of this in below options, but using ASP.NET Core does this automatically for you.&amp;#160; You can deploy your app to Azure App Service or really anywhere else easily. &lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;You’re deploying a ‘solution’ of your full app in one place, using the same tech to host the front/back end code&lt;/li&gt;

  &lt;li&gt;ASP.NET Core enables a set of middleware for you for Blazor routing and compression&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be Aware:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Basically billing.&amp;#160; Know that you would most likely host in an App Service plan rather than a serverless (consumption) model.&amp;#160; It’s not a negative, just an awareness.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Azure Storage&lt;/h2&gt;

&lt;p&gt;&lt;img src="https://github.com/timheuer/blazor-deploy-sample/workflows/.NET%20Core%20Build%20and%20Deploy%20(Storage)/badge.svg" /&gt;&lt;/p&gt;

&lt;p&gt;If you just have the Blazor Wasm site and are calling in to a set of web APIs, serverless functions, or whatever and you just want to host the Wasm app only then using Storage is an option.&amp;#160; I actually already wrote about this previously in this blog post &lt;a href="https://timheuer.com/blog/deploy-blazor-app-to-azure-using-github-actions"&gt;Deploy a Blazor Wasm Site to Azure Storage Using GitHub Actions&lt;/a&gt; so I won’t repeat it here…go over there and read that detail.&lt;/p&gt;

&lt;p&gt;Example GitHub Action Deployment to Azure Storage: &lt;a href="https://github.com/timheuer/blazor-deploy-sample/blob/master/.github/workflows/azure-storage-deploy.yml"&gt;azure-deploy-storage.yml&lt;/a&gt;&lt;/p&gt;
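&lt;p&gt;At a glance, the core of that workflow publishes the app and then copies the wwwroot output to the storage account’s $web container using the Azure CLI. Here is a sketch, where the storage account name and the credentials secret are placeholders for your own values:&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;    - name: Build App
      run: dotnet publish -c Release -o published

    - name: Azure Login
      uses: azure/login@v1
      with:
        creds: ${{ secrets.AZURE_CREDENTIALS }}

    - name: Upload to blob storage
      run: az storage blob upload-batch --account-name mystorageaccount -d '$web' -s published/wwwroot
&lt;/pre&gt;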

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Consumption-based billing for storage.&amp;#160; You aren’t paying for ‘on all the time’ compute&lt;/li&gt;

  &lt;li&gt;Blob-storage managed (many different tools to see the content)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be Aware:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Routing: errors will need to be routed to index.html as well and even though they will be ‘successful’ routes, it will still be an HTTP 404 response code.&amp;#160; This could be mitigated by adding Azure CDN in front of your storage and using more granular rewrite rules (but this is also an additional service)&lt;/li&gt;

  &lt;li&gt;Pre-compressed assets won’t be served as there is no middleware/server to automatically detect and serve these files.&amp;#160; Your app will be larger than it could be if serving the compressed brotli/gzip assets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Azure App Service (Windows)&lt;/h2&gt;

&lt;p&gt;&lt;img src="https://github.com/timheuer/blazor-deploy-sample/workflows/.NET%20Core%20Build%20and%20Deploy%20(AppSvc%20Win)/badge.svg" /&gt;&lt;/p&gt;

&lt;p&gt;You can directly publish your Blazor Wasm client app to Azure App Service for Windows.&amp;#160; When you publish a Blazor Wasm app, we provide a little web.config in the published output (unless you supply your own) and this contains some rewrite information for routing to index.html.&amp;#160; Since App Service for Windows uses IIS when you publish this output this web.config is used and will help your app routing.&amp;#160; You can also publish from Visual Studio using this method as well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://storage2.timheuer.com/bwasm-hosting-vs2.png"&gt;&lt;img title="Visual Studio publish dialog" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Visual Studio publish dialog" src="https://storage2.timheuer.com/bwasm-hosting-vs2_thumb.png" width="1537" height="656" /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;or using GitHub Actions easily using the Azure Actions.&amp;#160; Without the ASP.NET Core host you will want to provide IIS with better hinting on the pre-compressed files as well.&amp;#160; This is documented in our &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/blazor/webassembly?view=aspnetcore-3.1#brotli-and-gzip-compression"&gt;Brotli and Gzip documentation&lt;/a&gt; section and a sample web.config is also provided in this sample repo.&amp;#160; This web.config in the root of your project (not in the wwwroot) will be used during publish instead of the pre-configured one we would provide if there was none.&lt;/p&gt;

&lt;p&gt;Example GitHub Action Deployment to Azure App Service for Windows: &lt;a href="https://github.com/timheuer/blazor-deploy-sample/blob/master/.github/workflows/azure-app-svc-windows-deploy.yml"&gt;azure-app-svc-windows-deploy.yml&lt;/a&gt;&lt;/p&gt;
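&lt;p&gt;The shape of that workflow is similar to the others: publish the app, then hand the output (including the web.config) to the web app deploy action. A sketch, with the app name and publish profile secret as placeholders:&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;    - name: Build App
      run: dotnet publish -c Release -o published

    - name: Deploy to App Service
      uses: azure/webapps-deploy@v1
      with:
        app-name: my-blazor-app
        publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
        package: published
&lt;/pre&gt;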

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Easy deployment and default routing configuration provided in published output&lt;/li&gt;

  &lt;li&gt;Managed PaaS&lt;/li&gt;

  &lt;li&gt;Publish easily from Actions or Visual Studio&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be Aware:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Really just understanding your billing choices for the App Service&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Azure App Service (Linux w/Containers)&lt;/h2&gt;

&lt;p&gt;&lt;img src="https://github.com/timheuer/blazor-deploy-sample/workflows/.NET%20Core%20Build%20and%20Deploy%20(Container)/badge.svg" /&gt;&lt;/p&gt;

&lt;p&gt;If you like containers, you can put your Blazor Wasm app in a container and deploy that where supported, including Azure App Service Containers!&amp;#160; This enables you to encapsulate a little bit more in your own container image and also control the configuration of the server a bit more.&amp;#160; For Linux, you’d be able to specify a specific OS image you want to host your app and even supply the configuration of that server.&amp;#160; This is nice because we need to do a bit of that for some routing rules for the Wasm app.&amp;#160; Here is an example of a Docker file that can be used to host a Blazor Wasm app:&lt;/p&gt;

&lt;pre class="brush: bash; highlight: [8-19,23-24];"&gt;FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app

COPY . ./
WORKDIR /app/
RUN dotnet publish -c Release

FROM nginx:1.18.0 AS build
WORKDIR /src
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y git wget build-essential libssl-dev libpcre3-dev zlib1g-dev
RUN CONFARGS=$(nginx -V 2&amp;gt;&amp;amp;1 | sed -n -e 's/^.*arguments: //p') \
    git clone https://github.com/google/ngx_brotli.git &amp;amp;&amp;amp; \
    cd ngx_brotli &amp;amp;&amp;amp; git submodule update --init &amp;amp;&amp;amp; cd .. &amp;amp;&amp;amp; \
    wget -nv http://nginx.org/download/nginx-1.18.0.tar.gz -O - | tar -xz &amp;amp;&amp;amp; \
    cd nginx-1.18.0 &amp;amp;&amp;amp; \
    ./configure --with-compat $CONFARGS --add-dynamic-module=../ngx_brotli

WORKDIR nginx-1.18.0
RUN    make modules

FROM nginx:1.18.0 as final

COPY --from=build /src/nginx-1.18.0/objs/ngx_http_brotli_filter_module.so /usr/lib/nginx/modules/
COPY --from=build /src/nginx-1.18.0/objs/ngx_http_brotli_static_module.so /usr/lib/nginx/modules/

WORKDIR /var/www/web
COPY --from=build-env /app/bin/Release/netstandard2.1/publish/wwwroot .
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80 443
&lt;/pre&gt;

&lt;p&gt;In this configuration we’re using an image to first build/publish our Blazor Wasm app, then using the nginx:1.18.0 image as our base and building the ngx_brotli compression modules we want to use (lines 8-19,23-24).&amp;#160; We also need to supply some configuration information to the nginx server, so we provide an nginx.conf file that looks like this:&lt;/p&gt;

&lt;pre class="brush: bash;"&gt;load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;
events { }
http {
    include mime.types;
    types {
        application/wasm wasm;
    }
    server {
        listen 80;
        index index.html;

        location / {
            root /var/www/web;
            try_files $uri $uri/ /index.html =404;
        }

        brotli_static on;
        brotli_types text/plain text/css application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/vnd.microsoft.icon image/bmp image/svg+xml application/octet-stream application/wasm;
        gzip on;
        gzip_types      text/plain application/xml application/x-msdownload application/json application/wasm application/octet-stream;
        gzip_proxied    no-cache no-store private expired auth;
        gzip_min_length 1000;
        
    }
}
&lt;/pre&gt;

&lt;p&gt;Now when we deploy, the Docker image is built, pushed to Azure Container Registry, and then deployed to App Service for us.&amp;#160; In the above nginx.conf example, the first two lines load the modules we built in the Docker image previously.&lt;/p&gt;

&lt;p&gt;Example GitHub Action Deployment to Azure App Service using Linux Container: &lt;a href="https://github.com/timheuer/blazor-deploy-sample/blob/master/.github/workflows/azure-app-svc-linux-container.yml"&gt;azure-app-svc-linux-container.yml&lt;/a&gt;&lt;/p&gt;
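&lt;p&gt;At a high level, that workflow builds the image, pushes it to a registry, and then points App Service at the new image. A sketch follows, where the registry, image name, and secret names are placeholders (you would also need an azure/login step with your credentials before the deploy step):&lt;/p&gt;

&lt;pre class="brush: yaml;"&gt;    - name: Build and push image
      run: |
        docker login myregistry.azurecr.io -u ${{ secrets.ACR_USERNAME }} -p ${{ secrets.ACR_PASSWORD }}
        docker build -t myregistry.azurecr.io/blazorwasm:${{ github.sha }} .
        docker push myregistry.azurecr.io/blazorwasm:${{ github.sha }}

    - name: Deploy container to App Service
      uses: azure/webapps-container-deploy@v1
      with:
        app-name: my-blazor-container-app
        images: myregistry.azurecr.io/blazorwasm:${{ github.sha }}
&lt;/pre&gt;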

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Containers are highly customizable, allowing you some portability and flexibility&lt;/li&gt;

  &lt;li&gt;Easy deployment from Actions and Visual Studio (you can use the same publish mechanism in VS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be Aware&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Additional service here of using Azure Container Registry (or another registry to pull from)&lt;/li&gt;

  &lt;li&gt;Understanding your billing plan for App Service&lt;/li&gt;

  &lt;li&gt;Might need more configuration awareness to take advantage of pre-compressed assets (by default nginx requires an additional module for brotli and you’d have to rebuild it into nginx)&lt;/li&gt;

  &lt;ul&gt;
    &lt;li&gt;NOTE: The example repo has a sample configuration which adds brotli compression support for nginx&lt;/li&gt;
  &lt;/ul&gt;
&lt;/ul&gt;

&lt;h2&gt;Azure App Service (Linux)&lt;/h2&gt;

&lt;p&gt;&lt;img src="https://github.com/timheuer/blazor-deploy-sample/workflows/.NET%20Core%20Build%20and%20Deploy%20(AppSvc%20Linux)/badge.svg" /&gt;&lt;/p&gt;

&lt;p&gt;Similar to App Service for Windows, you could also just use App Service for Linux to deploy your Wasm app.&amp;#160; However, there is a notable workaround you have to apply right now in order to enable this method.&amp;#160; Primarily this is because there is no default configuration or ability to use the web.config like you can on Windows.&amp;#160; Because of this, if you use the Visual Studio publish mechanism it will appear as if the publish fails.&amp;#160; Once completed, when you navigate to your app you’d get a screen that looks like the default “Welcome to App Service” page shown when no content is there.&amp;#160; This is a bit of a false positive :-).&amp;#160; Your content/app DOES get published using this mechanism, but since we push the publish folder, the App Service Linux configuration doesn’t have the right rewrite defaults to navigate to index.html.&amp;#160; Because of this I’d recommend, if Linux is your desired host, that you use containers to achieve this.&amp;#160; However you CAN do this using GitHub Actions if you manipulate the content you push.&lt;/p&gt;

&lt;p&gt;Example GitHub Action Deployment to Azure App Service Linux: &lt;a href="https://github.com/timheuer/blazor-deploy-sample/blob/master/.github/workflows/azure-app-svc-linux-deploy.yml"&gt;azure-app-svc-linux-deploy.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Managed PaaS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be Aware:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Publishing from Visual Studio isn’t ideal (see the workaround noted above)&lt;/li&gt;

  &lt;li&gt;No pre-compressed assets will be served&lt;/li&gt;

  &lt;li&gt;Understand your billing plan for App Service&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;p&gt;Just like you have options with SPA frameworks or other static sites, for a Blazor Wasm client you have similar options as well.&amp;#160; The unique aspects of pre-compressed assets provide some additional config you should be aware of if you aren’t using ASP.NET Core hosted solutions, but with a small bit of effort you can get it working fine.&amp;#160; &lt;/p&gt;

&lt;p&gt;All of the samples I have listed here are provided in this repository: &lt;a href="https://github.com/timheuer/blazor-deploy-sample"&gt;timheuer/blazor-deploy-sample&lt;/a&gt;, and I would love to see any issues you may find.&amp;#160; I hope this helps summarize the documentation we have on configuring options in Azure to support Blazor Wasm.&amp;#160; What other tips might you have?&lt;/p&gt;

&lt;p&gt;Stay tuned for more!&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://www.timheuer.com/blog/skipping-ci-github-actions-workflows/</id>
    <title>Skipping CI in GitHub Actions Workflows</title>
    <updated>2020-01-29T20:28:11Z</updated>
    <published>2020-01-29T20:28:11Z</published>
    <link href="https://www.timheuer.com/blog/skipping-ci-github-actions-workflows/" />
    <author>
      <name>Tim Heuer</name>
      <email>tim@timheuer.com</email>
    </author>
    <category term="github" />
    <category term="devops" />
    <content type="html">&lt;p&gt;One of the things that I like about Azure DevOps Pipelines is the ability to make minor changes to your code/branch but not have full CI builds happening.&amp;#160; This is helpful when you are updating docs or README or things like that which don’t materially change the build output.&amp;#160; In Pipelines you have the built-in functionality to put some comments in the commit message that trigger (or don’t trigger rather) the CI build to stop.&amp;#160; The various ones that are supported are identified in ‘&lt;a href="https://docs.microsoft.com/azure/devops/pipelines/build/triggers?view=azure-devops&amp;amp;tabs=yaml#skipping-ci-for-individual-commits"&gt;Skipping CI for individual commits&lt;/a&gt;’ documentation.&lt;/p&gt;  &lt;p&gt;Today that functionality isn’t built-in to GitHub Actions, but you can add it as a base part of your workflows with the help of being able to get to the context of the commit before a workflow starts!&amp;#160; Here is an example of my workflow where I look for it:&lt;/p&gt;    &lt;pre class="brush: yaml; toolbar: false; highlight: [10];"&gt;name: .NET Core Build and Deploy

on:
  push:
    branches:
      - master

jobs:
  build:
    if: github.event_name == 'push' &amp;amp;&amp;amp; contains(toJson(github.event.commits), '***NO_CI***') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[ci skip]') == false &amp;amp;&amp;amp; contains(toJson(github.event.commits), '[skip ci]') == false
    name: Build Package 
    runs-on: ubuntu-latest
&lt;/pre&gt;



&lt;p&gt;You can see at Line 10 that I’m looking at the commit message text for: ***NO_CI***, [ci skip], or [skip ci].&amp;#160; If any of these are present, the job does not run.&amp;#160; It’s as simple as that!&amp;#160; Here is an example of my last commit, where I was just updating the repo to include the build badge:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of a commit message on GitHub" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of a commit message on GitHub" src="https://storage2.timheuer.com/SNAG_Program-0001.png" width="826" height="428" /&gt;&lt;/p&gt;

&lt;p&gt;And you can see in the workflows that it was not run:&lt;/p&gt;

&lt;p&gt;&lt;img title="Screenshot of workflow status on GitHub" style="border: 0px currentcolor; border-image: none; margin-right: auto; margin-left: auto; float: none; display: block; background-image: none;" border="0" alt="Screenshot of workflow status on GitHub" src="https://storage2.timheuer.com/SNAG_Program-0002.png" width="1270" height="462" /&gt;&lt;/p&gt;

&lt;p&gt;A helpful little tip to add to your workflows to give you that flexibility!&amp;#160; Hope this helps!&lt;/p&gt;</content>
  </entry></feed>