Saturday, November 8, 2025

Quick Bytes: Program.cs For Serving Static Files In A .NET Core Container App

If you're creating a containerized application from an ASP.NET Core Empty project and you just want to serve static files, your Program.cs can be as simple as this:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseDefaultFiles();               // rewrites "/" to the default document (e.g., /index.html)
app.UseStaticFiles();                // serves the files under wwwroot
app.MapFallbackToFile("index.html"); // serves index.html for any request no other endpoint matched
app.Run();

UseDefaultFiles() and MapFallbackToFile() are similar but not quite the same: the former rewrites requests for a directory to a default document like index.html, while the latter serves the given file for any request that no other endpoint matched--which is why it shows up more in SPAs (though it can also point to a general Not Found file).

For a containerized app serving static files, the project directory would look similar to the following:
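
A sketch, with the project name and asset files as placeholders--the key piece is that the static assets live under wwwroot, the default web root:

MyStaticSite/
├── Dockerfile
├── MyStaticSite.csproj
├── Program.cs
└── wwwroot/
    ├── index.html
    ├── css/
    │   └── site.css
    └── js/
        └── site.js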

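And to round out the container side, a minimal Dockerfile sketch (assuming .NET 8 images and the placeholder project name above):

# build stage: compile and publish the app
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyStaticSite.csproj -c Release -o /app/publish

# runtime stage: only the published output, on the smaller ASP.NET image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyStaticSite.dll"]
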
An Open Letter To Power BI

Dear Power BI,

I just would like to say (again) that for all the good things you seem to be able to do (because you can be so intuitive and easy), I really wish there was a way to get administrative/management/governance information from you in one simple API.

Not one API for this, a scanner for that, another program for the other thing because this and that didn't have it...and yes, I understand what I'm starting to sound like, but in all reality--you've pushed me to it.

I mean, why can't I just get all the usage statistics, connection information, dataset, report, workspace, and dashboard information, queries--all of it, not just some of it (because we all know some information just doesn't show up)--in one simple, easy-to-use API?

Please?

Okay. Thank you. That's really all I wanted to say for now...because if I say any more I think they'll come to get me... 

Best (or you know, at least the best I can offer),
Semi-Curmudgeonly Developer Guy 

First Try Fridays: Claude Code Was Perfect...Until It Wasn't + General AI Tool Thoughts

Like others, I've been integrating AI into my development and administrative purview using a number of different tools, settling on a mix that includes GitHub Copilot, ChatGPT, Claude Desktop, Copilot, and Microsoft 365 Copilot.

There are also the APIs, Agents, GPTs, all the MCP technologies, and the tools to create them. They all serve different purposes, and nothing is one-size-fits-all.

When I think about it, I'm looking at AI tools from a singular developer perspective, a team perspective, an organizational perspective, and also from work, community, and personal perspectives.

It's seeing what the tools can do. How do they increase my own productivity and creativity? What if they were rolled out to more developers? What are the risks from an organizational perspective? How can autonomous agents, AI chatbots, MCP services, et al. be utilized for different projects and services in an effective way (cost, performance, manageability, and more)?

What is the gap between hype and real world use?

I like to think I have a good grasp on general tools like Claude Desktop, ChatGPT, and the Copilot(s). They do increase productivity, creativity, and sometimes just the sheer enjoyment of programming and developing--in part because you can increase your knowledge quickly, allowing you to build faster, sometimes doing things you weren't sure were possible.

Pretty great! 

And all made possible because one AI tool--a GitHub Copilot, or Claude Desktop, or ChatGPT--just one of those can fill 1-3 sets of tasks (or more) that could be done asynchronously by 3 developers, or done synchronously by 1 developer (still very fast, maybe almost as fast, depending on the task).

It's definitely more synchronous when using an external client like ChatGPT or Claude Desktop, or any of the Copilot flavors--with the exception of GitHub Copilot, because of its direct access to your files and context, although it's still mostly confined to the solution/project opened in VS.

Overall, I've been very happy with how AI tools can help create unit tests from code, build documentation from code, scaffold a full program, or hand me large pieces of truly functional code.

In that way Claude Code intrigued me, because it sat in the middle between chat services and full back-end solutions. Some developers I knew were using it. Anthropic was bubbling to the top more in general AI conversations. At the same time, I wanted to work with something a little more disconnected from my IDE than GitHub Copilot, but that could still interact with my files directly--versus downloading and moving files, copying the source, etc.

Claude Code seemed to be what I was looking for.

After a quick install I was ready to go!
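
For reference, the install really is about one line--this is the npm route, and the directory name below is just an example:

npm install -g @anthropic-ai/claude-code   # install the CLI globally
cd my-solution                             # hypothetical project directory
claude                                     # start an interactive session there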
 

I read some of the quick docs and wanted to see what was possible. At first I started with "analyze my solution", but as it got going I wasn't quite sure what it was trying to do. It was reading my files and then also looked like it was building projects, so I stopped it and closed down the terminal. To be honest, I wasn't sure what its plan was for what I was asking.

So I started again, this time with /init.


With /init I knew more about what it was supposed to do and what I would have at the end of the operation, and the next thing you know I had a new CLAUDE.md file and an overview of the solution. Before I knew it, I was asking it questions about features in a specific directory and then asking it to make upgrades.
 
I was cautious, checking over the edits and making sure to approve what I liked and reject what I did not.
 
A few more questions and answers, and becoming more comfortable with what I had seen (much like Claude Desktop, where the answers are pretty good), and I was off to the races.
 
Please give me some unit tests based on these new features.
 
Please update this documentation file with the new features. 
 
Can you check my documentation files and tell me what's missing?
  
At that point it's almost like I was in The Matrix--the code and questions in the terminal windows going faster and faster as it updated my files.
 
And then... 
 
It stopped.
 
Nothing. 
 
Where was Neo?!!!
 
Out of tokens. 
 
I went to my Claude Desktop (I have the Pro subscription) and checked my usage, and it said I had used up 100%. I would have to wait until morning for it to refresh before I could use anything (Claude Desktop or Claude Code).
 
I had been working with it for maybe two to two and a half hours--just enough time to start getting into it and enjoying what I was seeing, and also enough time to have questions and make some observations.

 

What Worked and What Didn't


Some of my general thoughts on using Claude Code that first time:
 
1. Tokens To Commands: I don't quite understand the relationship between commands and how many tokens they will use. To me there is no clear path to calculating those costs the way I can for general cloud services. For instance, if I run "analyze the code" vs. "/init" or "document my code"--what truly happens, and what does it cost?

At the same time, because I think it could change from ask to ask and depend on the code base and project, there's no really good, repeatable way to find that out innately (albeit there are ways you can try to judge it, just not as precisely as there should be).
 
Some of you might be saying "But you can get reports for those"--true, but not for individual accounts.
 
Others might say, "Use /cost" but that yields "With your Claude Pro subscription, no need to monitor cost — your subscription includes Claude Code usage".
 
And still, I know there are docs for token counting and pricing for different types of services, but for Claude Code I think it could be more apparent.
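
For what it's worth, the API side does expose token counting--here's a minimal sketch with Anthropic's TypeScript SDK (the model name is illustrative, and note this only counts what a prompt would cost as input, not what a whole Claude Code session will burn):

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Count the input tokens a prompt would consume, without actually sending it
const result = await client.messages.countTokens({
  model: "claude-sonnet-4-5", // illustrative model name
  messages: [{ role: "user", content: "analyze my solution" }],
});

console.log(result.input_tokens);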
 
2. Did I Ask For That? In one part of the session I was asking Claude Code to look at the unit tests, maybe 20 files and 220 tests or so, to find anything that was a feature being tested but was missing from the documentation.
 
What it initially started to do was go through and execute the tests file by file, working through each test, when in reality it just needed to read the files and interpret the expected/failed results to make internal lists and compare them to the documentation. It didn't need to run the tests, and I can't say for certain what that cost in terms of tokens.

Forget for a moment that someone may do it differently, or that there was a better way or a better prompt--for me it was doing more work than I thought it should, making it more complicated, and also increasing cost (or at least I think so).
 
This didn't happen all the time, and the testing was the most extreme case of it--and it's not something a human developer can't or won't do either. But it still needs to be taken into consideration.
 
3. Terminal Freedom: Something I noticed the first time using it was just how light it felt. How easy it seemed to work in a terminal window vs inside an IDE. I really did like that. 
 
4. Overall The Code Was Good: Like all AI code, at least in my uses, there has to be someone guiding it and looking it over. It really is like a new developer who comes onto your team and is learning both the environment and development itself. The code was good, though, and I found myself liking the speed and quality.

 

Some General Thoughts

 

On trusting code and answers--fewer hallucinations, less running down a solution that's not possible--I put ChatGPT, GitHub Copilot, and Claude Desktop all around the same. With GPT-5/4o, Sonnet 4.5, etc., one doesn't outshine the others in general development tasks, from my experience right now.
 
As an example, I was working on a project and while Claude Desktop gave me a great solution--how to implement it, documentation--it was also not secure and only partially worked. Only because I understood the overall system and was able to ask the right questions--through general searching and ChatGPT--did I come away with a working solution.
 
Claude started, ChatGPT finished. 
 
At the same time, while I like Claude Code and use Claude Desktop, I have run out of tokens for both of them in the Pro subscription. 
 
For GitHub Copilot and ChatGPT, both also Pro subscriptions, I've never had that issue, and I can go for long blocks of development time. To be fair, that means I don't know all of the answers on tokens and pricing for ChatGPT and GitHub Copilot, because I haven't had to.
 
I think overall, as I evaluate Claude Code and other tools, it still leads me to see that this space, like others in the past, is just beginning.
 
It's promising, but also needs to understand itself better. Measure itself better. Give us the tools to plan and forecast better.
 
With that, maybe I can get back to the Matrix.

Wednesday, June 4, 2025

Git Reset Or Git Revert?

A few months ago I had to clean up some branches from an older project that was getting new features, and I needed to roll back one of the branches to a specified commit.

In my mind I really had two choices overall: git reset and git revert. Here are the main descriptions for each:

NAME

git-revert - Revert some existing commits

SYNOPSIS

git revert [--[no-]edit] [-n] [-m <parent-number>] [-s] [-S[<keyid>]] <commit>…​
git revert (--continue | --skip | --abort | --quit)

DESCRIPTION

Given one or more existing commits, revert the changes that the related patches introduce, and record some new commits that record them. This requires your working tree to be clean (no modifications from the HEAD commit).

Note: git revert is used to record some new commits to reverse the effect of some earlier commits (often only a faulty one). If you want to throw away all uncommitted changes in your working directory, you should see git-reset[1], particularly the --hard option. If you want to extract specific files as they were in another commit, you should see git-restore[1], specifically the --source option. Take care with these alternatives as both will discard uncommitted changes in your working directory.

NAME

git-reset - Reset current HEAD to the specified state

SYNOPSIS

git reset [-q] [<tree-ish>] [--] <pathspec>…​
git reset [-q] [--pathspec-from-file=<file> [--pathspec-file-nul]] [<tree-ish>]
git reset (--patch | -p) [<tree-ish>] [--] [<pathspec>…​]
git reset [--soft | --mixed [-N] | --hard | --merge | --keep] [-q] [<commit>]

DESCRIPTION

In the first three forms, copy entries from <tree-ish> to the index. In the last form, set the current branch head (HEAD) to <commit>, optionally modifying index and working tree to match. The <tree-ish>/<commit> defaults to HEAD in all forms.


History: To Be Seen Or Not To Be Seen. That Is Thy Question

While there's a lot to look at, and there are other considerations to think about, for me it mainly came down to whether I wanted a clean commit history or to maintain the older history of commits as well.

Since I didn't want to keep the history--there was no need to maintain any commit history after the commit I wanted to be the last one shown (before the new features were added)--I stayed with what I had done in the past: good old git reset:

git reset --hard <commit>   # move the branch back to <commit>, discarding everything after it
git push --force            # rewrite the remote branch to match (--force-with-lease is the safer variant)

It's clean. It gets me what I want. And it doesn't have the overhead of managing conflicts or multiple reverts, which can get unwieldy because revert adds another commit for each rollback.

And absolutely--if not coordinated among multiple users it can lead to problems, because it essentially rewrites history--and there are use cases where you may need or want to keep that history alive.
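
If you do need to keep it alive, the revert route would look something like this--reverting everything after <commit> while recording it as one new commit:

git revert --no-commit <commit>..HEAD   # stage the reversal of every commit after <commit>
git commit -m "Roll back to <commit>"   # record the rollback as a single new commit
git push                                # no force needed--history is preserved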

But if there are bad commits that need to go?

git reset for the win!

Monday, June 2, 2025

So About That NY's Resolution...

And then, right before I know it, three months have passed, and there's not one new blog post written.

But, while I could be a part of the New Year's resolutions statistics machine, this post only goes to show that I won't be going down without a fight--

I can feel those blog posts coming...

Saturday, February 8, 2025

Why I Decided To Use Vitest Instead Of Jest (For My Purposes At Least)

I've been working on a new project that's vanilla JavaScript and HTML, and I really needed (and wanted) to implement good JavaScript testing (more on that later). While I will need to cover more types of testing, I first wanted something easy to set up that felt familiar, like back-end unit testing in other languages.
 
You may be asking yourself: what did you, or your teams, or teams you've been on, use in the past for JavaScript testing?

I am going to lift the banner of shame:
  • For React/Node.js, some did use Jest. I saw it but didn't always work on those projects in depth--I was probably working more on the back-end for those, albeit I did pick up those skills--but I trended toward Vue.js...a topic for another post.

  • With ASP.NET/.NET MVC/static HTML (basically fill in the blank) and JavaScript libraries like jQuery...sometimes absolutely no unit testing whatsoever, because a lot of it was basic fetch calls--and yes, some of it could be fairly complex, with JavaScript classes, modularization, etc.--but you had a browser debugger, and you had integration testing.

    Sure, you made it easy to read and organized it with good library and folder structures. But testing came second or third to all of that.

  • And then there were good old-fashioned custom JavaScript tests, for when you really needed to make sure it wasn't going to blow up if someone changed that code.

After years of projects in different languages and frameworks, the focus has been more on the back-end (especially in my own experience). And absolutely, being a full-stack developer, you have to know CSS, HTML, JavaScript, and client-side frameworks--be modern, and I love that--tweaking the CSS, making my JavaScript more performant, doing more with less (no pun intended, and I do not...).
 
But testing? Testing has just never been at the forefront of my development.
 
And while I could be shamed--it's just been an opportunity to level up and do what I know I should be doing, as well as to pass that knowledge on to other developers who don't have it in their back pocket, or don't have those resources in their own companies or organizations (because not all places have dedicated front-end developers who focus solely on the client side--I have been lucky enough to have worked at places like that, even if I did not pick up everything they were giving out at the time).
 
So Now That The Above Is Done

I did not end up using Jest because of its experimental support for JavaScript modules (aka ECMAScript Modules, aka ESM), and because I had heard that Vitest was a little faster.
 
And Jest failed me. 

Because I tried it first, which is when I found out it didn't have true support for ESM.
 
When I tried Jest (and just thought it would work, because I did not RTM on that part), I thought to myself, "And this is why I'm not doing this a lot--because it should be much easier than this."
 
And yes, I should have read the docs on that part before I chose it--but it was easy and quick to try, versus choosing a whole back-end framework, where you should do your due diligence before choosing and implementing.
 
So Vitest it was and it worked flawlessly.
 
Make the test files, do the installs, update my package.json, make a vitest.config, open up a terminal in Visual Studio, and I was off with a simple "npm test".
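
Roughly, that whole setup looks like this (a sketch--the file names and the sum module are placeholders):

npm install --save-dev vitest

// package.json -- wire up "npm test"
"scripts": {
  "test": "vitest"
}

// vitest.config.js -- minimal config; "node" is the default environment,
// switch to "jsdom" if you're testing DOM code
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {},
});

// sum.test.js -- a placeholder test for a hypothetical sum.js module
import { describe, it, expect } from "vitest";
import { sum } from "./sum.js";

describe("sum", () => {
  it("adds two numbers", () => {
    expect(sum(2, 3)).toBe(5);
  });
});

From there, "npm test" picks up anything matching *.test.js out of the box.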
 
And it was exactly what I wanted. 
 
The output--seeing the tests, what failed, what passed--was exactly what I was used to (overall) from a back-end coding perspective. And I didn't need to change any of my code or rely on a testing framework that was still experimental for ESM.
 
Since then I've read up more on Vitest (the docs are great) and how it's really gaining support and usage (and yes, just in case you were wondering, I am using Vite for my build as well).
 
I feel like I made a good choice, and more importantly--now I'm squarely in the game when it comes to Javascript and testing.
 
Vite and Vitest for the win!

Sunday, January 19, 2025

NY Resolution: Blog More

I realize I have said this before, and while it has made me post over the last two calendar years (albeit still not a lot), I will say it again, as a New Year's resolution:

I will blog more.

I will blog more.

I will blog more (hopefully...?).