Saturday, November 8, 2025

Quick Bytes: Program.cs For Serving Static Files In A .NET Core Container App

If you're creating a containerized application from an ASP.NET Core Empty project and you just want to serve static files, your Program.cs can look like this:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Rewrite directory requests (like "/") to a default document such as index.html
app.UseDefaultFiles();
// Serve files out of wwwroot
app.UseStaticFiles();
// Serve index.html for any request no other endpoint matched
app.MapFallbackToFile("index.html");
app.Run();

UseDefaultFiles() and MapFallbackToFile(), while similar, are just a little different: UseDefaultFiles() rewrites a directory request (like "/") to the default document before the static file middleware runs, while MapFallbackToFile() serves the file when no other endpoint matched the request. The latter is used a little more in SPAs (but it can also direct to a general Not Found file as well).
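
For example, a minimal sketch of the Not Found variation (404.html here is just a placeholder name):

// SPA-style: any unmatched route serves the app shell
app.MapFallbackToFile("index.html");

// Or, for a plain static site, fall back to a general not-found page instead
// (note: the fallback file is still served with a 200 status unless you handle that separately)
app.MapFallbackToFile("404.html");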

For a containerized app serving static files, the project directory would look similar to the following (project and file names here are just placeholders):
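
MyStaticSite/
    Dockerfile
    MyStaticSite.csproj
    Program.cs
    wwwroot/
        index.html
        css/
            site.css
        js/
            site.js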

An Open Letter To Power BI

Dear Power BI,

I would just like to say (again) that for all the good things you seem to be able to do (because you can be so intuitive and easy), I really wish there were a way to get administrative/management/governance information from you in one simple API.

Not one API for this, a scanner for that, another program for the other thing because this and that didn't have it...and yes, I understand what I'm starting to sound like, but in all reality--you've pushed me to it.

I mean, why can't I just get all the usage statistics, connection information, dataset, report, workspace, and dashboard information, queries--all of it, not just some of it (because we all know some information just doesn't show up)--in one simple, easy-to-use API?

Please?

Okay. Thank you. That's really all I wanted to say for now...because if I say any more I think they'll come to get me... 

Best (or you know, at least the best I can offer),
Semi-Curmudgeonly Developer Guy 

First Try Fridays: Claude Code Was Perfect...Until It Wasn't + General AI Tool Thoughts

Like others, I've been integrating AI into my development and administrative purview using a number of different tools, settling on a mix that includes GitHub Copilot, ChatGPT, Claude Desktop, Copilot, and Microsoft 365 Copilot.

There are also the APIs, Agents, GPTs, all the MCP technologies, and the tools to create them. They all serve different purposes, and nothing is one-size-fits-all.

When I think about it, I'm looking at AI tools from a singular developer perspective, a team perspective, an organizational perspective, and also from a work, community, and personal perspective.

It's seeing what the tools can do. How do they increase my own productivity and creativity? What if they were rolled out to more developers? What are the risks from an organizational perspective? How can autonomous agents, AI chatbots, MCP services, et al. be utilized for different projects and services in an effective way (cost, performance, manageability, and more)?

What is the gap between hype and real world use?

I like to think I have a good grasp on general tools like Claude Desktop, ChatGPT, and the Copilot(s). They do increase productivity, creativity, and sometimes just the sheer enjoyment of programming and developing, in part because you can increase your knowledge quickly, allowing you to build faster, sometimes doing things you weren't sure were possible.

Pretty great! 

And it's all made possible because one AI tool--a GitHub Copilot, or Claude Desktop, or ChatGPT--just one of those can fill 1-3 sets of tasks (or more) that could be done asynchronously by 3 developers, or done synchronously by 1 developer (still very fast, maybe almost as fast depending on the task).

It's definitely more synchronous if you're using an external client like ChatGPT or Claude Desktop, or any of the Copilot flavors, with the exception of GitHub Copilot, which has direct access to your files and context (although it's still mostly confined to the solution open in VS).

Overall, I've been very happy with how AI tools can help create unit tests from code, build documentation from code, scaffold a full program, or give me large pieces of truly functional code.

In that way Claude Code intrigued me because it sat in the middle between chat services and full back-end solutions. Some developers I knew were using it. Anthropic was bubbling to the top more in general AI conversations. At the same time, I wanted to work with something a little more disconnected from my IDE than GitHub Copilot, but that could still interact with my files directly, versus downloading and moving files, copying the source, etc.

Claude Code seemed to be what I was looking for.

After a quick install I was ready to go!
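
(The install itself is a one-liner if you go the npm route from Anthropic's docs: npm install -g @anthropic-ai/claude-code.)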
 

I read some of the quick docs and wanted to see what was possible, and at first I started with "analyze my solution"--but as it started, I wasn't quite sure what it was trying to do. It was reading my files and then it also looked like it was building projects, so I stopped it and closed down the terminal. To be honest, I wasn't sure what its plan was for what I was asking.

So I started again, this time with /init.


With /init I knew more about what it was supposed to do and what I would have at the end of the operation, and the next thing you know I had a new .md file with an overview of the solution. Before I knew it, I was asking it questions about features in a specific directory and then asking it to make upgrades.
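
If you haven't seen one of these files, it's basically a project brief Claude reads for context on later runs--roughly this shape (an illustrative outline, not my actual file; names are placeholders):

# CLAUDE.md
## Project Overview
What the solution is and what each project does.
## Build and Test
dotnet build MySolution.sln
dotnet test
## Architecture Notes
Key directories, patterns, and conventions to follow.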
 
I was cautious, checking over the edits and making sure to approve what I liked and reject what I did not.
 
A few more questions and answers later, becoming more comfortable with what I was seeing (much like Desktop, where the answers are pretty good), I was off to the races.
 
Please give me some unit tests based on these new features.
 
Please update this documentation file with the new features. 
 
Can you check my documentation files and tell me what's missing?
  
At that point it was almost like I was in The Matrix--the code and questions in the terminal window going faster and faster as it updated my files.
 
And then... 
 
It stopped.
 
Nothing. 
 
Where was Neo?!!!
 
Out of tokens. 
 
I went to my Claude Desktop (I have the Pro subscription) and checked my usage: it said I had used up 100%. I would have to wait until morning for a refresh before I could use anything (Claude Desktop or Claude Code).
 
I was working with it for maybe two to two and a half hours--just enough time to start getting into it and enjoying what I was seeing, and also enough time to have questions and make some observations.

 

What Worked and What Didn't


Some of my general thoughts on using Claude Code that first time.
 
1. Tokens To Commands: I don't quite understand the relationship between commands and how many tokens they will use. To me there is no clear path to calculating those costs the way I can for general cloud services. For instance, if I run "analyze the code" vs. /init or "document my code"--what truly happens, and what does it cost?

At the same time, because it can change from ask to ask and depends on the code base and project, there's no really good, repeatable way to find that out innately (there are ways you can try to judge it, but they're not as precise as they should be).
 
Some of you might be saying "But you can get reports for those"--true, but not for individual accounts.
 
Others might say, "Use /cost" but that yields "With your Claude Pro subscription, no need to monitor cost — your subscription includes Claude Code usage".
 
And still, I know there are docs for token counting and pricing for the different types of services, but for Claude Code, I think it could be more apparent.
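
For the API side of the house, at least, you can count tokens before you spend them. Here's a minimal C# sketch against the documented count_tokens endpoint (this is API-key usage only--Pro subscription usage isn't metered this way--and the model alias and response shape are taken from the public docs):

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

// Assumes ANTHROPIC_API_KEY is set in the environment
var http = new HttpClient();
http.DefaultRequestHeaders.Add("x-api-key", Environment.GetEnvironmentVariable("ANTHROPIC_API_KEY"));
http.DefaultRequestHeaders.Add("anthropic-version", "2023-06-01");

// Ask how many input tokens a prompt would consume before actually running it
var response = await http.PostAsJsonAsync(
    "https://api.anthropic.com/v1/messages/count_tokens",
    new
    {
        model = "claude-sonnet-4-5",
        messages = new[] { new { role = "user", content = "analyze the code" } }
    });

var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
Console.WriteLine($"input_tokens: {json.RootElement.GetProperty("input_tokens").GetInt32()}");

It doesn't answer what /init costs inside Claude Code, but it's the kind of visibility I'd like to have there.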
 
2. Did I Ask For That? In one part of the session I was asking Claude Code to look at the unit tests, maybe 20 files and 220 tests or so, to see what was being tested as a feature but was missing from the documentation.
 
What it initially started to do was go through and execute the tests file by file, working through each test, when in reality it just needed to look at the files and interpret the expected/failed results to make internal lists and compare them to the documentation. It didn't need to run the tests, and I can't say for certain what that cost in terms of tokens.

Forget for a moment that someone might do it differently or that there was a better way or prompt--for me it was doing more work than I thought it should, making it more complicated, and also increasing cost (or at least I think so).
 
This didn't happen all the time, and the testing was the most extreme case of it--and it's not something a human developer can't or won't do either. But it still needs to be taken into consideration.
 
3. Terminal Freedom: Something I noticed the first time using it was just how light it felt--how easy it seemed to work in a terminal window versus inside an IDE. I really did like that.
 
4. Overall The Code Was Good: Like all AI code, at least in my uses, there has to be someone guiding it and looking it over. It really is like a new developer who comes onto your team and is learning both the environment and development itself. The code was good, though, and I found myself liking the speed and quality.

 

Some General Thoughts

 

On trusting code and answers--fewer hallucinations, less running down a solution that's not possible--I put ChatGPT, GitHub Copilot, and Claude Desktop all around the same. With GPT-5/4o, Sonnet 4.5, etc., one doesn't outshine the others in general development tasks, in my experience right now.
 
As an example, I was working on a project, and while Claude Desktop gave me a great solution--how to implement it, documentation--it was also not secure and only partially worked. Only because I understood the overall system and was able to ask the right questions--through general searching and with ChatGPT--did I come away with a working solution.
 
Claude started, ChatGPT finished. 
 
At the same time, while I like Claude Code and use Claude Desktop, I have run out of tokens for both of them in the Pro subscription. 
 
For GitHub Copilot and ChatGPT, both of them Pro subscriptions as well, I've never had that issue, and I can go for long blocks of development time. To be fair, that also means I don't know all of the answers on tokens and pricing for ChatGPT and GitHub Copilot, because I haven't had to.
 
I think overall, as I evaluate Claude Code and other tools, it still leads me to believe this space, like others in the past, is just beginning.
 
It's promising, but also needs to understand itself better. Measure itself better. Give us the tools to plan and forecast better.
 
With that, then maybe I can get back to the Matrix.