I read some of the quick docs and wanted to see what was possible. At first I started with "analyze my solution", but as it ran I wasn't quite sure what it was trying to do. It was reading my files and then it also looked like it was building projects, so I stopped it and closed the terminal. To be honest, I wasn't sure what its plan was for what I was asking.
So I started again, this time with /init.
With /init I had a better idea of what it was supposed to do and what I would have at the end of the operation, and next thing you know I had a new .md file and an overview of the solution. Before I knew it, I was asking it questions about features in a specific directory and then asking it to make upgrades.
I was cautious, checking over the edits and approving what I liked and rejecting what I did not.
A few more questions and answers later, growing more comfortable with what I was seeing (much like Desktop, where the answers are pretty good), I was off to the races.
Please give me some unit tests based on these new features.
Please update this documentation file with the new features.
Can you check my documentation files and tell me what's missing?
At that point it was almost like I was in The Matrix--the code and questions in the terminal window going faster and faster as it updated my files.
And then...
It stopped.
Nothing.
Where was Neo?!!!
Out of tokens.
I went to Claude Desktop (I have the Pro subscription), checked my usage, and it said I had used 100%. I would have to wait until morning for it to refresh before I could use anything (Claude Desktop or Claude Code).
I was working with it for maybe two to two and a half hours--just enough time to start getting into it and enjoying what I was seeing, and also enough time to have questions and make some observations.
What Worked and What Didn't
Some of my general thoughts on using Claude Code that first time.
1. Tokens To Commands: I don't quite understand the relationship between commands and how many tokens they will use. There is no clear path to calculating those costs the way I can for general cloud services. For instance, if I run "analyze the code" vs. /init or "document my code"--what actually happens, and what does it cost?
At the same time, because it likely changes from ask to ask and depends on the codebase and project, there's no good, repeatable way to find that out up front (there are ways you can try to judge it, but nothing as precise as it should be).
Some of you might be saying "But you can get reports for those"--true, but not for individual accounts.
Others might say, "Use /cost", but that yields: "With your Claude Pro subscription, no need to monitor cost — your subscription includes Claude Code usage".
And still, I know there are docs for token counting and for pricing across the different types of services, but for Claude Code, I think it could be more apparent. (A rough sketch of the kind of estimate I'd like to be able to make is below.)
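To show what I mean, here's a minimal sketch using the token-counting endpoint from those docs. This is my own illustration, not how Claude Code works internally: the model ID and the per-token price are placeholders, the exact SDK method name may differ by version, and it only estimates the input tokens of a single prompt, not the file reads and tool calls Claude Code performs on its own.

```python
# A rough "what might this prompt cost?" estimate using the Anthropic
# token-counting endpoint. Assumptions: the anthropic Python SDK is installed,
# ANTHROPIC_API_KEY is set, and the model ID and price below are placeholders
# to illustrate the math -- check the current docs and pricing page.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = "Analyze the code in src/ and summarize the main features."

# Count input tokens without actually running the model.
count = client.messages.count_tokens(
    model="claude-sonnet-4-5",  # placeholder model ID
    messages=[{"role": "user", "content": prompt}],
)

INPUT_PRICE_PER_MTOK = 3.00  # hypothetical USD per million input tokens
estimate = count.input_tokens / 1_000_000 * INPUT_PRICE_PER_MTOK
print(f"input tokens: {count.input_tokens}, estimated cost: ${estimate:.6f}")
```

Even a back-of-the-envelope view like this would help with planning, and it's the kind of visibility I'd like to see surfaced in Claude Code itself.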
2. Did I Ask For That? In one part of the session I asked Claude Code to look at the unit tests, maybe 20 files and 220 tests or so, to see which features being tested were missing from the documentation.
What it initially started to do was execute the tests file by file and work through each test, when in reality it just needed to read the files, interpret the expected/failed results, build internal lists, and compare them to the documentation. It didn't need to run the tests (a rough sketch of the "just read them" idea follows this point), and I can't say for certain what that cost in tokens.
Forget for a moment that someone might do it differently or that there was a better way or prompt--for me it was doing more work than I thought it should, making it more complicated, and also increasing the cost (or at least I think so).
This didn't happen all the time, and the testing was the most extreme case of it--and it's something a human developer might do too. But it still needs to be taken into consideration.
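For comparison, here's roughly what I pictured it doing instead: a quick static pass that lists the test names per file so they can be compared against the documentation, without executing a single test. The directory layout and naming conventions are assumptions for the sake of the example, not my actual project.

```python
# Collect test names by reading the files, not by running them.
# Assumes a conventional pytest-style layout (tests/ directory, test_*.py
# files, test_* function names) purely for illustration.
import ast
from pathlib import Path

def collect_test_names(test_dir: str = "tests") -> dict[str, list[str]]:
    """Map each test file to the test function names defined in it."""
    results: dict[str, list[str]] = {}
    for path in Path(test_dir).rglob("test_*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        results[str(path)] = [
            node.name
            for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
            and node.name.startswith("test_")
        ]
    return results

if __name__ == "__main__":
    for file, tests in collect_test_names().items():
        print(f"{file}: {len(tests)} tests")
```

That list of names is all that's needed to cross-check against the documentation; nothing has to be executed.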
3. Terminal Freedom: Something I noticed the first time using it was just how light it felt. How easy it seemed to work in a terminal window vs inside an IDE. I really did like that.
4. Overall The Code Was Good: Like all AI code, at least in my use, there has to be someone guiding it and looking it over. It really is like a new developer who comes onto your team and is learning both the environment and development itself. The code was good, though, and I found myself liking the speed and the quality.
Some General Thoughts
In terms of trusting the code and answers--fewer hallucinations, less chasing a solution that isn't possible--I put ChatGPT, GitHub Copilot, and Claude Desktop all around the same. With GPT-5/4o, Sonnet 4.5, etc., none of them outshines the others in general development tasks, in my experience right now.
As an example, I was working on a project where Claude Desktop gave me a great solution--how to implement it, documentation--but it was also not secure and only partially worked. Only because I understood the overall system and was able to ask the right questions--through general searching and with ChatGPT--did I come away with a working solution.
Claude started, ChatGPT finished.
At the same time, while I like Claude Code and use Claude Desktop, I have run out of tokens for both of them in the Pro subscription.
With GitHub Copilot and ChatGPT, both also Pro subscriptions, I've never had that issue, and I can go for long blocks of development time. To be fair, that also means I don't know all of the answers on tokens and pricing for ChatGPT and GitHub Copilot, because I haven't had to.
I think overall, as I evaluate Claude Code and other tools, it reminds me that this space, like others in the past, is just beginning.
It's promising, but also needs to understand itself better. Measure itself better. Give us the tools to plan and forecast better.
With that, maybe I can get back to the Matrix.