A Weekend With GitHub Copilot: Followup
Hero image credit: Photo by Kelly
So, a minor followup to my last blog post:
I was continuing to play around with Copilot on this project last night, and I finally hit the token limit for that thread. For those who aren't familiar with AI coding assistants in general, and GitHub Copilot in this particular case, you can only have so much interaction with the AI assistant in a single thread. A thread is where the AI has a "memory" of the prompts you used previously, and of what it did and learned as part of those interactions. The more prompts and actions, the more memory and processing power the AI has to use to keep track of things related to you specifically.

To limit the resources expended on your "relationship", AI providers cap the size of a thread with what is essentially a counter of resources being consumed, called tokens. Usually it's tied to the amount of text and code involved: the larger your prompts and the more code it writes, the fewer interactions a single thread can hold.
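To make that idea concrete, here's a rough sketch of how you might estimate thread usage yourself. The ~4-characters-per-token figure is just a common rule of thumb for English text, not how any provider actually counts; real services use their own tokenizers, and the 128,000-token window below is a hypothetical limit for illustration, not a documented Copilot number.

```python
# Rough sketch: estimating how much of a context window a thread has used.
# Assumptions (not from any provider's docs): ~4 characters per token,
# and a hypothetical 128,000-token context window.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def thread_usage(messages: list[str], context_limit: int = 128_000) -> float:
    """Fraction of the (assumed) context window consumed so far."""
    used = sum(estimate_tokens(m) for m in messages)
    return used / context_limit

# Every prompt you send AND every reply the model writes counts toward
# the same budget, which is why long threads fill up faster than you expect.
thread = [
    "Refactor this function to use async I/O.",   # your prompt
    "def fetch(url): ...",                        # code the assistant wrote back
]
print(f"{thread_usage(thread):.4%} of the window used")
```

The point of the sketch is the accounting model: both sides of the conversation accumulate in one counter, so pasting a large file into a prompt can burn through a thread much faster than a long series of short questions.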
The number of tokens allocated to a thread depends on the provider, the model being used, and the type of account you have. GPT-4o may be different from GPT-5 or Claude Sonnet, and so forth. Those limits also change from time to time, so I won't list them out here; check the current docs for the current limits.
In addition to limits on a single thread, there are typically monthly limits on the number of requests you can make overall, or the number of threads you can create. I haven't hit those yet, but aside from the weekend and a couple of followup evenings, I haven't pushed it very hard. If I start to hit those limits, maybe I'll update this followup.
There are also a few other things I haven't experimented with yet, such as having it generate unit tests or commit messages, or add documentation to the code, so there are still a lot of places I could go with this. We'll see what happens.