But the token limit is still there!
Token limits are somewhat meaningless to compare directly because they depend on the tokeniser approach (Byte Pair Encoding (BPE), WordPiece, perhaps directly code-aware tokenising, etc.), which in turn can lead to a striking difference in the semantic outcome, e.g. more statistical vs less statistical segmentation (they are all just statistical in the end). For code, tokenisation can be less granular than for natural language because the tokeniser can be built around a programming language's syntax, so each syntactic element (if, else, for, ...) gets its own token. That drives much faster processing and, of course, much larger effective contexts, though still not large enough for larger projects. If your coding style is consistent enough, e.g. in naming conventions and structures, or you work against a domain-specific corpus (e.g. you are always doing things around walk-on glazing), you could train your own tokeniser, but you would probably need to use an open-source model for that. Otherwise you would have to get very clever (not saying you aren't) to go in that sort of direction with Claude, perhaps by pre-processing: using much shorter identifiers and avoiding any rarer characters. In the commercial sphere you just want it to work.
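To make the custom-tokeniser idea concrete, here is a minimal sketch of the open-source route using the Hugging Face `tokenizers` library. The corpus file name, vocab size and the glazing-flavoured snippet are all hypothetical stand-ins, not anyone's actual setup:

```python
# Minimal sketch: train a domain-specific BPE tokeniser on your own code dump.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Start from an empty BPE model; merges are learned from the corpus, so
# recurring keywords and consistently named identifiers end up as single tokens.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(vocab_size=8000, special_tokens=["[UNK]"])
tokenizer.train(files=["my_codebase_dump.txt"], trainer=trainer)  # hypothetical corpus file

# Frequent, consistently styled constructs should now tokenise compactly.
encoding = tokenizer.encode("if load > limit: check_walk_on_glazing(panel)")
print(len(encoding.tokens), encoding.tokens)
```

Even then, a custom tokeniser only helps if you control the model end to end; with a hosted model like Claude you are stuck with its tokeniser, which is why the pre-processing tricks above are the realistic route there.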
Oh, and don't forget to turn off the 'train from conversations' privacy setting, or we will all be learning from your work. But hey, that is one of the problems with LLMs: they eat their own output!