PromptTokenText now reflects the actual downstream context cost: the uploaded IGNORE.txt file content plus the neutral live prompt, instead of only the pre-split prompt text.
Lock in the current_input_file regression with API-level tests and document that returned context token counts now track full prompt semantics with conservative sizing.
Extract the compacted-context prompt string into a single function in
promptcompat and add a [context note] block to the injected file wrapper
so the model knows the attached history is compressed context, not new
instructions.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>