If you are already using coding agents, the following video may be pretty ho-hum, no big deal. However, I'm still blown away by how quickly we've reached the point where I can take a support query, open Cursor AI on the project's source code, ask it the support question, and get an accurate, useful answer. The video below is a recording of the Cursor AI user interface. I asked it a question, it looked through the application's source code, and it answered based on what it found. The gray text that eventually disappears is the thinking going on behind the scenes; the final white text is the actual response to my question. The last couple of sentences of the response are there purely because I was going to post this as a video; normally it would have responded with more source code.
In this instance, responses to questions like the one I posed would typically include some organizationally identifiable information. Interestingly, a less capable model, such as the Cursor Composer model, didn't follow my instructions to exclude that information, while the Claude Opus 4.5 model did. I find this to be a common pattern: more capable models pick up on the nuances of a prompt, and of how they're expected to respond, better than less capable models.
The key to getting responses and behavior this effective is giving the more capable models the context they need. Source code with a strong domain model, where the programming constructs match the real-world concepts users see at the user interface level, combined with good comments and documentation, leads to more effective agentic behavior.
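To make that concrete, here's a minimal sketch of what I mean by programming constructs that match the concepts users see. The SupportTicket class, its fields, and its method are hypothetical (not from the application in the video), but they illustrate the kind of code an agent can reason about easily:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SupportTicket:
    """A customer support request, named the same way it appears in the product's UI."""
    ticket_id: str
    subject: str
    opened_on: date
    status: str = "open"                       # mirrors the "Status" dropdown in the UI
    notes: list[str] = field(default_factory=list)

    def escalate(self, reason: str) -> None:
        """Escalate the ticket, matching the 'Escalate' action support staff see."""
        self.status = "escalated"
        self.notes.append(f"Escalated: {reason}")
```

When the class names, docstrings, and comments use the same vocabulary as the support question, the agent has far less guessing to do, and its answers tend to be correspondingly more accurate.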