AI Chat

AI Chat is a built-in assistant that can answer programming questions, generate code, and explain or refactor what's already in your editor. It runs against the AI provider you configure in Settings → AI, so you can connect it to OpenAI, Anthropic, Gemini, Mistral, or a local model served by a tool such as Ollama.

Opening AI Chat

Open AI Chat by clicking its icon in the Activity Bar on the left edge of the window. The panel appears as a sidebar to the left of the editor pane; clicking the same icon closes it again when you're done.

If the Activity Bar is hidden, enable it from the View menu or under Settings → Appearance.

Sending prompts

Type your prompt into the input box at the bottom of the sidebar and press Send. Responses stream in as the model generates them.

AI Chat is context-aware: it can see the code in your current tab, so you don't need to copy-paste it into your prompt. This makes prompts like the following work without further setup:

  • "Explain what this function does."
  • "Refactor this to use async/await."
  • "There's a bug in this code — can you find it?"
  • "Add JSDoc comments to each function."

You can also ask general programming questions that don't depend on your code — language features, API behaviour, debugging strategies, and so on.

Configuration

Before AI Chat can send requests, you need to configure a provider, model, and API key under Settings → AI. API keys are stored locally and used only to authenticate requests to the provider you selected.
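
Conceptually, those settings boil down to a handful of fields. The shape below is an illustrative TypeScript sketch, not the editor's actual configuration schema:

    // Hypothetical shape of the Settings → AI values; the field
    // names here are illustrative, not the editor's real schema.
    interface AISettings {
      provider: "openai" | "anthropic" | "gemini" | "mistral" | "local";
      model: string;    // a model identifier offered by the chosen provider
      apiKey: string;   // stored locally, sent only to that provider
      baseUrl?: string; // only needed for the Local provider (see below)
    }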

If you want to run a model locally, choose Local as the provider and point the Base URL at the endpoint exposed by your local runtime (for example, an Ollama instance running on http://localhost:11434).
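
Before pointing the editor at a local endpoint, it can help to confirm the runtime is actually reachable. The sketch below calls Ollama's /api/generate endpoint directly; the model name llama3.2 is an assumption, so substitute whichever model you have pulled:

    // Minimal reachability check against a local Ollama instance.
    // Assumes the default port (11434) and that a model named
    // "llama3.2" has been pulled; both are assumptions.
    async function checkOllama(): Promise<void> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        body: JSON.stringify({
          model: "llama3.2",
          prompt: "Say hello in one word.",
          stream: false, // ask for a single JSON object, not a stream
        }),
      });
      if (!res.ok) throw new Error(`Ollama responded with HTTP ${res.status}`);
      const data = await res.json();
      console.log(data.response); // the generated text
    }

    checkOllama();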

Tips

  • Be specific. "Rewrite this using a Map" produces better results than "make this better".
  • Iterate. If the first response isn't what you wanted, follow up in the same conversation rather than starting over — the model keeps the prior context.
  • Pick the right model. Larger models are slower but tend to handle complex refactors and reasoning more reliably; smaller or local models are faster and cheaper for short questions and quick edits.