Cody Context
Understand how context helps Cody write more accurate code.
Context refers to any additional information provided to help Cody understand and write code relevant to your codebase. While LLMs have extensive knowledge, they lack context about an individual or organization's codebase. Cody's ability to provide context-aware code responses is what sets it apart.
Why is context important?
Context, and the methods used to retrieve it, are crucial to the quality and accuracy of AI responses. Cody relies on its ability to retrieve context from your codebase to provide reliable and accurate answers to developers' questions. When Cody has access to the most relevant context about your codebase, it can:
- Answer questions about your codebase
- Produce unit tests and documentation
- Generate code that aligns with the libraries and style of your codebase
- Significantly reduce the work required to translate LLM-provided answers into actionable value for your users
Context selection
Cody employs various methods to gather context relevant to user input, ensuring the quality of the information provided. These methods include:
- Keyword Search: A traditional text search approach that finds keywords matching the user input (a simplified sketch appears after this list). When needed, queries are automatically rewritten to include more relevant terms.
- Sourcegraph Search: Uses the Sourcegraph Search API. Queries are sent to the Sourcegraph instance (managed or self-hosted), where the search runs on the Sourcegraph search stack, and relevant documents are returned to the user's IDE for use by the LLM.
- Code Graph: Cody analyzes the structure of the code, examining how components are interconnected and used, and finds context based on the relationships between code elements.
All these methods collectively ensure Cody's ability to provide relevant and high-quality context to enhance your coding experience.
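To make the keyword search idea concrete, here is a minimal sketch of keyword-based ranking. It is an illustration only, not Cody's actual implementation: the tokenization, scoring, and in-memory file set are simplified assumptions.

```typescript
// Minimal sketch of keyword-based context ranking (illustrative only;
// Cody's real retrieval also rewrites queries and uses other signals).
interface ContextCandidate {
    path: string
    content: string
    score: number
}

function rankByKeywords(query: string, files: Map<string, string>): ContextCandidate[] {
    // Naive tokenization; a real implementation would also rewrite the
    // query to add relevant terms, as described above.
    const keywords = query.toLowerCase().split(/\W+/).filter(w => w.length > 2)

    const candidates: ContextCandidate[] = []
    for (const [path, content] of files) {
        const haystack = content.toLowerCase()
        // Score each file by how many distinct query keywords it contains.
        const score = keywords.reduce(
            (sum, kw) => sum + (haystack.includes(kw) ? 1 : 0),
            0
        )
        if (score > 0) {
            candidates.push({ path, content, score })
        }
    }
    // The highest-scoring files become candidate context for the LLM.
    return candidates.sort((a, b) => b.score - a.score)
}
```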
Context fetching mechanism
Enterprise users on Cody Web, VS Code, and JetBrains can leverage the full power of the Sourcegraph search engine as the primary context provider for Cody. This means access to more repositories and files from across your codebase and a system that is more scalable and easier to configure.
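As a rough sketch of that flow, a client can send a query to the instance's GraphQL search endpoint and collect matching files as context. The GraphQL field names below are approximations and may differ between Sourcegraph versions, so treat the query shape as an assumption:

```typescript
// Sketch: fetching context candidates from a Sourcegraph instance's
// GraphQL search API. Field names are approximations and may differ
// between Sourcegraph versions.
const SEARCH_QUERY = `
query ContextSearch($query: String!) {
    search(query: $query) {
        results {
            results {
                ... on FileMatch {
                    repository { name }
                    file { path, content }
                }
            }
        }
    }
}`

async function fetchContext(instanceUrl: string, token: string, searchQuery: string) {
    const response = await fetch(`${instanceUrl}/.api/graphql`, {
        method: 'POST',
        headers: {
            'Authorization': `token ${token}`,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ query: SEARCH_QUERY, variables: { query: searchQuery } }),
    })
    const { data } = await response.json()
    // Matching file contents are returned to the IDE as context for the LLM.
    return data.search.results.results
}
```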
The context available to users on Cody Free or Pro plans depends on the client they use.
Here's a detailed breakdown of the context fetching mechanism for each client for Cody Free, Pro, and Enterprise users:
| Tier | Client | Repositories |
|---|---|---|
| Free/Pro | VS Code | 1 |
| Free/Pro | JetBrains | 1 |
| Enterprise | Cody Web | Multi |
| Enterprise | VS Code | Multi |
| Enterprise | JetBrains | Multi |
How does context work with Cody prompts?
Cody works in conjunction with an LLM to provide codebase-aware answers. The LLM is a machine learning model that generates text in response to natural language prompts. However, the LLM doesn't inherently understand your codebase or specific coding requirements. Cody bridges this gap by generating context-aware prompts.
A typical prompt has three parts:
- Prefix: An optional description of the desired output, often derived from predefined Commands that specify tasks the LLM can perform
- User input: The information provided, including your code query or request
- Context: Additional information that helps the LLM provide a relevant answer based on your specific codebase
For example, Cody's `/explain` code command receives a prompt like this:
- Prefix: Explain the following Go code at a high level. Only include details that are essential to an overall understanding of what's happening in the code
- User input: `zoekt.QueryToZoektQuery(b.query, b.resultTypes, b.features, typ)`
- Context: Contents of `sourcegraph/sourcegraph/internal/search/zoekt/query.go`
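The following sketch shows how these three parts might be concatenated into a single LLM prompt. The templating here is purely illustrative and is not Cody's internal format:

```typescript
// Sketch: assembling a context-aware prompt from the three parts
// described above. The exact templating is an assumption, not Cody's.
interface PromptParts {
    prefix: string     // task description, e.g. from a predefined command
    userInput: string  // the user's code query or request
    context: string[]  // relevant code retrieved from the codebase
}

function buildPrompt({ prefix, userInput, context }: PromptParts): string {
    const contextBlock = context
        .map(snippet => `Relevant code:\n${snippet}`)
        .join('\n\n')
    return `${contextBlock}\n\n${prefix}\n\n${userInput}`
}

// Example mirroring the /explain command above (context abbreviated):
const prompt = buildPrompt({
    prefix: "Explain the following Go code at a high level.",
    userInput: 'zoekt.QueryToZoektQuery(b.query, b.resultTypes, b.features, typ)',
    context: ['// contents of sourcegraph/sourcegraph/internal/search/zoekt/query.go ...'],
})
```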
Impact of context: LLM vs. Cody
When the same prompt is sent to a standard LLM, the response may lack specifics about your codebase. In contrast, Cody augments the prompt with context from relevant code snippets, making the answer far more specific to your codebase. This difference underscores the importance of context in Cody's functionality.
Usage and Limits
Cody's usage and limits policies help optimize its performance and ensure cost-effectiveness. This section provides insights into the context window size, token limits for chat, commands, and completions, and administrators' control over these settings.
Cody context window size
While Cody aims to provide maximum context for each prompt, there are limits to ensure efficiency. For more details, see our documentation on token limits.
Manage context window size
Site administrators can update the maximum context window size to meet their specific requirements. While using fewer tokens is a cost-saving measure, it can also cause errors. For example, using the `/edit` command with a context window that is too small might give you errors like `You've selected too much code`.
Using more tokens usually produces higher-quality responses but also increases response times. In general, it's recommended not to modify the token limit. However, if needed, you can set it to a value that doesn't compromise response quality or trigger errors.
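To illustrate why an undersized window triggers errors like the one above, here is a hedged sketch of token budgeting: the user's selection and retrieved context must fit inside the window, so anything over the limit is truncated or rejected. The 4-characters-per-token heuristic and the `MAX_CONTEXT_TOKENS` value are illustrative assumptions, not Cody's actual counting or limits.

```typescript
// Sketch: enforcing a context window budget. The heuristic and the
// limit below are illustrative assumptions only.
const MAX_CONTEXT_TOKENS = 7000

function approxTokens(text: string): number {
    // Rough heuristic: ~4 characters per token for English text and code.
    return Math.ceil(text.length / 4)
}

function fitContext(selection: string, snippets: string[]): string[] {
    if (approxTokens(selection) > MAX_CONTEXT_TOKENS) {
        // Analogous to the "You've selected too much code" error above.
        throw new Error("You've selected too much code")
    }
    let budget = MAX_CONTEXT_TOKENS - approxTokens(selection)
    const kept: string[] = []
    for (const snippet of snippets) {
        const cost = approxTokens(snippet)
        if (cost > budget) break // drop context that no longer fits
        kept.push(snippet)
        budget -= cost
    }
    return kept
}
```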