Introduction
When building complex AI applications with multiple patterns like RAG, agents, and LLM chains, having context-aware development assistance becomes crucial. This post walks through setting up GitHub Copilot instructions and custom chat modes for a GenAI demo application that showcases LangChain, Ollama, and Streamlit integration patterns.
The goal is to create an AI assistant that understands your project’s architecture and coding patterns, and that can provide specialized guidance for learning and implementing LLM/RAG/Agentic AI concepts.
Full code and examples are available in this repo.
Setting Up Copilot Instructions
GitHub Copilot Instructions allow you to provide context about your project’s patterns, architecture, and coding standards. For AI/LLM projects, this is particularly valuable given the rapidly evolving ecosystem.
Step 1: Create the Instructions File
Create .github/copilot-instructions.md in your repository root:
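If the directory doesn’t exist yet, you can scaffold both customization files up front (the chat mode file is covered later in this post):

```shell
# Scaffold the Copilot customization files from the repository root
mkdir -p .github/chatmodes
touch .github/copilot-instructions.md
touch .github/chatmodes/genai-teacher.chatmode.md
```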
# Copilot Instructions for GenAI Demo Application
This repository demonstrates various LLM/RAG/Agentic AI patterns using LangChain, Ollama, and Streamlit.
## Repository Structure & Architecture
### Core Application
- `src/app.py`: Main Streamlit application
- `src/example/`: Demo implementations by complexity
- `src/internal/`: Shared utilities and prompt templates
- `tests/`: Comprehensive test suite
### Example Categories
#### Basic Examples (Beginner)
- `simple_chat.py`: Basic LLM integration with ChatOllama
- `city.py`, `country.py`, `state.py`: Prompt templating
#### RAG Examples (Intermediate)
- `wikipedia.py`: Dynamic document retrieval
- `chroma.py`: Vector storage with ChromaDB
- `pgvector.py`: Production PostgreSQL vector storage
#### Agent Examples (Advanced)
- `agentic_chat.py`: Multi-tool ReAct agent

Step 2: Define Key Patterns
Include your project’s specific patterns:
### Standard Architecture Pattern
```python
def create_example_chain(model_name):
    """Create and return the example chain for easier testing."""
    llm = create_llm(model_name)
    # Build chain components
    return chain


def process_example_query(chain, input_data):
    """Process the query with error handling."""
    # Business logic here


def handle_example_ui(st, model_name):
    """Handle Streamlit UI components."""
    # UI logic here
```

Creating Custom Chat Modes
Chat modes provide specialized interactive assistance. For an AI learning project, we can create a teaching assistant mode.
Step 1: Create Chat Mode Directory
Create .github/chatmodes/genai-teacher.chatmode.md:
---
description: 'Interactive teaching assistant for LLM/RAG/Agentic AI development'
tools: []
---
# GenAI Teacher Chat Mode
You are an expert AI teaching assistant specializing in LLM/RAG/Agentic AI development.
## Your Teaching Style
- **Interactive & Encouraging**: Make learning engaging
- **Code-Focused**: Reference actual repository files
- **Progressive**: Guide from basics to advanced concepts
- **Practical**: Emphasize hands-on learning
## Available Teaching Modes
### Explain Mode
When users ask to explain files:
- Provide detailed breakdowns of example types
- Highlight key features based on user level
- Reference related files and patterns
### Guide Mode
When users request learning guidance:
- **Basics**: simple_chat.py → city.py → country.py
- **RAG**: wikipedia.py → arxiv.py → chroma.py → pgvector.py
- **Agents**: agentic_chat.py and ReAct patterns

Step 2: Define Specialized Behaviors
## Repository Knowledge
### Example Categories
- **Basic Examples**: simple_chat.py, city.py, country.py, state.py, mtg.py
- **RAG Examples**: wikipedia.py, arxiv.py, web.py, chroma.py, pgvector.py
- **Agent Examples**: agentic_chat.py with ReAct pattern
### Key Patterns
- **LLM Integration**: ChatOllama with local Ollama hosting
- **RAG Architecture**: RunnablePassthrough chains with retrievers
- **Agent Architecture**: ReAct pattern with tool integration
## Response Guidelines
1. **Always reference actual files** from the repository
2. **Provide working code examples** following established patterns
3. **Suggest next steps** for continued learning
4. **Explain the "why"** behind implementation choices

Implementation Guide
Step 1: File Structure Setup
```
.github/
├── copilot-instructions.md
└── chatmodes/
    └── genai-teacher.chatmode.md
```

Step 2: Test the Setup
- Commit and push your files
- Open GitHub Copilot Chat
- Type `@workspace` to see if the instructions are loaded
- Try asking: “Explain simple_chat.py” or “Guide me through RAG”
Step 3: Validate Responses
- Check if Copilot references your specific files
- Verify it follows your established patterns
- Ensure responses match your teaching style
Results and Benefits
After implementing these customizations, you’ll notice:
- Context-Aware Suggestions: Copilot understands your project’s specific patterns and suggests code that follows your established architecture
- Educational Guidance: The teaching chat mode provides structured learning paths tailored to your repository’s examples
- Consistent Code Quality: New developers get suggestions that match your project’s standards and patterns
- Reduced Onboarding Time: New team members can ask questions and get project-specific guidance
Best Practices
- Keep Instructions Updated: Update your Copilot instructions as your project evolves
- Be Specific: Include actual code patterns and file structures
- Test Regularly: Validate that Copilot is following your instructions
- Document Environment Variables: Include required setup information
- Version Control: Treat these files as part of your documentation
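For the environment-variables point, the instructions file can carry a short setup section. A hypothetical sketch for this demo’s Ollama and pgvector stack (variable names and values are illustrative, not taken from the repository):

```shell
# Hypothetical environment section to document in copilot-instructions.md
export OLLAMA_HOST=http://localhost:11434  # local Ollama server
export PGVECTOR_URL=postgresql://user:pass@localhost:5432/vectors  # pgvector store
```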
Conclusion
Setting up customized GitHub Copilot instructions and chat modes transforms the development experience for AI/LLM projects. By providing context about your specific patterns, architecture, and learning objectives, you create an intelligent assistant that grows with your project.
The investment in setup pays dividends in faster development, better code consistency, and improved developer onboarding. As AI development tools continue to evolve, having well-documented, context-aware assistance becomes increasingly valuable for complex projects involving multiple AI patterns and frameworks.
Next Steps:
- Experiment with different chat mode personalities
- Add tool integrations for enhanced functionality
- Create specialized modes for different team roles (QA, DevOps, etc.)
- Integrate with your CI/CD pipeline for automated documentation updates
