Integrate AI with VS Code | GitHub Copilot Free Alternative
Learn how to install Ollama on your computer and start using popular LLMs (Large Language Models) locally for free. We will also learn how to integrate it with VS Code to speed up development and make it less error-prone by using AI as our code assistant.
PREREQUISITES
- Install VS Code
- Install Ollama locally for free
- Pull an LLM (Large Language Model) using Ollama with the command
ollama pull <model-name>
Popular LLMs:
  - ollama pull llama3.2:3b - use it on old laptops/desktops
  - ollama pull llama3.1:8b - use it on normal laptops/desktops
  - ollama pull llama3.1:70b - use it on machines with 32GB+ RAM
  - ollama pull deepseek-coder-v2:16b - use it for coding specifically
  - ollama pull codellama:7b - lighter coding-focused LLM for lower-end laptops/desktops
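After pulling a model, you can verify the setup from the terminal. Ollama also serves a local REST API (port 11434 by default), which is what editor integrations talk to. A minimal sketch, assuming a local Ollama install with llama3.2:3b already pulled (these commands require the Ollama daemon to be running):

```shell
# List the models downloaded so far
ollama list

# Chat with a model interactively in the terminal (Ctrl+D to exit)
ollama run llama3.2:3b

# Query the local REST API directly; "stream": false returns
# the full response in one JSON object instead of chunks
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Write a hello world program in Python",
  "stream": false
}'
```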
INTEGRATING WITH VS CODE
- Install a VS Code extension such as Continue (Continue.continue) or Llama Coder (ex3ndr.llama-coder)
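To point the Continue extension at your local Ollama models, you can edit its config file (reachable via the gear icon in the Continue sidebar). A minimal sketch, assuming the JSON config format and the models pulled above; the exact schema may differ across Continue versions, so check the extension's documentation if fields have changed:

```json
{
  "models": [
    {
      "title": "Llama 3.2 3B (local)",
      "provider": "ollama",
      "model": "llama3.2:3b"
    }
  ],
  "tabAutocompleteModel": {
    "title": "CodeLlama 7B (local)",
    "provider": "ollama",
    "model": "codellama:7b"
  }
}
```

Using a small model like codellama:7b for tab autocompletion keeps suggestions fast, while a larger model can handle chat.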
RUNNING A LOCAL WEB VERSION
- Install Docker Desktop
- Run the command
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- Access the Web UI at
http://localhost:3000
- Sign up with a new username and password. You can use any valid email address as your username.
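If the Web UI does not load, a few Docker commands help diagnose the container. A minimal sketch, assuming Docker Desktop is running and the container was started with the name open-webui as shown above:

```shell
# Confirm the open-webui container is up and mapped to port 3000
docker ps --filter name=open-webui

# Follow the container logs to spot startup errors
docker logs -f open-webui

# To update Open WebUI later: pull the newest image,
# remove the old container, then repeat the docker run command above
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui && docker rm open-webui
```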
This post is licensed under CC BY 4.0 by the author.