Integrate AI with VS Code | GitHub Copilot Free Alternative

Learn how to install Ollama on your computer and start using popular LLMs (Large Language Models) locally for free. We will also learn how to integrate it with VS Code to speed up development and make it less error-prone by using AI as our code assistant.

PREREQUISITES

  1. Install VS Code Link
  2. Install Ollama locally for free Link
  3. Pull an LLM (Large Language Model) using the command ollama pull <model-name>. Popular LLMs:
    1. ollama pull llama3.2:3b - for older laptops/desktops
    2. ollama pull llama3.1:8b - for typical laptops/desktops
    3. ollama pull llama3.1:70b - for high-end laptops/desktops with 32 GB+ RAM
    4. ollama pull deepseek-coder-v2:16b - for laptops/desktops used specifically for coding
    5. ollama pull codellama:7b - a lighter LLM for lower-end laptops/desktops
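The mapping above can be sketched as a small shell helper that suggests a model for a given amount of RAM. The thresholds here are rough assumptions drawn from the list, not official hardware requirements:

```shell
#!/bin/sh
# Suggest an Ollama model for a given amount of RAM in GB.
# Thresholds are assumptions based on the list above, not official requirements.
suggest_model() {
  ram_gb="$1"
  if [ "$ram_gb" -ge 32 ]; then
    echo "llama3.1:70b"        # high-end machines
  elif [ "$ram_gb" -ge 8 ]; then
    echo "llama3.1:8b"         # typical laptops/desktops
  else
    echo "llama3.2:3b"         # older / lower-end machines
  fi
}

suggest_model 16   # prints llama3.1:8b under this mapping
```

You could then fetch the suggestion with ollama pull "$(suggest_model 16)".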

INTEGRATING WITH VS CODE

  1. Install the VS Code extensions Continue.continue and ex3ndr.llama-coder
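Once installed, the Continue extension can be pointed at the models you pulled earlier. A minimal sketch, assuming Continue reads its JSON config from ~/.continue/config.json and that you pulled llama3.1:8b and codellama:7b above (the path and field names may differ between extension versions, so check the extension's docs):

```shell
# Sketch: write a minimal Continue config pointing at local Ollama models.
# Path and JSON keys are assumptions; verify against your extension version.
mkdir -p ~/.continue
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    { "title": "Llama 3.1 8B (local)", "provider": "ollama", "model": "llama3.1:8b" }
  ],
  "tabAutocompleteModel": {
    "title": "CodeLlama 7B (local)", "provider": "ollama", "model": "codellama:7b"
  }
}
EOF
```

Both extensions talk to the local Ollama server (by default at http://localhost:11434), so make sure Ollama is running in the background.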

RUNNING A LOCAL WEB VERSION

  1. Install Docker Desktop
  2. Run the command docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  3. Access the Web UI at http://localhost:3000
  4. Sign up with a new username and password. You can use any valid email address as your username.

This post is licensed under CC BY 4.0 by the author.