
Run DeepSeek R1 Locally with Ollama (Windows & Mac) + Terminal/UI Integration

Tirupati Rao (bitbee)
6 min read · Feb 1, 2025


Introduction

Running large language models (LLMs) locally gives you privacy, speed, and flexibility. In this guide, we will:

✅ Install Ollama (Windows & Mac)
✅ Set up DeepSeek R1 (1.5B model)
✅ Run it in the terminal
✅ Build a modern UI for easy interactions
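The four steps above boil down to a handful of terminal commands. Here is a sketch of the whole flow (it assumes Homebrew on Mac or winget on Windows for the install, and uses `deepseek-r1:1.5b`, the tag for the 1.5B model on the Ollama registry):

```shell
# Model tag for the 1.5B DeepSeek R1 variant on the Ollama registry
MODEL="deepseek-r1:1.5b"

# Step 1 - install Ollama (shown as comments so this script is safe to re-run):
#   macOS:   brew install ollama
#   Windows: winget install Ollama.Ollama   (or the installer from ollama.com)

# Steps 2-3 - download the weights, then open an interactive chat.
# Guarded so the script degrades gracefully on machines without Ollama.
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$MODEL"   # one-time download of the model weights
  ollama run  "$MODEL"   # interactive chat in the terminal; /bye to exit
else
  echo "ollama not found; install it first (model tag: $MODEL)"
fi
```

Once `ollama run` drops you into the prompt, you can chat with the model entirely offline.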


What Are Ollama & DeepSeek R1?

1. Ollama

Ollama is a tool that lets you run large language models (LLMs) on your local machine without cloud dependencies.

✔️ Why Use Ollama?

  • Easy installation and model management
  • Works offline (after downloading models)
  • Supports multiple AI models (DeepSeek, LLaMA, Mistral, etc.)
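Beyond the interactive CLI, Ollama also serves a local REST API on port 11434, which is what UI frontends talk to. As a sketch of how a script (or a custom UI backend) could query the model, assuming the Ollama server is running locally and `deepseek-r1:1.5b` has been pulled:

```python
import json
import urllib.request

# Ollama exposes a local REST API on this endpoint once it is running.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON reply instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (needs a local Ollama server with the model pulled):
#   print(ask("deepseek-r1:1.5b", "Explain recursion in one sentence."))
```

Because everything stays on `localhost`, no prompt or response ever leaves your machine.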

2. DeepSeek R1

DeepSeek R1 is DeepSeek's open-weight reasoning model. The 1.5B variant used in this guide is a small distilled version of R1, light enough to run on an ordinary laptop without a dedicated GPU.
