This tutorial will guide you through the process of installing DeepSeek on your laptop using two different methods: LM Studio for beginners and Ollama for more technical users. By the end of this guide, you will be able to run DeepSeek entirely offline, enhancing your privacy and utilizing this powerful model efficiently.

Step 1: Download LM Studio

If you are a non-technical user, the first step is to visit the LM Studio website. Once there, download the appropriate version for your operating system:

  • For Mac and Linux users, select the only available option.
  • For Windows users, you need to determine your system type by pressing the Start button, typing msinfo32, and pressing Enter. If the System Type field mentions ‘ARM’, you will need the ARM64 version of LM Studio. Otherwise, download the regular version.
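If you prefer to check from a terminal, a quick sketch of the same architecture check (the Windows line is shown as a comment because it uses Command Prompt syntax, not a Unix shell):

```shell
# Detect the CPU architecture to pick the right LM Studio build.
# macOS/Linux: "arm64"/"aarch64" means you need an ARM build; "x86_64" means the regular one.
ARCH="$(uname -m)"
echo "Detected architecture: $ARCH"
# On Windows, read the "System Type" field in msinfo32 instead,
# or run this in Command Prompt:  echo %PROCESSOR_ARCHITECTURE%
```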

Step 2: Install LM Studio

Once the download is complete, open the installation file. Follow the prompts to install LM Studio, ensuring that you check the box to run LM Studio before finishing the installation. After the installation, the application should open automatically.

Step 3: Choose and Download a Model

Before using LM Studio, you must select a model to work with. To do this:

  1. Click on the search icon located in the left sidebar.
  2. In the search box that appears, type DeepSeek to find the available models.

You will find two options for DeepSeek R1 models: one distilled into Qwen 7B and the other into Llama 8B. Here’s how to choose:

  • The numbers (7B and 8B) refer to the number of parameters in each model. A higher number generally means a more capable model, but also one that needs more memory and processing power.
  • As a general guideline, if you need to use multiple languages, select the Qwen model. Otherwise, start with the Llama model.

Step 4: Use LM Studio

Now that you’ve downloaded the model, you can start using LM Studio just like any other AI tool. Here’s how:

  • Type a question into the chat box. To verify that everything runs locally, disconnect your computer from the internet first.
  • Click Send to receive responses from the AI model.

Step 5: Install Ollama (Advanced User Method)

If you are more technically inclined, you may prefer to use Ollama instead. To do this:

  1. Visit the Ollama website.
  2. Click the large download button and choose your operating system for the installer.

Once downloaded, run the installer and make sure Ollama is running. You should see it active in the system tray on Windows or the menu bar on macOS.
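You can also confirm the install from a terminal. A minimal check, assuming the installer put the ollama binary on your PATH:

```shell
# Confirm the Ollama CLI is available after installation.
if command -v ollama >/dev/null 2>&1; then
  ollama --version          # prints the installed version
  STATUS="installed"
else
  STATUS="missing"          # the installer did not put ollama on PATH
fi
echo "ollama status: $STATUS"
```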

Step 6: Running Commands in Ollama

Next, you will need to open the command prompt:

  • On Windows, press the Start button, type cmd, and press Enter to open the command prompt. On macOS or Linux, open a terminal instead.

Next, find the command for the DeepSeek R1 model: go back to the Ollama website, search for “DeepSeek R1”, select the model, and copy the run command, which looks like:

ollama run deepseek-r1

The first run downloads the model, which may take some time. Once the download completes, you can chat with the model directly in the command prompt.

Step 7: Performance Comparison

To understand the difference between your local DeepSeek and the web-based model, conduct a few practical tests:

  • Ask both models to perform related tasks such as rewriting emails, solving math problems, or answering logic riddles.
  • Compare the responses to gauge the effectiveness and quality of local versus cloud models.
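One way to sketch this comparison, if you used the Ollama route: run a few prompts locally from a script, then paste the same prompts into the web-based chat. The prompts below are just illustrative examples:

```shell
# Run the same prompts against the local model, then compare with the web UI.
# Assumes the deepseek-r1 model has already been pulled via `ollama run deepseek-r1`.
RAN="no"
if command -v ollama >/dev/null 2>&1; then
  for p in "Rewrite this email to sound more professional: meeting moved to 3pm." \
           "I have 17 apples and give away all but 9. How many remain?"; do
    echo "=== PROMPT: $p"
    ollama run deepseek-r1 "$p"
  done
  RAN="yes"
else
  echo "ollama not on PATH; skipping the local half of the comparison"
fi
```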

Extra Tips & Common Issues

Here are some tips to enhance your experience:

  • Experiment with different models if the distilled one doesn’t provide satisfactory results.
  • Your responses might vary based on the processing power of your laptop, so consider testing with multiple model configurations.
  • Always ensure that you have sufficient memory and processing resources for optimal performance.

Conclusion

By following this tutorial, you have successfully installed and configured DeepSeek on your laptop, giving you the ability to run it completely offline. Experiment with different models to determine which one works best for your needs and enjoy the newfound control and privacy!
