Run DeepSeek R1 Distilled Models Locally: A Simple Guide
The world of AI is moving fast, and being able to run powerful models like DeepSeek R1 locally is becoming increasingly important. However, running the full-fledged DeepSeek R1 can be challenging due to its size.
This is where distilled versions come into play. This blog post will guide you through using a distilled version of the DeepSeek R1 model on your local machine, whether you're running Windows, Mac, or Linux. We’ll focus on a simple and user-friendly approach using LM Studio.
Why Distilled Models?
DeepSeek R1 is a state-of-the-art reasoning model, but it's HUGE. Running it locally would require expensive, high-end hardware that most of us don't have. Distilled models offer a more accessible alternative: they are smaller, more manageable models trained on the outputs of the original DeepSeek R1. This means they retain key capabilities, like reasoning power, while being much less demanding on resources.
Introducing LM Studio
To make this process as easy as possible, we'll be using LM Studio. This tool is:
User-Friendly: Easy for beginners to grasp.
Cross-Platform: Works on Windows, Mac, and Linux.
Comprehensive: Handles model discovery, download, and local operation.
Step-by-Step Guide to Local DeepSeek R1 with LM Studio
Here's how you can get started:
Download LM Studio: Go to the LM Studio website and download the version that matches your operating system.
Install LM Studio: Follow the prompts to install LM Studio on your machine. It is generally a smooth and straightforward installation process.
Explore the Interface: Open LM Studio, and you'll see options like "Chat," "Developer," "My Models," and "Discover" on the sidebar.
Go to "Discover" Tab: This is where you find and download models.
Search for DeepSeek R1: In the search bar, type "DeepSeek R1 Distill".
Select your DeepSeek R1 Distill Model: You will see several options, including Qwen-7B, Llama 8B, and others. Choose the model that works for you; I suggest Qwen-7B for this example. Also, ensure both the GGUF and MLX checkboxes are selected.
Download Your Model: Click the green "Download" button next to the model you chose.
Use the Model: After the download is complete, return to the "Chat" tab. Click "Select a model to load" and choose the newly downloaded DeepSeek model. Now you can start interacting with your model using text inputs.
Important Points to Remember
Use the Latest LM Studio: Make sure you are running LM Studio 0.3.7 (Build 2) or later, since that release added support for DeepSeek R1.
Model Details: View the model card on Hugging Face via LM Studio to get more technical info.
Understanding the Output: You'll notice that the model "thinks" and walks through its reasoning before presenting a final response. LM Studio lets you show or hide these "Think" stages.
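If you consume the model's output programmatically rather than in the chat window, you may want to separate the reasoning from the final answer yourself. Here is a minimal sketch, assuming the model wraps its chain of thought in a single `<think>...</think>` block at the start of the response, as R1-style models typically do:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style response into (reasoning, answer).

    Assumes at most one <think>...</think> block precedes the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()          # no reasoning block found
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after </think>
    return reasoning, answer

raw = "<think>2 + 2 is 4.</think>The answer is 4."
thoughts, final = split_reasoning(raw)
print(final)  # The answer is 4.
```

This is handy when you only want to display the final answer but still log the reasoning for debugging.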
Advanced Features of LM Studio
Beyond simple chatting, LM Studio allows for more advanced operations:
Developer Mode: This allows you to serve the model through an OpenAI-compatible API endpoint for integration with other tools.
Function Calling: Some models also support function calling, which lets you access specific functionalities with prompts.
Custom Settings: Adjust system prompts and other parameters via the “Advanced Configurations” menu to tailor the model.
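Once Developer Mode's local server is running, any OpenAI-compatible client can talk to it. The sketch below uses only the Python standard library; the base URL reflects LM Studio's default port (1234), and the model identifier is an assumption, so substitute whatever name LM Studio shows for your downloaded model:

```python
import json
import urllib.request

# LM Studio's local server exposes an OpenAI-compatible API,
# by default at http://localhost:1234/v1.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(prompt: str, model: str) -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

# NOTE: the model name here is a guess -- use the identifier
# LM Studio displays for your model.
def ask(prompt: str, model: str = "deepseek-r1-distill-qwen-7b") -> str:
    payload = build_chat_request(prompt, model)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Requires the LM Studio server to be running:
# print(ask("Why is the sky blue?"))
```

Because the endpoint follows the OpenAI chat format, you can also point existing OpenAI client libraries at `BASE_URL` instead of hand-rolling requests.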
Closing Remarks
Running DeepSeek R1 locally is much more doable now, thanks to distilled versions and tools like LM Studio. You don't need a top-tier computer, yet you can still enjoy the capabilities of the original DeepSeek R1 model. So download LM Studio, explore some models, and, most importantly, keep your data safe and private.