LM Studio
self_hosted
54 signals tracked
LM Studio releases `lms` CLI for automated LLM workflows
A command line tool for scripting and automating your local LLM workflows.
Date not specified
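A scripted workflow built on the `lms` CLI might look like the sketch below. `lms server start`, `lms load`, and `lms ls` are real subcommands at the time of writing, but flags and model keys vary by version and machine, so treat the command lines as placeholders and check `lms --help`.

```python
"""Sketch: automating a local LLM workflow by driving the `lms` CLI
from Python. Skips gracefully when `lms` is not installed."""
import shutil
import subprocess

# A minimal automated workflow: start the local server, load a model
# (the model key here is a placeholder), then list loaded models.
WORKFLOW = [
    ["lms", "server", "start"],
    ["lms", "load", "qwen2.5-7b-instruct"],
    ["lms", "ls"],
]


def run_workflow(commands):
    """Run each step, failing fast on the first non-zero exit.
    Returns a status string instead of raising when `lms` is absent."""
    if shutil.which("lms") is None:
        return "lms not installed; skipped"
    for cmd in commands:
        subprocess.run(cmd, check=True)
    return "ok"
```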
[Low · Capability] Meta Llama 3.1 Released - New Model Sizes & Languages
Run Llama 3.1 locally on your computer with LM Studio.
Date not specified
[Medium · Capability] LM Studio 0.3.0 Released: RAG, Themes, and Network Serving
LM Studio 0.3.0 is here! Built-in (naïve) RAG, light theme, internationalization, Structured Outputs API, Serve on the network, and more.
Date not specified
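The Structured Outputs API mentioned above follows the OpenAI-style `response_format` shape on LM Studio's local OpenAI-compatible server (by default at `http://localhost:1234/v1/chat/completions`). The sketch below only builds a request body; the model key and schema are placeholders, not values the release notes specify.

```python
"""Sketch: a Structured Outputs request body that constrains the model's
reply to a JSON schema. POST it to the local /v1/chat/completions
endpoint with any HTTP client."""
payload = {
    "model": "llama-3.1-8b-instruct",  # whichever model you have loaded
    "messages": [
        {"role": "user", "content": "Name one planet and its diameter in km."}
    ],
    # Structured Outputs: the assistant message content will be a JSON
    # string conforming to this schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "planet",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "diameter_km": {"type": "number"},
                },
                "required": ["name", "diameter_km"],
            },
        },
    },
}
```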
[Medium · Capability] LM Studio 0.3.1 Release — Chat Migration & Bug Fixes
LM Studio 0.3.1 Release Notes
Date not specified
[Low · Capability] LM Studio 0.3.2 Release - Model Pinning, Chat Migration, Reduced Context Size
LM Studio 0.3.2 Release Notes
Date not specified
[Low · Capability] LM Studio 0.3.3: Config Presets, Live Token Counts, and Bug Fixes
Config presets are back! So are live token counts for user input and system prompt. Many bug fixes. Also several new app languages thanks to community contributors.
Date not specified
[Low · Capability] LM Studio 0.3.4 ships with Apple MLX support
Super fast and efficient on-device LLM inference using MLX on Apple Silicon Macs.
Date not specified
[High · Capability] LM Studio 0.3.5 Release: Headless Mode, CLI Downloads, and Pixtral Support
Headless mode, on-demand model loading, server auto-start, CLI command to download models from the terminal, and support for Pixtral with Apple MLX.
Date not specified
[Medium · Capability] Introducing venvstacks: Layered Python Virtual Environments
An open source utility for packaging Python applications and all their dependencies into a portable, deterministic format based on Python's `sitecustomize.py`.
Date not specified
[Medium · Capability] LM Studio 0.3.6: New Tool Calling API, Vision Models, and Updated Installer
Tool Calling API in beta, new installer / updater system, and support for `Qwen2VL` and `QVQ` (both GGUF and MLX)
Date not specified
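The Tool Calling API beta uses the OpenAI-compatible `tools` request field. Below is a request-body sketch only; `get_weather` and its parameters are hypothetical names invented for illustration, not part of LM Studio.

```python
"""Sketch: a tool-calling request body for /v1/chat/completions.
If the model decides to call the tool, the response's first choice
carries `message.tool_calls` with the function name and JSON arguments."""
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "qwen2.5-7b-instruct",  # placeholder model key
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
}
```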
[Medium · Capability] LM Studio 0.3.7: DeepSeek R1 and KV Cache Quantization
DeepSeek R1 support and KV Cache quantization for llama.cpp models
Date not specified
[Medium · Capability] LM Studio 0.3.8 Release - DeepSeek R1 UI, LaTeX, Bug Fixes
Thinking UI for DeepSeek R1, LaTeX rendering improvements, and bug fixes
Date not specified
[Low · Capability] DeepSeek R1: Open Source Reasoning Model Released
Run DeepSeek R1 models locally and offline on your computer
Date not specified
[High · Capability] LM Studio 0.3.9 Release: Idle TTL, HF Repo Support, Reasoning Content
Idle TTL, auto-update for runtimes, support for nested folders in HF repos, and separate `reasoning_content` in chat completion responses
Date not specified
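With 0.3.9, reasoning models can return their chain of thought in a separate `reasoning_content` field rather than inline `<think>` tags. The response dict below is a hand-written stand-in for a real `/v1/chat/completions` reply, used only to show where the field sits.

```python
"""Sketch: reading the separate `reasoning_content` field from a
chat completion response (mock data, not a live server reply)."""
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "reasoning_content": "The user asked for 2 + 2; that is 4.",
                "content": "4",
            }
        }
    ]
}

message = response["choices"][0]["message"]
# Chain-of-thought and final answer arrive in separate fields, so a UI
# can fold the reasoning away without string-parsing <think> tags.
reasoning = message.get("reasoning_content", "")
answer = message["content"]
print(answer)  # -> 4
```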
[Medium · Capability] LM Studio 0.3.10: Speculative Decoding for Faster Inference
Inference speed-up with speculative decoding for `llama.cpp` and `MLX`
Date not specified
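The idea behind speculative decoding can be shown with a toy sketch (this is an illustration of the general technique, not LM Studio's implementation): a cheap draft model proposes several tokens, the target model verifies them, and every drafted token the target agrees with is accepted without a separate generation step. Both "models" below are hard-coded stand-ins.

```python
"""Toy speculative decoding: draft proposes k tokens, target verifies,
the agreeing prefix is accepted in one step. Real implementations verify
the whole draft in a single batched forward pass."""


def draft_model(prefix, k=4):
    # Hypothetical fast model: guesses the next k tokens from the last one.
    guesses = {"the": ["cat", "sat", "on", "a"], "cat": ["sat", "on", "the", "mat"]}
    return guesses.get(prefix[-1], ["<eos>"] * k)


def target_model(prefix):
    # Hypothetical slow-but-accurate model: one token at a time.
    truth = ["the", "cat", "sat", "on", "the", "mat"]
    return truth[len(prefix)] if len(prefix) < len(truth) else "<eos>"


def speculative_step(prefix):
    """Accept drafted tokens while the target agrees; on the first
    disagreement, take the target's token instead and stop."""
    accepted = []
    for tok in draft_model(prefix):
        expected = target_model(prefix + accepted)
        if tok == expected:
            accepted.append(tok)       # draft was right: "free" token
        else:
            accepted.append(expected)  # draft was wrong: correct and stop
            break
    return accepted


print(speculative_step(["the"]))  # -> ['cat', 'sat', 'on', 'the']
```

One target-model "pass" here yields four tokens instead of one, which is where the speed-up comes from when draft and target usually agree.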
[High · Capability] LM Studio 0.3.11 Release: SDK, Speculative Decoding, Bug Fixes
Support for LM Studio SDK (Python, TS/JS), advanced Speculative Decoding settings, and bug fixes
Date not specified
[Info · Capability] LM Studio releases lmstudio-python and lmstudio-js SDKs
Developer SDKs for Python and TypeScript are now available in a 1.0.0 release. A programmable toolkit for local AI software.
Date not specified
[High · Capability] LM Studio 0.3.12 Release — RAG performance and bug fixes
Bug fixes and document chunking speed improvements for RAG
Date not specified
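Document chunking, the step 0.3.12 speeds up, splits source text into overlapping windows before embedding and retrieval. The chunker below is a generic toy of that kind, not LM Studio's code; sizes are arbitrary.

```python
"""Toy RAG chunker: fixed-size character windows with overlap, so text
cut at a boundary still appears intact in the neighboring chunk."""


def chunk(text, size=40, overlap=10):
    step = size - overlap
    return [text[i : i + size] for i in range(0, max(len(text) - overlap, 1), step)]


doc = "LM Studio 0.3.12 speeds up document chunking for retrieval."
pieces = chunk(doc)
# Consecutive chunks share `overlap` characters.
assert pieces[0][-10:] == pieces[1][:10]
```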
[Low · Capability] LM Studio 0.3.13: Google Gemma 3 Support Added
LM Studio 0.3.13 supports Google's latest multi-modal model, Gemma 3. Run it locally on your Mac, Windows, or Linux machine.
Date not specified
[Info · Capability] LM Studio 0.3.14: Multi-GPU Controls Released
Advanced controls for multi-GPU setups: enable/disable specific GPUs, choose allocation strategy, limit model weight to dedicated GPU memory, and more.
Date not specified