
Tinker: AI Tuning, Without the Pain

Oct 8, 2025

Thinking Machines Lab is making frontier model fine-tuning accessible.
Mira Murati, founder of Thinking Machines Lab (source: WIRED Staff; Kimberly White; Getty Images).

The recently founded startup Thinking Machines Lab has launched its first product, Tinker, designed to simplify and democratize the process of fine-tuning large AI models. The tool aims to strip away much of the infrastructure overhead (GPU clusters, distributed training orchestration, and failure recovery) and let users focus on what matters: data, algorithms, and experimentation, Wired reports.

Tinker supports both supervised learning and reinforcement learning (RL), giving researchers flexibility in how they guide the training process. It lets users fine-tune open-weight models (such as Meta’s Llama or Alibaba’s Qwen) by writing just a few lines of code. After tuning, users can download and run their customized models independently.

On the back end, Tinker handles resource scheduling, failure recovery, and distributed execution. It uses techniques such as LoRA (low-rank adaptation) so that compute is shared efficiently across training jobs, helping to reduce cost and accelerate experimentation.
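The article doesn't explain how LoRA saves compute, so here is a minimal NumPy sketch of the idea (illustrative only; this is not Tinker's API, and the dimensions and hyperparameters are invented for the example). LoRA freezes the pretrained weight matrix W and learns only a small low-rank update B·A, which is why many adapters can cheaply share the same base model:

```python
import numpy as np

# Minimal LoRA (low-rank adaptation) sketch -- illustrative only.
# Instead of updating a frozen d_out x d_in weight matrix W directly,
# LoRA trains a low-rank update B @ A with rank r << min(d_in, d_out).

rng = np.random.default_rng(0)
d_in, d_out, r = 1024, 1024, 8          # hypothetical layer sizes and rank

W = rng.standard_normal((d_out, d_in))  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

alpha = 16  # scaling hyperparameter; effective update is (alpha / r) * B @ A

def adapted_forward(x):
    """Forward pass: frozen base weight plus the low-rank LoRA update."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

# Because B starts at zero, the adapter is a no-op at initialization:
x = rng.standard_normal((2, d_in))
assert np.allclose(adapted_forward(x), x @ W.T)

# Only r * (d_in + d_out) parameters are trained, versus d_in * d_out
# for full fine-tuning of this layer:
full_params = d_in * d_out              # 1,048,576
lora_params = r * (d_in + d_out)        # 16,384 (about 1.6% of full)
print(f"trainable: {lora_params:,} vs {full_params:,}")
```

The zero-initialized B matrix means training starts exactly at the base model's behavior, and the tiny trainable footprint is what lets a service batch many users' adapters against one shared set of frozen weights.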

Early users include academic labs and safety research groups. Some report that Tinker let them get results in theorem proving, chemical reasoning, and tool-using agents more quickly, and with less pain, than before.

Tinker is currently in private beta, with free access for approved users. Usage-based pricing is expected later.

At a time when many powerful AI models remain closed or opaque, Tinker signals a push toward more open, researcher-driven customization. By lowering the barrier to entry, Thinking Machines hopes to broaden who can experiment, iterate, and innovate with frontier AI systems.