
AI Builds 3D Characters from a Single Photo

Jan 9, 2026

New framework animates 3D models from flat images while keeping natural proportions.
Overview of the task. Given a single target image and an initial 3D Gaussian, DeformSplat deforms the Gaussian to match the target image while preserving geometry. The motion is represented by varying the transparency of the images over time (source: Proceedings of the SIGGRAPH Asia 2025 Conference Papers, 2025. DOI: 10.1145/3757377.3763937).


A research team from the Ulsan National Institute of Science and Technology (UNIST) has introduced an artificial-intelligence technique that turns a single two-dimensional image into a fully animated three-dimensional character with preserved proportions, a major step in image-based 3D reconstruction and animation, Tech Xplore reports. The work, published in the Proceedings of the SIGGRAPH Asia 2025 Conference Papers, tackles long-standing challenges in creating realistic 3D content from limited visual data, a task critical to fields such as gaming, animation, and virtual production.

Traditional methods for generating 3D characters require multiple images or extensive motion data to avoid distortions when animating a model. Without such inputs, generated 3D figures often suffer from unnatural stretching or skewing when posed from different viewpoints. The new framework, called DeformSplat, addresses this by combining a Gaussian-based representation with matching and segmentation techniques. It begins with a 3D Gaussian model of a character and deforms it to match the single input image. At the same time, it segments rigid parts of the model, such as the limbs and torso, ensuring that these segments move naturally and maintain their shape during animation.
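To make the deformation step concrete, the toy sketch below (not the authors' implementation; the orthographic projection, least-squares objective, and all names are illustrative assumptions) nudges 3D Gaussian centers by gradient descent until their 2D projections land on target image positions, while leaving depth untouched:

```python
import numpy as np

def project(points_3d):
    """Orthographic projection onto the image plane (drop the z axis)."""
    return points_3d[:, :2]

def deform_to_target(points_3d, target_2d, lr=0.5, steps=200):
    """Gradient descent on 0.5 * ||project(p) - target||^2 per Gaussian center.

    Only the x/y coordinates receive gradients here, so depth is preserved --
    a stand-in for the regularization a real method would need.
    """
    p = points_3d.copy()
    for _ in range(steps):
        residual = project(p) - target_2d   # (N, 2) pixel-space error
        p[:, :2] -= lr * residual           # gradient of the squared error
    return p

# Three hypothetical Gaussian centers and where their projections should land.
src = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 2.0], [2.0, 0.5, 3.0]])
tgt = np.array([[0.2, 0.1], [1.1, 0.9], [1.8, 0.6]])
deformed = deform_to_target(src, tgt)
```

After optimization, `project(deformed)` matches the targets while the z coordinates are unchanged; the actual method operates on full Gaussian parameters and photometric losses, which this toy omits.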

DeformSplat’s innovation lies in two key components: Gaussian-to-Pixel matching and rigid part segmentation. The first links the latent 3D structure to the 2D image pixels, allowing pose and shape information to map directly onto the 3D model. The second identifies rigid regions that should not distort, preserving anatomical fidelity as the character moves. The result is an animation that accurately reflects the pose and proportions shown in the original photo, whether viewed from the front, side, or back.
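The shape-preserving effect of rigid part segmentation can be illustrated with a minimal sketch (assumed labels and motions, not DeformSplat's code): once Gaussians are grouped into parts, each part moves by one shared rotation and translation, so distances within a part, and hence its shape, cannot stretch or skew:

```python
import numpy as np

def rigid_transform(points, angle_z, translation):
    """Rotate about the z axis and translate: a shape-preserving motion."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T + translation

def animate_parts(points, part_labels, motions):
    """Apply each part's (angle, translation) to its own Gaussians only."""
    out = points.copy()
    for part, (angle, t) in motions.items():
        mask = part_labels == part
        out[mask] = rigid_transform(points[mask], angle, np.asarray(t))
    return out

# Two hypothetical parts: a torso (label 0) and an arm (label 1) that swings.
pts = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0],    # torso
                [1.0, 1.0, 0.0], [2.0, 1.0, 0.0]])   # arm
labels = np.array([0, 0, 1, 1])
posed = animate_parts(pts, labels, {1: (np.pi / 4, [0.0, 0.0, 0.0])})
```

Swinging the arm by 45 degrees leaves its length exactly 1 and the torso untouched; a per-point deformation without the rigidity constraint would offer no such guarantee.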

This advancement simplifies a traditionally laborious content creation workflow and lowers barriers to entry for creators without extensive image capture setups or large image datasets. It offers a pathway toward more accessible 3D character animation that still respects structural integrity, and it points to broader applications of AI in converting everyday 2D visuals into dynamic 3D content.