Using Dream Fields (described below), it is possible to train a view-consistent NeRF (neural radiance field) for an object without any images. The key observation is that a rendered object (e.g., a table) should still look like a table, and therefore score well against its caption under CLIP, regardless of the viewpoint from which it is rendered. Dream Fields exploits this: render a randomly initialized NeRF from a random viewpoint, score the rendered image against a text prompt with CLIP, update the NeRF parameters to improve that score, and repeat until convergence.
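To make the loop concrete, here is a minimal, illustrative sketch in PyTorch. It is not the authors' implementation: the caption "a wooden table", the tiny MLP field, the toy orthographic renderer, the rotation-about-one-axis "camera", and all hyperparameters are stand-ins chosen for brevity, and the geometric priors from the paper (scene bounds, augmentations, transmittance regularization) are omitted here. Only the overall structure, render a random view, score it with a pre-trained CLIP model, backpropagate into the field, mirrors the method described above.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen CLIP model, used only to score rendered images against the caption.
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()
for p in clip_model.parameters():
    p.requires_grad_(False)

with torch.no_grad():
    tokens = clip.tokenize(["a wooden table"]).to(device)  # illustrative prompt
    text_feat = F.normalize(clip_model.encode_text(tokens).float(), dim=-1)

# Randomly initialized field: maps a 3D point to (density, R, G, B).
field = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),
).to(device)
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

def random_rotation():
    """Random rotation about the vertical axis (a stand-in for random camera poses)."""
    theta = torch.rand(1, device=device) * 2 * math.pi
    c, s = torch.cos(theta), torch.sin(theta)
    zero = torch.zeros_like(c)
    return torch.stack([
        torch.cat([c, zero, s]),
        torch.tensor([0.0, 1.0, 0.0], device=device),
        torch.cat([-s, zero, c]),
    ])

def render(field, rotation, res=64, n_samples=64):
    """Toy orthographic volume rendering of the field from a rotated viewpoint."""
    xs = torch.linspace(-1, 1, res, device=device)
    zs = torch.linspace(-1, 1, n_samples, device=device)
    x, y, z = torch.meshgrid(xs, xs, zs, indexing="ij")
    pts = torch.stack([x, y, z], dim=-1).reshape(-1, 3) @ rotation.T
    out = field(pts).view(res, res, n_samples, 4)
    sigma = F.softplus(out[..., 0])            # non-negative volume density
    rgb = torch.sigmoid(out[..., 1:])          # colors in [0, 1]
    alpha = 1.0 - torch.exp(-sigma * (2.0 / n_samples))
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = alpha * trans                    # contribution of each sample to its ray
    image = (weights[..., None] * rgb).sum(dim=-2)
    image = image + (1.0 - weights.sum(dim=-1, keepdim=True))  # white background
    return image.permute(2, 0, 1)              # (3, res, res)

# CLIP's standard image normalization constants.
clip_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(3, 1, 1)
clip_std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(3, 1, 1)

for step in range(10_000):
    image = render(field, random_rotation())                       # random viewpoint
    image = F.interpolate(image[None], size=224, mode="bilinear", align_corners=False)
    image = (image - clip_mean) / clip_std
    img_feat = F.normalize(clip_model.encode_image(image).float(), dim=-1)
    loss = -(img_feat * text_feat).sum()                           # maximize CLIP similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because only the field's parameters are optimized, the pre-trained CLIP model acts purely as a fixed critic: every gradient step nudges the 3D representation so that its renderings, from whichever viewpoint is sampled, better match the caption.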
Abstract: We combine neural rendering with multi-modal image and text representations to synthesize diverse 3D objects solely from natural language descriptions. Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision. Due to the scarcity of diverse, captioned 3D data, prior methods only generate objects from a handful of categories, such as ShapeNet. Instead, we guide generation with image-text models pre-trained on large datasets of captioned images from the web. Our method optimizes a Neural Radiance Field from many camera views so that rendered images score highly with a target caption according to a pre-trained CLIP model. To improve fidelity and visual quality, we introduce simple geometric priors, including sparsity-inducing transmittance regularization, scene bounds, and new MLP architectures. In experiments, Dream Fields produce realistic, multi-view consistent object geometry and color from a variety of natural language captions.
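Of the geometric priors mentioned in the abstract, the transmittance regularizer is easy to illustrate. The sketch below is a simplified, assumed form of that idea (the paper additionally anneals the target fraction during training, and the default value here is illustrative): it rewards rays that pass through the scene unobstructed, up to a target fraction, which pushes the optimizer to leave most of the volume empty rather than filling it with semi-transparent "fog" that happens to please CLIP.

```python
import torch

def transmittance_loss(sigmas, deltas, tau=0.88):
    """Simplified sparsity-inducing transmittance penalty.

    sigmas: (num_rays, num_samples) non-negative volume densities along each ray.
    deltas: (num_rays, num_samples) distances between adjacent samples.
    tau:    target fraction of light allowed to pass through the scene (illustrative value).
    """
    # Transmittance of each ray through the whole scene: close to 1 means the ray
    # travels through empty space, close to 0 means it is absorbed by geometry.
    T = torch.exp(-(sigmas * deltas).sum(dim=-1))
    # Reward average transmittance only up to `tau`, so geometry is sparsified
    # without being erased entirely. Minimizing this term raises mean transmittance.
    return -torch.clamp(T.mean(), max=tau)
```

In the full method, a term of this kind is added to the negative CLIP score being minimized, so the field is pressured to explain the caption with compact, opaque geometry instead of diffuse density.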