TY - GEN
T1 - DreamCraft: Text-Guided Generation of Functional 3D Environments in Minecraft
T2 - 19th International Conference on the Foundations of Digital Games, FDG 2024
AU - Earle, Sam
AU - Kokkinos, Filippos
AU - Nie, Yuhe
AU - Togelius, Julian
AU - Raileanu, Roberta
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/5/21
Y1 - 2024/5/21
N2 - Procedural Content Generation (PCG) algorithms enable the automatic generation of complex and diverse artifacts. However, they do not provide high-level control over the generated content and typically require domain expertise. In contrast, text-to-3D methods allow users to specify desired characteristics in natural language, offering a high degree of flexibility and expressivity. But unlike PCG, such approaches cannot guarantee functionality, which is crucial for certain applications like game design. In this paper, we present a method for generating functional 3D artifacts from free-form text prompts in the open-world game Minecraft. Our method, DreamCraft, trains quantized Neural Radiance Fields (NeRFs) to represent artifacts that, when viewed in-game, match given text descriptions. We find that DreamCraft produces more aligned in-game artifacts than a baseline that post-processes the output of an unconstrained NeRF. Thanks to the quantized representation of the environment, functional constraints can be integrated using specialized loss terms. We show how this can be leveraged to generate 3D structures that match a target distribution or obey certain adjacency rules over the block types. DreamCraft inherits a high degree of expressivity and controllability from the NeRF, while still being able to incorporate functional constraints through domain-specific objectives.
AB - Procedural Content Generation (PCG) algorithms enable the automatic generation of complex and diverse artifacts. However, they do not provide high-level control over the generated content and typically require domain expertise. In contrast, text-to-3D methods allow users to specify desired characteristics in natural language, offering a high degree of flexibility and expressivity. But unlike PCG, such approaches cannot guarantee functionality, which is crucial for certain applications like game design. In this paper, we present a method for generating functional 3D artifacts from free-form text prompts in the open-world game Minecraft. Our method, DreamCraft, trains quantized Neural Radiance Fields (NeRFs) to represent artifacts that, when viewed in-game, match given text descriptions. We find that DreamCraft produces more aligned in-game artifacts than a baseline that post-processes the output of an unconstrained NeRF. Thanks to the quantized representation of the environment, functional constraints can be integrated using specialized loss terms. We show how this can be leveraged to generate 3D structures that match a target distribution or obey certain adjacency rules over the block types. DreamCraft inherits a high degree of expressivity and controllability from the NeRF, while still being able to incorporate functional constraints through domain-specific objectives.
KW - Minecraft
KW - Neural Radiance Fields
KW - Procedural Content Generation
UR - http://www.scopus.com/inward/record.url?scp=85197592083&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85197592083&partnerID=8YFLogxK
U2 - 10.1145/3649921.3649943
DO - 10.1145/3649921.3649943
M3 - Conference contribution
AN - SCOPUS:85197592083
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the 19th International Conference on the Foundations of Digital Games, FDG 2024
A2 - Smith, Gillian
A2 - Whitehead, Jim
A2 - Samuel, Ben
A2 - Spiel, Katta
A2 - van Rozen, Riemer
PB - Association for Computing Machinery
Y2 - 21 May 2024 through 24 May 2024
ER -