Abstract
Procedural Content Generation via Machine Learning (PCGML) has enhanced game content creation, yet challenges in controllability and limited training data persist. This study addresses these issues by distilling a constructive PCG algorithm into a controllable PCGML model. We first generate a large amount of content with a constructive algorithm and label it using a Large Language Model (LLM). We then use these synthetic labels to condition two PCGML models for content-specific generation: the Five-Dollar Model and the Discrete Diffusion Model. This neural network distillation process ensures that the generated content aligns with the original algorithm while introducing controllability through plain text. We define this text-conditioned PCGML as a Text-to-game-Map (T2M) task, offering an alternative to prevalent text-to-image multimodal tasks. We compare our distilled models with the baseline constructive algorithm. Our analysis of the variety, accuracy, and quality of the generated content demonstrates the efficacy of distilling constructive methods into controllable, text-conditioned PCGML models.
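To make the described pipeline concrete, the following is a minimal sketch of the label-and-distill loop summarized in the abstract, assuming hypothetical helpers `constructive_gen` and `llm_label` and a generic text-conditioned `model` interface; none of these names come from the paper itself.

```python
# Illustrative sketch of the distillation pipeline outlined in the abstract.
# All function, parameter, and method names are placeholders, not the
# authors' actual code or API.

from typing import Callable, List, Tuple


def build_synthetic_dataset(
    n_samples: int,
    constructive_gen: Callable[[], list],  # constructive algorithm -> tile map
    llm_label: Callable[[list], str],      # LLM -> plain-text caption for a map
) -> List[Tuple[str, list]]:
    """Generate maps with a constructive algorithm and caption them with an LLM."""
    dataset = []
    for _ in range(n_samples):
        game_map = constructive_gen()
        caption = llm_label(game_map)
        dataset.append((caption, game_map))
    return dataset


def distill(dataset: List[Tuple[str, list]], model):
    """Fit a text-conditioned PCGML model (e.g. the Five-Dollar Model or a
    discrete diffusion model) on (caption, map) pairs."""
    for caption, game_map in dataset:
        model.train_step(text=caption, target=game_map)
    return model
```

At inference time, the distilled model maps a free-text prompt to a game map (the T2M task), so the constructive algorithm's behavior is retained while control is exposed through plain text.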
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 14344-14351 |
| Number of pages | 8 |
| Journal | Proceedings of the AAAI Conference on Artificial Intelligence |
| Volume | 39 |
| Issue number | 13 |
| DOIs | |
| State | Published - Apr 11 2025 |
| Event | 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States. Duration: Feb 25 2025 → Mar 4 2025 |
Keywords
- Multimodal Machine Learning
- Procedural Content Generation
- Synthesis Generation
- Text-to-game-Map Generation
ASJC Scopus subject areas
- Artificial Intelligence