There is growing interest in leveraging information about users' emotions as a means of personalizing the responses of computer systems. This is particularly useful for computer-aided learning, health, and entertainment systems. However, few architectures, frameworks, libraries, or software tools allow developers to easily integrate emotion recognition into their software projects. The work reported in this paper addresses this shortcoming by proposing the use of software design patterns to model a multimodal emotion recognition framework. The framework is designed to: (1) integrate existing sensing devices and SDK platforms, (2) include diverse inference algorithms, and (3) correlate measurements from diverse sources. We describe our experience using this model and its impact on aspects such as creating a common language among stakeholders, supporting incremental development, and adapting to a highly shifting development team, as well as the qualities achieved and the trade-offs made.