We propose Markov random fields (MRFs) as a probabilistic mathematical model for incorporating the internal states of other agents, both human and robotic, into robot decision making. Using Theory of Mind (ToM) estimates of these agents' mental states, a robot can account for them through statistical inference, balancing its own goals and internal objectives against those of collaborating agents. The MRF model is well-suited to domains in which the joint probability over latent (action) and observed (perceived) variables factors into pairwise interactions between these variables. Specifically, these interactions occur through functions that evaluate "local evidence" between an observed and a latent variable and "compatibility" between a pair of latent variables. We will describe how experimental findings from the ToM literature can be explained using MRF models, and then show how this framework can be applied to a social robotics task. We will also describe how belief propagation on a multi-robot MRF can serve as a novel approach to multi-robot coordination, with parallels to human collaboration strategies.
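For reference, a pairwise MRF of the kind described above is conventionally written with the following factorization; the notation here (latent action variables $x_i$, observations $y_i$, local-evidence functions $\phi_i$, compatibility functions $\psi_{ij}$, edge set $\mathcal{E}$, and partition function $Z$) is illustrative rather than taken from the text:
\[
p(\mathbf{x}, \mathbf{y}) \;=\; \frac{1}{Z} \prod_{i} \phi_i(x_i, y_i) \prod_{(i,j) \in \mathcal{E}} \psi_{ij}(x_i, x_j),
\]
where $\phi_i(x_i, y_i)$ scores how well observation $y_i$ supports latent state $x_i$, and $\psi_{ij}(x_i, x_j)$ scores how compatible neighboring latent states are with one another. Under the same illustrative notation, the standard sum-product belief propagation update used for inference on such a model passes messages of the form
\[
m_{i \to j}(x_j) \;\propto\; \sum_{x_i} \phi_i(x_i, y_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in \mathcal{N}(i) \setminus \{j\}} m_{k \to i}(x_i),
\qquad
b_i(x_i) \;\propto\; \phi_i(x_i, y_i) \prod_{k \in \mathcal{N}(i)} m_{k \to i}(x_i),
\]
so each node's belief $b_i$ combines its own local evidence with messages from its neighbors $\mathcal{N}(i)$; this is the generic mechanism, not a specification of the particular multi-robot scheme developed later.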