In this article we address a number of important issues that arise in the analysis of nonindependent data. Such data are common in studies in which predictors vary within "units" (e.g., within-subjects, within-classrooms). Most researchers analyze categorical within-unit predictors with repeated-measures ANOVAs, but continuous within-unit predictors with linear mixed-effects models (LMEMs). We show that both types of predictor variables can be analyzed within the LMEM framework. We discuss designs with multiple sources of nonindependence, for example, studies in which the same subjects rate the same set of items or in which students nested in classrooms provide multiple answers. We provide clear guidelines about the types of random effects that should be included in the analysis of such designs. We also present a number of corrective steps that researchers can take when LMEMs with too many parameters fail to converge. We end with a brief discussion of the trade-off between power and generalizability in designs with within-unit predictors.

Translational Abstract

Researchers and practitioners sometimes want to analyze data that are "nonindependent." Data are said to be nonindependent when the study is designed such that certain data points can be expected to be, on average, more similar to each other than other data points. This is usually the case when each subject provides multiple data points (so-called within-subject designs), when subjects belonging to higher-order units influence each other (e.g., students clustered in classrooms, employees clustered in teams), or when subjects react to or evaluate the same set of items (e.g., pictures, words, sentences, products, artworks, target individuals). In the present article, we propose that all types of nonindependent data can be analyzed with the same statistical technique, called "linear mixed-effects models."
Compared to standard statistical tests belonging to the family of "General Linear Models" (e.g., ANOVA, regression), linear mixed-effects models have a "complex error term": the data analyst must explicitly include all possible reasons why the predictions of the statistical model may be wrong (these possible reasons are called "random effects"). It is not always obvious how to identify all possible sources of error. In this article, we provide clear guidelines on the types of random effects that researchers and practitioners should include when estimating linear mixed-effects models. Failure to include the appropriate random effects leads to an unacceptably high false positive rate (or "Type I error rate"), that is, a high proportion of statistically significant results for effects that do not exist in reality.
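The cost of ignoring nonindependence can be made concrete with a small simulation (a hypothetical sketch, not taken from the article; all function names and parameter values below are illustrative). Each subject contributes several observations that share a subject-specific baseline (a random intercept), and a subject-level predictor has no true effect. A naive regression test that treats every observation as independent, i.e., that omits the subject random effect, rejects the null hypothesis far more often than the nominal 5%:

```python
import math
import random


def naive_slope_t(x, y):
    """OLS slope t statistic, treating all observations as independent."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    resid = [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r ** 2 for r in resid) / (n - 2) / sxx)
    return b / se


def false_positive_rate(n_subjects=30, n_obs=20, subj_sd=1.0,
                        n_sims=1000, seed=1):
    """Share of simulations in which |t| > 1.96 although the true effect is 0.

    subj_sd is the SD of the subject random intercept; subj_sd=0 removes
    the nonindependence and should recover the nominal ~5% rate.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        x, y = [], []
        for _ in range(n_subjects):
            xs = rng.gauss(0, 1)      # subject-level predictor, no true effect
            u = rng.gauss(0, subj_sd)  # subject's shared baseline (random intercept)
            for _ in range(n_obs):
                x.append(xs)
                y.append(u + rng.gauss(0, 1))  # residual noise
        if abs(naive_slope_t(x, y)) > 1.96:
            hits += 1
    return hits / n_sims


if __name__ == "__main__":
    # With clustering ignored, the false positive rate is far above .05;
    # with no clustering (subj_sd=0), it stays near the nominal .05.
    print("clustered:  ", false_positive_rate(subj_sd=1.0))
    print("independent:", false_positive_rate(subj_sd=0.0))
```

The point of the sketch is the contrast between the two calls: the data-generating process is identical except for the shared subject baseline, yet the naive test's error rate is dramatically inflated in the clustered case. A mixed-effects model with a by-subject random intercept is the standard remedy.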