Random Forest (RF) is well known as an efficient ensemble learning method with strong predictive performance. However, it is often regarded as a "black box" due to its reliance on hundreds of deep decision trees. This lack of interpretability can be a significant drawback for the adoption of RF models in many real-world applications, especially those affecting individuals' lives. In this work, we present Forest-ORE, a method that makes RF interpretable via an optimized rule ensemble (ORE) for local and global interpretation. Unlike other rule-based approaches aimed at interpreting the RF model, this method simultaneously considers several parameters that influence the choice of an interpretable rule ensemble. Existing methods often prioritize predictive performance over interpretability coverage and do not account for overlaps or interactions between rules. Forest-ORE uses a mixed-integer optimization program to build an ORE that balances predictive performance, interpretability coverage, and model size (ensemble size, rule length, and rule overlap). In addition to producing an ORE competitive with RF in predictive performance, the method enriches the ORE with complementary rules that provide additional information. The framework is illustrated through an example, and its robustness is evaluated across 36 benchmark datasets. A comparative analysis with well-known methods shows that Forest-ORE achieves an excellent trade-off between predictive performance, interpretability coverage, and model size.
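To make the rule-selection trade-off concrete, the following is a minimal sketch of a mixed-integer program for selecting rules from an RF-derived candidate pool. It is not the paper's actual formulation: the per-rule statistics, the trade-off weights `alpha` and `beta`, the size cap `max_rules`, and the omission of overlap and interaction terms are all illustrative assumptions, and the solver choice (PuLP with its default CBC backend) is likewise ours.

```python
# Illustrative sketch only: a simplified rule-selection MIP in the spirit of
# Forest-ORE's trade-off. The paper's real formulation also models
# interpretability coverage and rule overlap, which are omitted here.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

# Hypothetical candidate rules extracted from an RF, with per-rule statistics.
rules = [
    {"accuracy": 0.92, "coverage": 0.30, "length": 3},
    {"accuracy": 0.88, "coverage": 0.45, "length": 2},
    {"accuracy": 0.95, "coverage": 0.15, "length": 5},
    {"accuracy": 0.81, "coverage": 0.60, "length": 2},
]
alpha, beta = 1.0, 0.1   # assumed trade-off weights (not from the paper)
max_rules = 3            # assumed cap on ensemble size

prob = LpProblem("rule_selection", LpMaximize)
# One binary decision variable per candidate rule: selected or not.
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(rules))]

# Objective: reward accuracy-weighted coverage, penalize rule length.
prob += lpSum(
    x[i] * (alpha * r["accuracy"] * r["coverage"] - beta * r["length"])
    for i, r in enumerate(rules)
)
prob += lpSum(x) <= max_rules  # model-size constraint

prob.solve()
selected = [i for i in range(len(rules)) if x[i].value() == 1]
print("selected rules:", selected)
```

Framing the selection as a single optimization, rather than greedily ranking rules, is what lets the objective weigh predictive quality against ensemble size in one step; the full method extends this idea with coverage and overlap terms.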