Probabilistic argumentation combines Dung's abstract argumentation framework with probability theory to model uncertainty in argumentation. In this setting, we address the fundamental problem of computing the probability that a set of arguments is an extension under a given semantics over structured probabilistic argumentation frameworks. We focus on the most popular semantics (i.e., admissible, stable, complete, grounded, and preferred), for which the problem of computing extension probabilities over structured probabilistic argumentation frameworks was shown to be FP^#P-complete. Our aim is to experimentally establish when, due to the complexity of the problem and the size of the structured probabilistic argumentation framework, estimating the extension probabilities is preferable to computing them exactly (i.e., when the exact computation cannot be carried out in reasonable time). To this end, we devise two algorithms: a naive one, which computes the extension probabilities exactly, and a Monte-Carlo simulation one, which estimates them, and we evaluate both algorithms over two datasets to compare their efficiency.
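To make the Monte-Carlo estimation idea concrete, the sketch below estimates the probability that a set of arguments is a stable extension. It is a minimal illustration, not the paper's algorithm: it assumes a constellations-style model in which each argument appears independently with a given probability (the paper's structured frameworks may induce a different distribution), and all function names (`is_stable`, `estimate_stable_probability`) are hypothetical.

```python
import random

def is_stable(args, attacks, S):
    """Check whether S is a stable extension of the framework
    (args, attacks): S is conflict-free and attacks every
    argument outside S."""
    S = set(S)
    if not S <= set(args):
        return False
    # conflict-free: no attack between two members of S
    if any(a in S and b in S for (a, b) in attacks):
        return False
    # every argument outside S is attacked by some member of S
    outside = set(args) - S
    attacked = {b for (a, b) in attacks if a in S}
    return outside <= attacked

def estimate_stable_probability(arguments, attacks, prob, S,
                                n_samples=10000, seed=0):
    """Monte-Carlo estimate of the probability that S is a stable
    extension, assuming each argument appears independently with
    probability prob[arg]; an attack survives whenever both of its
    endpoints appear in the sampled subframework."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        present = {a for a in arguments if rng.random() < prob[a]}
        sub_attacks = [(a, b) for (a, b) in attacks
                       if a in present and b in present]
        # S can only be an extension of a subframework containing it
        if set(S) <= present and is_stable(present, sub_attacks, S):
            hits += 1
    return hits / n_samples
```

For example, with arguments `a, b`, the single attack `(a, b)`, and all probabilities equal to 1, the estimator returns 1.0 for the query set `{a}` (it is stable in every sample) and 0.0 for `{a, b}` (it is never conflict-free). Unlike the exact (naive) computation, which enumerates exponentially many subframeworks, the estimate's cost grows only with the number of samples.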