In this paper, we consider the problem of designing a feedback policy for a discrete-time stochastic hybrid system that should be kept operating within some compact set A. To this end, we introduce an infinite-horizon discounted average reward function, where a negative reward is associated with the transitions driving the system outside A and a positive reward with those leading it back into A. The idea is that the stationary policy maximizing this reward function will keep the system within A as long as possible and, if the system happens to exit A, will steer it back to A as soon as possible, insofar as the system dynamics allow. This self-recovery approach is particularly useful in cases where it is not possible to maintain the system within A indefinitely. The performance of the resulting policy is assessed on a benchmark example.
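To fix ideas, one possible instance of such a reward criterion is sketched below; the notation (state $x_k$, stationary policy $\mu$, discount factor $\gamma \in (0,1)$, and weights $c_{\mathrm{out}}, c_{\mathrm{in}} > 0$) is illustrative and not taken from the paper itself.
\[
  J^{\mu}(x_0) \;=\; \mathbb{E}^{\mu}\!\left[\, \sum_{k=0}^{\infty} \gamma^{k}\, r(x_k, x_{k+1}) \right],
  \qquad
  r(x, x') \;=\;
  \begin{cases}
    -\,c_{\mathrm{out}}, & x \in A,\ x' \notin A,\\[2pt]
    \phantom{-}c_{\mathrm{in}}, & x \notin A,\ x' \in A,\\[2pt]
    \phantom{-}0, & \text{otherwise},
  \end{cases}
\]
with the feedback policy chosen as a maximizer $\mu^\star \in \arg\max_{\mu} J^{\mu}(x_0)$. Under this structure, exiting $A$ incurs an immediate penalty, while re-entering $A$ earns a reward, so a maximizing policy is driven both to postpone exits and to hasten recoveries.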