Autotelic reinforcement learning (RL) agents sample their own goals and try to reach them. They often prioritize goal sampling according to some intrinsic reward, e.g., novelty or absolute learning progress (ALP). Novelty-based approaches work robustly in unsupervised image-based environments when there are no distractors. However, they construct simple curricula that do not take the agent's performance into account: in complex environments, they often get attracted to impossible tasks. ALP-based approaches, which are often combined with a clustering mechanism, construct complex curricula tuned to the agent's current capabilities. Such curricula sample goals on which the agent is currently learning the most and do not get attracted to impossible tasks. However, ALP approaches have so far not been applied to deep RL agents perceiving complex environments directly in image space. The Goal Regions Guided Intrinsically Motivated Goal Exploration Process (GRIMGEP) combines, without using any expert knowledge, ALP-based clustering approaches with novelty-based approaches and extends them to these complex scenarios. We experiment on a rich 3D image-based environment with distractors using two novelty-based exploration approaches: Skewfit and CountBased. We show that wrapping them with GRIMGEP, i.e., using them only inside the cluster sampled by ALP, creates a better curriculum. The wrapped approaches are less attracted to the distractors and achieve drastically better performance.
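The sketch below illustrates, under our own simplifying assumptions, the wrapping idea described above: goals are grouped into clusters, each cluster keeps an ALP estimate, a cluster is sampled proportionally to its ALP, and the wrapped novelty-based sampler is then only allowed to propose goals inside that cluster. Names such as `novelty_sampler` and `sample_goal` are hypothetical and do not reflect the paper's actual implementation or API.

```python
import numpy as np


class GrimgepLikeWrapper:
    """Minimal sketch: ALP-based cluster selection around a novelty-based goal sampler."""

    def __init__(self, novelty_sampler, n_clusters, rng=None):
        self.novelty_sampler = novelty_sampler       # wrapped novelty-based sampler (e.g. Skewfit, CountBased)
        self.n_clusters = n_clusters
        self.alp = np.zeros(n_clusters)              # per-cluster absolute learning progress estimates
        self.last_competence = np.zeros(n_clusters)  # previous competence measurement per cluster
        self.rng = rng if rng is not None else np.random.default_rng()

    def sample_goal(self, candidate_goals, cluster_ids):
        """Pick a cluster proportionally to ALP, then let the novelty sampler choose within it."""
        probs = self.alp + 1e-8                      # uniform fallback when no ALP signal yet
        probs = probs / probs.sum()
        cluster = self.rng.choice(self.n_clusters, p=probs)
        in_cluster = [g for g, c in zip(candidate_goals, cluster_ids) if c == cluster]
        if not in_cluster:                           # empty cluster: fall back to all candidate goals
            in_cluster = candidate_goals
        return cluster, self.novelty_sampler.sample(in_cluster)

    def update(self, cluster, competence):
        """Update a cluster's ALP as the absolute change in measured competence."""
        self.alp[cluster] = abs(competence - self.last_competence[cluster])
        self.last_competence[cluster] = competence
```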