Robot navigation systems struggle with the relative localization of the robot and its object goals in dynamic three-dimensional (3D) environments. In particular, most object detection algorithms adopted in navigation suffer from high resource consumption and low computation rates. Hence, this paper proposes a lightweight, PyTorch-based, monocular-vision 3D-aware object goal navigation system for nursing robots, which relies on a novel pose-adaptive algorithm for inverse perspective mapping (IPM) to recover the 3D information of an indoor scene from a monocular image. First, the system detects objects and combines their locations with the bird's-eye-view (BEV) information from the improved IPM to estimate each object's orientation, distance, and dynamic collision risk. Second, the 3D-aware object goal navigation network adopts an improved spatial pyramid pooling strategy that introduces an average-pooling branch and a max-pooling branch, better integrating local and global features and thus improving detection accuracy. Finally, the proposed pose-adaptive IPM algorithm, termed the adaptive IPM algorithm, introduces a novel voting mechanism that adaptively compensates for the monocular camera's pose variations, further improving the accuracy of the recovered depth information. Several experiments demonstrate that the proposed navigation algorithm has lower memory consumption, is computationally efficient, and improves ranging accuracy, thus meeting the requirements for autonomous collision-free navigation.
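
To make the pooling strategy concrete, the following PyTorch sketch shows one plausible form of a spatial pyramid pooling block with parallel max- and average-pooling branches. The module name `DualBranchSPP`, the kernel sizes, and the channel counts are illustrative assumptions, not the paper's released implementation.

```python
# A minimal sketch (not the authors' code) of an SPP block with parallel
# max- and average-pooling branches; all names and sizes are illustrative.
import torch
import torch.nn as nn

class DualBranchSPP(nn.Module):
    """SPP block that pools each scale with both max- and average-pooling,
    then fuses the pooled (local) branches with the identity (global) path."""

    def __init__(self, channels: int, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.max_pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in pool_sizes
        )
        self.avg_pools = nn.ModuleList(
            nn.AvgPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in pool_sizes
        )
        # 1x1 convolution to fuse the identity branch and all pooled branches
        fused = channels * (1 + 2 * len(pool_sizes))
        self.fuse = nn.Conv2d(fused, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        branches = [x]                                    # identity path
        branches += [pool(x) for pool in self.max_pools]  # salient local features
        branches += [pool(x) for pool in self.avg_pools]  # smoothed context
        return self.fuse(torch.cat(branches, dim=1))

# Example: a 256-channel feature map keeps its shape through the block.
feats = torch.randn(1, 256, 32, 32)
print(DualBranchSPP(256)(feats).shape)  # torch.Size([1, 256, 32, 32])
```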
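
Similarly, a minimal flat-ground IPM back-projection might look as follows, with a single pitch rotation standing in for the pose compensation that the adaptive IPM performs; the voting mechanism itself is not reproduced here, and the function `ipm_ground_point` and all parameter values are illustrative assumptions.

```python
# A minimal IPM sketch under a flat-ground assumption: a pixel on the ground
# plane is back-projected to bird's-eye-view coordinates from the camera
# intrinsics, height, and (estimated) pitch. Illustrative only.
import numpy as np

def ipm_ground_point(u, v, fx, fy, cx, cy, cam_height, pitch):
    """Map pixel (u, v) to (X, Z) on the ground plane (camera at origin).

    fx, fy, cx, cy : pinhole intrinsics in pixels
    cam_height     : camera height above the ground plane in metres
    pitch          : camera pitch in radians (positive = tilted down)
    """
    # Ray direction in the camera frame (X right, Y down, Z forward)
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate the ray into a level frame to compensate the camera pitch
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    ray = R @ ray
    # Intersect with the ground plane Y = cam_height (Y axis points down)
    if ray[1] <= 0:
        raise ValueError("pixel is above the horizon; no ground intersection")
    t = cam_height / ray[1]
    X, _, Z = t * ray
    return X, Z  # lateral offset and forward distance in metres

# Example: a pixel below the principal point maps to a point a few metres ahead.
print(ipm_ground_point(u=640, v=500, fx=700, fy=700, cx=640, cy=360,
                       cam_height=1.0, pitch=np.deg2rad(2.0)))
```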