Implicit neural representation, a continuous function-based representation, has found widespread application in research fields such as computer graphics and computer vision. In recent years, many researchers have applied implicit neural representation to data compression. However, current compression methods based on implicit neural representation face several challenges; a critical one is their inability to adaptively allocate network parameters according to the complexity of features in the data. This limitation leads to dispersed network attention, reduced reconstruction quality, and increased training cost. To address this challenge, this paper draws inspiration from adaptive grid partitioning and proposes an adaptive volumetric compression method based on implicit neural representation. An octree is employed to perform a non-uniform spatial decomposition of the data, and a separate network model is trained for compression on each leaf node, allowing the networks to concentrate on complex data regions during training. This block-wise training strategy shortens training time and lowers training cost while achieving the same compression ratio. Finally, the effectiveness of the proposed method is demonstrated through several qualitative and quantitative experiments.
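To make the block-wise idea concrete, the sketch below (not the authors' implementation) illustrates the two ingredients described above: an octree-style recursive subdivision that keeps splitting only where the data remain complex, and a small coordinate network fitted independently to each leaf block. The variance criterion, depth limit, and tiny NumPy MLP are illustrative assumptions standing in for the paper's actual splitting metric and network architecture.

```python
# Minimal sketch (assumed, not the paper's code): octree-style non-uniform
# decomposition of a 3D volume driven by a simple variance criterion,
# followed by a per-leaf coordinate -> value MLP fit.
import numpy as np

def subdivide(volume, origin=(0, 0, 0), max_depth=4, var_thresh=1e-3, depth=0):
    """Recursively split a block into 8 children until it is 'simple enough'
    (low variance) or a size/depth limit is hit; return (origin, block) leaves."""
    if depth >= max_depth or volume.var() < var_thresh or min(volume.shape) <= 4:
        return [(origin, volume)]
    leaves = []
    sx, sy, sz = (s // 2 for s in volume.shape)
    for ix in range(2):
        for iy in range(2):
            for iz in range(2):
                child = volume[ix * sx:(ix + 1) * sx,
                               iy * sy:(iy + 1) * sy,
                               iz * sz:(iz + 1) * sz]
                child_origin = (origin[0] + ix * sx,
                                origin[1] + iy * sy,
                                origin[2] + iz * sz)
                leaves += subdivide(child, child_origin, max_depth,
                                    var_thresh, depth + 1)
    return leaves

def fit_leaf(block, steps=200, hidden=32, lr=1e-2, seed=0):
    """Fit a tiny two-layer MLP f(x, y, z) -> value to one leaf block with
    plain full-batch gradient descent; a stand-in for the per-leaf network."""
    rng = np.random.default_rng(seed)
    coords = np.stack(np.meshgrid(*[np.linspace(-1, 1, s) for s in block.shape],
                                  indexing="ij"), axis=-1).reshape(-1, 3)
    target = block.reshape(-1, 1).astype(np.float64)
    W1 = rng.normal(0, 0.5, (3, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(coords @ W1 + b1)       # hidden activations
        pred = h @ W2 + b2                  # predicted scalar values
        err = pred - target                 # residual
        # Backpropagate the mean-squared-error loss.
        gW2 = h.T @ err / len(err); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        gW1 = coords.T @ dh / len(err); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return float((err ** 2).mean())         # training MSE of this leaf

# Toy volume: smooth background with one small high-frequency region, so the
# octree stays coarse almost everywhere and refines only around the noise.
vol = np.zeros((64, 64, 64))
vol[40:56, 40:56, 40:56] = np.random.default_rng(1).random((16, 16, 16))
leaves = subdivide(vol)
print(len(leaves), "leaf blocks")
print("training MSE of first leaf:", fit_leaf(leaves[0][1]))
```

In such a scheme, deeper (smaller) leaves and their networks end up concentrated in complex regions, while smooth regions are covered by a few large, cheap blocks; each leaf model can also be trained independently, which is what makes the block-wise training inexpensive.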