Various topological techniques and tools have been applied to neural networks to analyze network complexity, improve explainability, and enhance performance. One fundamental assumption of this line of research is the existence of a global (Euclidean) coordinate system upon which the topological layer is constructed. Despite promising results, such topologization methods have yet to be widely adopted because parametrizing a topologization layer is time-consuming and lacks a theoretical foundation, leading to suboptimal performance and limited explainability. This paper proposes a learnable topological layer for neural networks that does not require a Euclidean space. Instead, the proposed construction relies on a general metric space, specifically a Hilbert space equipped with an inner product. As a result, the parametrization of the proposed topological layer is free of user-specified hyperparameters, which eliminates the costly parametrization stage and the associated risk of suboptimal networks. Experimental results on three popular data sets demonstrate the effectiveness of the proposed approach.
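
As a rough illustration of the idea only (not the paper's actual construction, which the abstract does not specify), the sketch below shows how a layer might derive a topological summary from pairwise distances induced by a learnable inner product, with no user-specified filtration hyperparameters. The class name `InnerProductTopoLayer`, the linear embedding, and the nearest-neighbor proxy for zero-dimensional persistence are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class InnerProductTopoLayer(nn.Module):
    """Illustrative sketch only: a topological summary computed from the
    metric induced by a learnable inner-product (Hilbert-space) embedding,
    with no user-specified filtration hyperparameters."""

    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        # Learnable map into a Hilbert space; its inner product induces
        # the metric used for the filtration.
        self.embed = nn.Linear(in_dim, embed_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_points, in_dim) -> embeddings z in the Hilbert space
        z = self.embed(x)
        # Squared distances recovered from inner products:
        # d(i, j)^2 = <z_i, z_i> + <z_j, z_j> - 2 <z_i, z_j>
        gram = z @ z.t()
        sq_norms = gram.diagonal()
        dist = (sq_norms.unsqueeze(0) + sq_norms.unsqueeze(1)
                - 2.0 * gram).clamp_min(0.0).sqrt()
        # Crude differentiable proxy for 0-dimensional persistence:
        # each point's nearest-neighbor distance approximates the death
        # time of its connected component in a Vietoris-Rips filtration.
        masked = dist + torch.eye(dist.size(0), device=dist.device) * dist.max()
        deaths = masked.min(dim=1).values
        # Sorting yields a fixed-size, permutation-invariant summary.
        return torch.sort(deaths).values


# Hypothetical usage: summarize 16 two-dimensional points.
layer = InnerProductTopoLayer(in_dim=2, embed_dim=8)
summary = layer(torch.randn(16, 2))  # shape: (16,)
```

Because the summary in this sketch is driven entirely by the learned inner product, no filtration scale or resolution needs to be tuned by hand, mirroring the hyperparameter-free parametrization the abstract claims.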