Although knowledge graphs store a large number of facts in the form of triples, they remain limited by incompleteness. Hence, Knowledge Graph Completion (KGC), defined as inferring missing entities or relations from observed facts, has long been a fundamental task for various knowledge-driven downstream applications. Prevailing KG embedding methods for KGC, such as TransE, rely solely on the structural information of existing facts and therefore generalize poorly, as they are inapplicable to unseen entities. Recently, a line of research has employed pre-trained encoders to learn textual representations of triples, i.e., textual-encoding methods. While these methods generalize well to unseen entities, they still underperform the aforementioned KG embedding methods. In this paper, we devise a novel textual-encoding learning framework for KGC. To enrich textual prior knowledge for more informative prediction, it features three hierarchical masking strategies that exploit long-range contexts of the input text to elicit such knowledge. In addition, to resolve the predictive ambiguity caused by improper relational modeling, a relation-aware structure learning scheme is applied over the textual embeddings. Extensive experiments on several popular datasets demonstrate the effectiveness of our approach, even compared with recent state-of-the-art methods on this task.