Encoding words is the first step in computing with words. Most word modeling approaches collect data intervals from users and transform them into interval type-2 fuzzy sets (IT2 FSs). These approaches usually discard more than half of the collected data intervals during word modeling and perform several transformations on the remaining data. Consequently, the constructed IT2 FS cannot be expected to properly reflect users' perceptions of the word. This paper presents a new approach, called the information-preserving approach (IPA), which employs most of the data, with minimal information loss or alteration, to construct a word's IT2 FS. By applying the principle of justifiable granularity, IPA strikes a suitable balance between data coverage and interpretability, so that the resulting IT2 FSs are justified and semantically sound. Experiments show that IPA outperforms existing approaches both in evaluations based on objective measures and in applications.