We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications, including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed-memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm, and OpenMP task parallelism is used on top of this to provide hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.

Program summary
Program Title: NTPoly
Program Files doi: http://dx.doi.org/10.17632/mp7wzj5z5t.1
Licensing provisions: MIT
Programming language: C, C++, Fortran, Python
Nature of problem: Calculation of the functions of large, symmetric, sparse matrices.
Solution method: Functions are expanded in a set of polynomials, after which the polynomial of the matrix is evaluated using sparse matrix multiplication and addition. A hybrid MPI+OpenMP implementation that exhibits strong-scaling performance enables calculations on large matrices.
Unusual features: For sufficiently sparse matrices with local characteristics, matrix functions can be computed in time that grows linearly with the number of matrix elements.
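
To illustrate the solution method described above, the following is a minimal serial sketch (using scipy.sparse, not NTPoly's API) of how a matrix function can be approximated by a truncated Chebyshev expansion built from sparse matrix products, with small entries thresholded to preserve sparsity. The function name, the `spectrum` bounds argument, and the fixed expansion order are illustrative assumptions; NTPoly itself supports several expansion strategies and a distributed-memory implementation.

```python
# Minimal sketch of a polynomial-expansion matrix function (not NTPoly's API).
# f(A) is approximated by a degree-`order` Chebyshev series in A, evaluated
# with sparse matrix-matrix products; entries below `threshold` are dropped
# after each product so the iterates stay sparse.
import numpy as np
from numpy.polynomial import chebyshev
from scipy import sparse

def chebyshev_matrix_function(A, f, order=16, threshold=1e-8, spectrum=(-1.0, 1.0)):
    """Approximate f(A) for sparse symmetric A (scipy CSR).

    spectrum : assumed bounds (lo, hi) on the eigenvalues of A (an assumption
               of this sketch; in practice the bounds must be estimated).
    """
    lo, hi = spectrum
    n = A.shape[0]
    I = sparse.identity(n, format="csr")

    # Map the spectrum of A into [-1, 1], where Chebyshev polynomials live.
    B = (2.0 * A - (hi + lo) * I) / (hi - lo)

    # Chebyshev coefficients of f, fitted on scalar Chebyshev nodes in [-1, 1].
    x = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
    c = chebyshev.chebfit(x, f((hi - lo) / 2.0 * x + (hi + lo) / 2.0), order)

    def drop_small(M):
        # Thresholding step: discard entries below `threshold` to keep sparsity.
        M = M.tocsr()
        M.data[np.abs(M.data) < threshold] = 0.0
        M.eliminate_zeros()
        return M

    # Three-term recurrence: T_0 = I, T_1 = B, T_{k+1} = 2 B T_k - T_{k-1}.
    T_prev, T_curr = I, B
    F = drop_small(c[0] * I + c[1] * B)
    for k in range(2, order + 1):
        T_next = drop_small(2.0 * (B @ T_curr) - T_prev)
        F = drop_small(F + c[k] * T_next)
        T_prev, T_curr = T_curr, T_next
    return F
```

Because every step is a sparse matrix product followed by thresholding, the cost per step is proportional to the number of retained matrix elements; for matrices with local (banded or short-ranged) structure this is what yields the linear scaling noted under "Unusual features".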