A classical approach to obtaining basic facts in the theory of square matrices is to analyze the relationship between polynomials $p$ in one variable and square matrices $T$ such that $p(T) = 0$. We consider matrices and operators which satisfy a different type of polynomial constraint. Let $H$ be a complex Hilbert space, $T$ a bounded linear transformation of $H$, $T^*$ the adjoint of $T$, and $\mathbb{C}[x, y]$ the algebra of polynomials in $x$ and $y$ with complex coefficients. For a polynomial $p \in \mathbb{C}[x, y]$, define
\[
p(T) := \sum_{m, n \ge 0} \hat{p}(m, n)\, T^{*n} T^{m},
\]
where $\hat{p}(m, n)$ is the coefficient of $y^{n} x^{m}$ in the expansion of $p$ in a power series about the point $(0, 0)$. We call $T$ a root of $p$ if and only if $p(T) = 0$. Note that if $p \in \mathbb{C}[x, y]$ is a polynomial in the single variable $x$, then this definition of $p(T)$ agrees with the classical one. In this paper we study the relationships that $p(T) = 0$ forces between $p$ and $T$ when $T$ is an algebraic operator (i.e., there exist $n \ge 1$ and complex numbers $a_0, \dots, a_{n-1}$ such that $0 = a_0 + a_1 T + \cdots + a_{n-1} T^{n-1} + T^{n}$). The classification starts with the following observation: if $p \in \mathbb{C}[x, y]$ and an algebraic operator $T \in L(H)$ satisfy $p(T) = 0$, then certain subspaces of $H$ which are invariant for $T$ must be orthogonal, or certain coefficients of $p$ must vanish. This leads to the notions of a graph attached to each $p \in \mathbb{C}[x, y]$ and a graph attached to each square matrix $T$. For diagonalizable $T$, we give a necessary and sufficient graph-theoretic condition for solving $p(T) = 0$. For nondiagonalizable $T$, this condition is necessary but not sufficient. The use of these graphs does, however, reduce the problem to that of solving $p(T) = 0$ for $T$ with exactly one or two eigenvalues. For $T$ with one eigenvalue, we give a necessary and sufficient condition for solving $p(T) = 0$. This leaves the case in which $T$ has exactly two eigenvalues, a problem that mixes polynomial algebra with matrix theory. We show that it is equivalent to the purely algebraic problem of determining whether equations of the form
\[
\sum_{(i, j) \in E} c_{i, j}\, x_{i+r,\, j+s} = 0
\]
have solutions of finite support with certain nonvanishing properties. We call these equations bi-Hankel equations subordinate to a given subset $E$ of the lattice of integer pairs $\{(i, j) : 0 \le i \le m - 1,\ 0 \le j \le n - 1\}$. It turns out that there is an algorithm (using Gröbner bases) for determining whether the type of solution we seek exists and for computing it.
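To illustrate the two-variable functional calculus defined above (these examples follow directly from the definition and are not part of the paper's classification): the polynomial $p(x, y) = xy - 1$ has $\hat{p}(1, 1) = 1$ and $\hat{p}(0, 0) = -1$, so
\[
p(T) = T^{*}T - I,
\]
and the roots of $xy - 1$ are exactly the isometries. Similarly, $p(x, y) = x - y$ gives $p(T) = T - T^{*}$, whose roots are the self-adjoint operators.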