The Karp-Miller-Rosenberg (1972) algorithm was one of the first efficient (almost linear) sequential algorithms for finding repeated patterns and for string matching. In the area of efficient sequential computations on strings it was soon superseded by more efficient (and more sophisticated) algorithms. We show that the Karp-Miller-Rosenberg algorithm (KMR) should be considered a basic technique in parallel computations: for many problems, variations of KMR give the most efficient parallel algorithms known. The representation of the set of basic factors (subarrays) of a string (array) produced by the algorithm is an extremely useful data structure in parallel algorithms on strings and arrays; it also provides a general unifying framework for a large variety of problems. We show that the following problems for strings and arrays can be solved by almost optimal parallel algorithms: pattern matching, longest repeated factor (subarray), longest common factor (subarray), and maximal symmetric factor (subarray). The following problems for strings can also be solved within the same complexity bounds: finding squares, testing even palstars and compositions of k palindromes for k = 2, 3, 4, computing the Lyndon factorization, and building minimal pattern-matching automata. In the model without concurrent writes the parallel time is O(log^2 n) (with n processors); in the model with concurrent writes the time is, for most of the problems, O(log n) (with n processors). For two problems in the one-dimensional case (longest repeated factor and longest common factor), parallel algorithms using suffix trees were designed earlier (Apostolico et al. 1988). However, our data structure is simpler and, moreover, suffix trees do not work in the two-dimensional case.
The complexity of our algorithms does not depend on the size of the alphabet, except for the computation of pattern-matching automata.
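To illustrate the data structure underlying the approach, the following is a minimal sequential sketch of KMR naming for strings: every basic factor (factor whose length is a power of two) receives an integer name such that two factors get the same name iff they are equal, and names at level 2k are built from pairs of names at level k by sorting, which is the doubling step the parallel algorithms exploit. The function name `kmr_names` and the dictionary layout are illustrative choices, not the paper's notation.

```python
def kmr_names(s):
    """Assign names to basic factors of s (Karp-Miller-Rosenberg naming).

    Returns a dict mapping each power-of-two length m to a list `name`,
    where name[i] is the name of the factor s[i:i+m]; equal factors of
    the same length receive equal names.
    """
    n = len(s)
    # Level 1: rank the individual symbols (alphabet-independent ranks).
    alphabet_rank = {c: r for r, c in enumerate(sorted(set(s)))}
    name = [alphabet_rank[c] for c in s]
    names = {1: name[:]}
    k = 1
    while 2 * k <= n:
        # A factor of length 2k is the pair of its two halves of length k;
        # rank the pairs of level-k names to get level-2k names.
        pairs = [(name[i], name[i + k]) for i in range(n - 2 * k + 1)]
        rank = {p: r for r, p in enumerate(sorted(set(pairs)))}
        name = [rank[p] for p in pairs]
        names[2 * k] = name[:]
        k *= 2
    return names
```

Each doubling step sorts O(n) pairs, so there are O(log n) levels; in the parallel setting each level is performed with n processors, giving the stated bounds. Queries such as longest repeated factor then reduce to looking for duplicate names at a given level, combined with a binary search over lengths.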