We present new results on direct methods of the calculus of variations for nonlocal optimal control problems involving functional-differential equations with argument deviation
\[
\min I(y,u), \qquad \dot y(t) = f\bigl(t, y(g(t)), u(h(t))\bigr), \quad y(0) = y_0, \quad t \in [0,1], \tag{1}
\]
where $y(\cdot) \in \mathbb{R}^n$ is the state vector, belonging to some space of states (usually a Sobolev space), $u(\cdot) \in \mathbb{R}^m$ is the control, belonging to some space of controls (usually a Lebesgue space), $f \colon [0,1] \times \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$, $y_0 \in \mathbb{R}^n$ is the given initial value, $I$ is a cost functional, and $g, h \colon [0,1] \to \mathbb{R}$ are given measurable functions. In the classical case without deviations (i.e. when $g(t) = h(t) = t$) this setting involves only local operators, such as differentiation and Nemytskii operators, whereas the presence of deviations introduces a nonlocality that drastically complicates the study of the problem.

We emphasize the relationship of such problems with purely variational problems for integral functionals with argument deviation
\[
I(v_1, \dots, v_k) := \int_\Omega \varphi\bigl(x, \{v_i(g_{ij}(x))\}_{i,j=1}^{k,l}\bigr)\,dx, \tag{2}
\]
where $\Omega \subset \mathbb{R}^n$, $\varphi \colon \Omega \times \mathbb{R}^{kl} \to \overline{\mathbb{R}}$ is an integrand, and $g_{ij} \colon \Omega \to \Omega$ are measurable functions satisfying the natural condition $|e| = 0 \Rightarrow |g_{ij}^{-1}(e)| = 0$ for every measurable $e \subset \Omega$ (here $|\cdot|$ stands for the Lebesgue measure), the argument $\{v_i\}_{i=1}^{k}$ ranging over some Lebesgue space $(L^p)^k$. Our approach, as opposed to that of J. Warga, R. Vinter, and J. Rosenblueth, is direct and does not use the introduction of special Young measures.
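To make the nonlocality concrete, here is a purely illustrative sketch: a scalar instance of problem (1) and of the functional (2) with hypothetical data of our own choosing (quadratic cost, deviations $g(t) = t/2$, $h(t) = t$, and $\varphi(x,\xi_1,\xi_2) = (\xi_1 - \xi_2)^2$); none of these specific choices appear in the results below.
% Illustrative (hypothetical) instances, not taken from the paper.

% A scalar instance of (1): n = m = 1, quadratic cost, deviation
% g(t) = t/2 in the state and no deviation in the control (h(t) = t);
% the state equation is of pantograph type.
\[
  \min_{u}\; \int_0^1 \bigl(y(t)^2 + u(t)^2\bigr)\,dt,
  \qquad
  \dot y(t) = y(t/2) + u(t), \quad y(0) = y_0, \quad t \in [0,1].
\]

% A scalar instance of (2): k = 1, l = 2, \Omega = (0,1),
% g_{11}(x) = x, g_{12}(x) = x/2, \varphi(x,\xi_1,\xi_2) = (\xi_1 - \xi_2)^2.
\[
  I(v) = \int_0^1 \bigl(v(x) - v(x/2)\bigr)^2\,dx,
  \qquad v \in L^2(0,1).
\]
% Preimages of null sets under both deviations are null (the preimage
% of e under x \mapsto x/2 is 2e \cap (0,1)), so the compositions
% v(g_{ij}(x)) are well defined on Lebesgue equivalence classes.
In both instances the value of the unknown function at the deviated point ($y(t/2)$, respectively $v(x/2)$) enters the right-hand side or the integrand, so the associated superposition operators are no longer local, which is exactly the difficulty described above.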