We study three stochastic differential games. In each game, two players control a process X = {X_t, 0 ≤ t < ∞} which takes values in the interval I = (0, 1), is absorbed at the endpoints of I, and satisfies a stochastic differential equation

  dX_t = μ(X_t, α(X_t), β(X_t)) dt + σ(X_t, α(X_t), β(X_t)) dW_t,  X_0 = x ∈ I.

The control functions α(·) and β(·) are chosen by players U and B, respectively. In the first of our games, which is zero-sum, player U has a continuous reward function u : [0, 1] → ℝ. In addition to α(·), player U chooses a stopping rule τ and seeks to maximize the expectation of u(X_τ), whereas player B chooses β(·) and aims to minimize this expectation. In the second game, players U and B each have continuous reward functions u(·) and υ(·), choose stopping rules τ and ρ, and seek to maximize the expectations of u(X_τ) and υ(X_ρ), respectively. In the third game the two players again have continuous reward functions u(·) and υ(·), now assumed to be unimodal, and choose stopping rules τ and ρ. This game terminates at the minimum τ ∧ ρ of the two stopping rules, and players U and B want to maximize the expectations of u(X_{τ∧ρ}) and υ(X_{τ∧ρ}), respectively. Under mild technical assumptions we show that the first game has a value, and we find a saddle point of optimal strategies for the players. The other two games are not zero-sum, in general, and for them we construct Nash equilibria.
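To make the setting concrete, here is a minimal Euler–Maruyama sketch of the controlled diffusion on I = (0, 1), absorbed at the endpoints. The coefficients μ, σ and the Markov controls α, β used below are purely illustrative placeholders, not taken from the paper; the drift is written as a "tug-of-war" between the two players' controls, and the volatility is chosen to vanish at the endpoints.

```python
import math
import random

def simulate(x0, mu, sigma, alpha, beta, dt=1e-3, t_max=10.0, seed=0):
    """Euler-Maruyama sketch of dX_t = mu(X,a,b) dt + sigma(X,a,b) dW_t
    on I = (0,1), stopped on absorption at an endpoint or at t_max.
    Returns (final state, elapsed time)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_max:
        if x <= 0.0 or x >= 1.0:          # absorption at the endpoints of I
            return max(0.0, min(1.0, x)), t
        a, b = alpha(x), beta(x)          # the players' current controls
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
        x += mu(x, a, b) * dt + sigma(x, a, b) * dw
        t += dt
    return x, t

# Illustrative (made-up) coefficients and constant controls:
x_end, t_end = simulate(
    x0=0.5,
    mu=lambda x, a, b: a - b,                 # drift: players pull in opposite directions
    sigma=lambda x, a, b: 0.3 * x * (1 - x),  # volatility vanishing at 0 and 1
    alpha=lambda x: 0.1,
    beta=lambda x: 0.1,
)
```

In a stopping game on top of this dynamics, each player would additionally monitor the simulated path and stop at a rule such as the first exit from a subinterval, evaluating their reward function at the stopped state.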