Algorithmic decision-making has permeated health and care domains (e.g., automated diagnoses, fall detection, caregiver staffing). Researchers have raised concerns about how these algorithms are built and how they shape fair and ethical care practices. To investigate algorithm development and understand its impact on the people who provide and coordinate care, we conducted a case study of a U.S.-based senior care network and platform. We interviewed 14 technologists, 9 paid caregivers, and 7 care coordinators to explore their interactions with the platform's algorithms. We find that technologists draw on a multitude of moral frameworks to navigate complex and contradictory demands and expectations. Despite technologists' espoused commitments to fairness, accountability, and transparency, the platform reassembles problematic aspects of care labor. By analyzing how technologists justify their work, the problems they claim to solve, the solutions they present, and the experiences of caregivers and coordinators, we advance fairness research that focuses on agency and power asymmetries in algorithmic platforms. We (1) make an empirical contribution by revealing tensions that arise in developing and implementing algorithms, and (2) provide insight into the social processes that reproduce power asymmetries in algorithmic decision-making.