Prosodic prominence (realized through phonetic features such as increased intensity, duration, and pitch) is thought to guide listeners' attention by focusing new information. This study investigates the production and perception of prosodic prominence toward two types of addressees: a human and a voice assistant interlocutor. We examine how the language system adapts to this increasingly common technology by testing whether prosodic prominence is subject to audience design when addressing an interlocutor that is consistently rated as having less communicative ability. Stimuli consisted of question-answer pairs in which California English speakers read identical sentences (e.g., "Jude saw the sun") in response to interlocutors' questions probing different foci (e.g., "Who saw the sun?"). Experiment 1 reveals consistent acoustic adjustments to mark focus on either the subject or the object of a sentence. In Experiment 2, we find that listeners reliably infer the intended information structure from these acoustic adjustments. Across both experiments, we see no consistent difference in focus marking by type of interlocutor (human vs. voice assistant). Nonetheless, listeners associate certain features, such as a slower speech rate, with speech directed at voice assistants. Taken together, our findings suggest that while speakers apply communicative strategies from human-human interaction when addressing voice assistants, listeners expect a device-specific register.