Aligning hand-held objects to mid-air positions and orientations is important for many applications. Task performance depends not only on speed and accuracy but also on minimizing the user's physical exertion. Augmented reality head-mounted displays (AR HMDs) can guide users during mid-air alignment by tracking an object's pose and delivering visual instructions directly into the user's field of view (FoV). However, it is unclear which AR HMD interfaces are most effective for mid-air alignment guidance, and how the form factor of current AR HMD hardware (such as its weight and narrow FoV) influences the tiring body poses users adopt during alignment. We defined a set of design requirements for mid-air alignment interfaces that aim to reduce high-exertion body poses during alignment. We then designed, implemented, and tested several interfaces in a user study in which novice participants performed a sequence of mid-air alignments with each interface. Results show that interfaces relying on visual guidance located near the hand-held object reduce acquisition times and translation errors, whereas interfaces that involve aiming at a distant virtual object reduce rotation errors. Users tend to avoid focus shifts and to position the head and arms so that as much of the AR visualization as possible fits within a single FoV without head movement. We found that changing the size of visual elements affected how far users extended their arms, which in turn affects the torque exerted on the arm. We also found that dynamically adjusting where visual guidance is placed relative to the mid-air pose can help keep the head level during alignment, which is important for distributing the weight of the AR HMD.