Human-Machine Interfaces (HMIs) enable control of machines through physiological signals such as electromyography (EMG), which captures muscle activity. Accurately classifying hand movements from EMG signals remains challenging, owing to inefficient feature extraction and poor generalization across diverse users, particularly amputees. To address these challenges, a novel framework, Hand Movement Classification using Deep Hybrid Model (HMC-DHM), has been developed to classify 13 distinct hand movements from a six-channel EMG system. The dataset comprises 30 participants, split equally between trans-radial amputees and non-amputees. The classified movements include shoulder abduction/adduction, shoulder flexion/extension, elbow flexion/extension, hand open/close, and wrist supination/pronation at various angles. HMC-DHM employs a dual-stage feature extraction strategy, combining time- and frequency-domain statistics with high-level temporal and spatial features extracted by a Hand Movement Convolutional Neural Network (HMNet). The fused features are classified by a Tuned Random Forest Model (TRFM), optimized via Grid Search Cross-Validation. The model achieves accuracies of 91.35% on amputees, 94.23% on non-amputees, and 96.58% on the combined dataset, outperforming existing methods in both accuracy and cross-user generalization. This advance enables more precise control of prosthetic arms, marking significant progress toward practical HMI applications.
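The first stage of the dual-stage strategy can be illustrated with a minimal sketch of per-channel time- and frequency-domain statistics for a six-channel EMG window. The specific statistics (MAV, RMS, waveform length, zero crossings, mean frequency), the function name, and the sampling rate are assumptions for illustration, not the paper's exact feature set:

```python
import numpy as np

def emg_features(window, fs=1000.0):
    """Compute illustrative time/frequency-domain statistics per channel.

    window: array of shape (n_channels, n_samples); fs: sampling rate in Hz.
    Returns a flat feature vector with 5 statistics per channel.
    """
    feats = []
    for ch in window:
        mav = np.mean(np.abs(ch))                             # mean absolute value
        rms = np.sqrt(np.mean(ch ** 2))                       # root mean square
        wl = np.sum(np.abs(np.diff(ch)))                      # waveform length
        zc = np.sum(np.signbit(ch[1:]) != np.signbit(ch[:-1]))  # zero crossings
        # Frequency domain: mean frequency of the power spectrum.
        spectrum = np.abs(np.fft.rfft(ch)) ** 2
        freqs = np.fft.rfftfreq(ch.size, d=1.0 / fs)
        mnf = np.sum(freqs * spectrum) / np.sum(spectrum)
        feats.extend([mav, rms, wl, zc, mnf])
    return np.asarray(feats)

# Synthetic 6-channel window of 256 samples, standing in for real EMG data.
rng = np.random.default_rng(0)
window = rng.standard_normal((6, 256))
fv = emg_features(window)
print(fv.shape)  # (30,) -> 5 statistics x 6 channels
```

In the full framework, a vector like this would be concatenated with HMNet's learned temporal and spatial features before classification.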
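The final classification stage, a random forest tuned by Grid Search Cross-Validation, can be sketched with scikit-learn. The parameter grid, data shapes, and use of `GridSearchCV` with `RandomForestClassifier` are assumptions standing in for the paper's TRFM configuration; the fused feature vectors are replaced here by random placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.standard_normal((130, 30))   # placeholder fused feature vectors
y = np.tile(np.arange(13), 10)       # 13 movement classes, 10 samples each

# Illustrative grid; the paper's actual search space is not specified here.
param_grid = {"n_estimators": [100, 200], "max_depth": [None, 10]}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,                            # 3-fold cross-validation
)
search.fit(X, y)
print(search.best_params_)           # hyperparameters of the tuned forest
```

Grid search exhaustively evaluates each hyperparameter combination by cross-validation and refits the best model, which is the standard way to obtain a tuned random forest of this kind.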