We propose a method for visual gesture control of a quadrotor mUAV platform using a single ground-based camera, making its flight more responsive and adaptive to its human controller than control via keypads or joysticks. The proposed camera-based gesture control scheme achieves an average gesture-detection accuracy of 100%, compared with 97.5% for expensive Kinect-based hardware and 83.3% for processing-intensive CNN-based pose estimation techniques. A fog-based stabilization mechanism is additionally employed, which enables nighttime stabilization of the mUAV even in the presence of unbalanced payloads or imbalance caused by minor structural damage. This allows the same mUAV to be used without frequent weight readjustment or recalibration. The approach has been tested in real time, both indoors and outdoors.