Formant estimation from vowel segments is very useful for linguistic research. Traditionally, formants are estimated with classical signal processing methods and statistical models: the formant frequency is tracked continuously along each vowel segment, and its average is taken as the formant of the vowel. More recent approaches train neural networks on an annotated database to predict vowel formants, taking acoustic features as input and the mean formant frequency as output. Recently, the Bilinear Network (BL) and the Temporal Attention-Augmented Bilinear Network (TABL) have proven very effective on financial time-series analysis tasks compared with recurrent and convolutional networks. In this work, we explore how to extend the structure of BL, drawing on TABL, to obtain better short-term modeling capability for vowel formant estimation. More specifically, we propose to replace the attention mechanism with a sigmoid gate and to use a learnable parameter to dynamically integrate the output of the first linear transformation, thereby learning a better representation than BL. Experiments on the vowel test set of the public VTR corpus show that our approach significantly surpasses DNN, CNN, and BL, and achieves slightly better performance than the powerful TABL model in terms of mean absolute error and mean absolute percentage error on F1, F2, F3, and overall.
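The described modification of the bilinear layer can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the exact gate placement, weight shapes, and activation are assumptions. It shows a bilinear forward pass where a sigmoid gate stands in for softmax attention and a learnable scalar `lam` mixes the gated output with the first linear transformation's output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_bilinear_forward(X, W1, Wg, W2, bias, lam):
    """Forward pass of a bilinear layer with a sigmoid gate (sketch).

    X    : (D, T)   input acoustic features over T time steps
    W1   : (D', D)  first linear transformation, over the feature axis
    Wg   : (T, T)   gating weights, over the time axis (assumed shape)
    W2   : (T, T')  second linear transformation, over the time axis
    bias : (D', T') bias term
    lam  : scalar in [0, 1], learnable mixing parameter
    """
    Xb = W1 @ X                    # first linear transformation output
    G = sigmoid(Xb @ Wg)           # sigmoid gate replacing softmax attention
    Xt = lam * (Xb * G) + (1.0 - lam) * Xb  # dynamic integration of Xb
    return Xt @ W2 + bias
```

With `lam = 0` the layer reduces to a plain bilinear map `W1 X W2 + bias`; as `lam` grows, the gate suppresses or passes individual time-feature entries, which is the short-term modeling effect the text attributes to the gated variant.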