GMI magnetometers are magnetic transducers based on Giant Magnetoimpedance (GMI) sensor elements, ferromagnetic samples whose impedance varies strongly as a function of the external magnetic field. The sensitivity of a magnetic transducer depends directly on the sensitivity of its sensor elements. Consequently, the overall sensitivity of the transducer can be optimized by maximizing the sensitivity of its sensor elements, which is essential for the measurement of ultra-weak magnetic fields. In the case of GMI samples, the sensitivity is affected by several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current, and bias magnetic field. This dependence has not yet been well modeled quantitatively as a function of all the parameters that affect it, so the search for the optimum operating point is usually empirical. Besides maximizing the sensitivity, it is also imperative to ensure that this sensitivity remains nearly constant over the desired sensor span; in other words, rather than purely maximizing the sensitivity, linearity must be taken into account as well. To that end, this work aims at developing a neuro-genetic model capable of fitting the sensitivity of GMI samples and determining the set of parameters that leads to the optimal sensitivity over a predefined sensor span. The proposed computational model is based on an MLP neural network, which models the sensitivity of the GMI samples, and a Genetic Algorithm, which determines the combination of parameters that maximizes the sensitivity subject to the imposed restrictions.
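
As a rough illustration of this two-stage architecture, the sketch below trains an MLP surrogate of the sensitivity and then runs a simple genetic algorithm over the same parameter space to find the combination that maximizes the predicted sensitivity. It is a minimal sketch, not the authors' implementation: the parameter ranges, the synthetic sensitivity values, and the GA settings are all hypothetical placeholders standing in for measured GMI data and the actual constrained optimization.

```python
# Minimal sketch (hypothetical data): MLP surrogate of GMI sensitivity plus a
# simple genetic algorithm that searches the excitation-parameter space.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical inputs: [sample length, DC level of excitation current,
# excitation frequency]. Real data would come from GMI measurements.
LOW = np.array([1.0, 0.0, 0.1])
HIGH = np.array([15.0, 100.0, 10.0])
X = rng.uniform(LOW, HIGH, size=(500, 3))
# Placeholder "sensitivity" values standing in for measured data.
y = np.sin(X[:, 0] / 5) * np.exp(-((X[:, 1] - 60) / 40) ** 2) * np.log1p(X[:, 2])

# Stage 1: MLP model of the sensitivity as a function of the parameters.
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
mlp.fit(X, y)

# Stage 2: simple genetic algorithm maximizing the MLP-predicted sensitivity.
POP, GENS, MUT = 60, 80, 0.1

def fitness(pop):
    # Predicted sensitivity; constraints such as linearity over the sensor
    # span would enter here as penalty terms.
    return mlp.predict(pop)

pop = rng.uniform(LOW, HIGH, size=(POP, 3))
for _ in range(GENS):
    fit = fitness(pop)
    # Tournament selection: keep the better of two randomly drawn individuals.
    a, b = rng.integers(0, POP, (2, POP))
    parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
    # Uniform crossover between pairs of parents.
    mask = rng.random((POP, 3)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation on a fraction of individuals, clipped to the bounds.
    mutate = rng.random((POP, 1)) < 0.3
    children += MUT * (HIGH - LOW) * rng.standard_normal((POP, 3)) * mutate
    pop = np.clip(children, LOW, HIGH)

best = pop[np.argmax(fitness(pop))]
print("best parameters:", best, "predicted sensitivity:", mlp.predict([best])[0])
```

In this sketch the GA only queries the trained surrogate, never the physical samples, which is what makes the search for the optimum point computationally cheap once the MLP has been fitted.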