Offering low cost and rich information, visual sensors have become a leading choice for autonomous systems. In navigation and SLAM applications in particular, feature extraction and matching are fundamental operations. This paper addresses the computational efficiency, resource usage, and power consumption of image feature detection and matching by designing a hardware-software co-design architecture implemented on a field-programmable gate array (FPGA) and a Nios II CPU. Given image data from the Nios II, features are extracted with the scale-invariant feature transform (SIFT) algorithm and matched with a linear exhaustive search (LES) method on an Altera DE2i-150 FPGA. The matched features are then transferred back from the FPGA to the Nios II. To demonstrate the effectiveness of the proposed approach, two images related by an affine transformation are evaluated, and an object tracking system is also developed. Experimental results show that, by exploiting the parallel computing capability of the FPGA, the proposed approach greatly reduces overall computation time and hardware resource usage compared with a full-software implementation and other existing methods.
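To make the matching step concrete, the following is a minimal software sketch of a linear exhaustive search over SIFT descriptors, of the kind the FPGA pipeline parallelizes. It assumes standard 128-dimensional descriptors, a nearest-neighbour criterion with a squared-distance threshold, and illustrative names (les_match, sq_dist, MAX_DIST); it is not the paper's hardware implementation.

```c
#include <stdio.h>
#include <limits.h>

#define DESC_DIM 128        /* standard SIFT descriptor length              */
#define MAX_DIST 100000u    /* assumed acceptance threshold (squared dist.) */

/* Squared Euclidean distance between two descriptors. */
static unsigned int sq_dist(const unsigned char *a, const unsigned char *b)
{
    unsigned int d = 0;
    for (int i = 0; i < DESC_DIM; i++) {
        int diff = (int)a[i] - (int)b[i];
        d += (unsigned int)(diff * diff);
    }
    return d;
}

/* For each descriptor in set A, scan every descriptor in set B and keep
 * the closest one; write its index (or -1 if above threshold) to match[]. */
void les_match(const unsigned char A[][DESC_DIM], int na,
               const unsigned char B[][DESC_DIM], int nb,
               int match[])
{
    for (int i = 0; i < na; i++) {
        unsigned int best = UINT_MAX;
        int best_j = -1;
        for (int j = 0; j < nb; j++) {
            unsigned int d = sq_dist(A[i], B[j]);
            if (d < best) { best = d; best_j = j; }
        }
        match[i] = (best <= MAX_DIST) ? best_j : -1;
    }
}

int main(void)
{
    /* Tiny synthetic example: two descriptors per image. */
    unsigned char A[2][DESC_DIM] = {{0}};
    unsigned char B[2][DESC_DIM] = {{0}};
    A[0][0] = 10; B[1][0] = 12;   /* A[0] is closest to B[1] */
    int match[2];

    les_match(A, 2, B, 2, match);
    for (int i = 0; i < 2; i++)
        printf("A[%d] -> B[%d]\n", i, match[i]);
    return 0;
}
```

The inner distance loop is independent across descriptor pairs, which is what allows the FPGA to evaluate many candidate comparisons in parallel rather than sequentially as in this software reference.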