Many deep learning tasks, such as image classification, natural language processing, video analysis, and speech recognition, have been accomplished using Deep Neural Networks (DNNs). However, the success of high-performance DNNs comes with increased computational and memory requirements. Field-Programmable Gate Array (FPGA) devices are well suited to deploying DNNs thanks to their flexibility, power efficiency, and computing performance. However, DNNs are generally developed in a high-level language such as Python and then manually transformed to a Hardware Description Language (HDL) and synthesized using a commercial tool. This method is time-consuming and requires HDL skills, which limits the adoption of FPGAs. This paper proposes "DNN2FPGA," a generic design flow that takes a DNN model in its graph representation as input and automatically generates the corresponding FPGA hardware implementation, overcoming this implementation bottleneck. The article reviews related work, presents the proposed design flow and hardware implementation, and compares our solution with other recent similar tools. We validate the proposed solution with two case studies: a Multi-Layer Perceptron (MLP) solving the classical XOR problem and a DNN for MNIST dataset classification. Finally, we present conclusions and future work.