The volume of data generated in the modern world is so large that data lakes are now used to store it; nevertheless, relational databases remain the primary repository for the world's data. Typing every query by hand is time-consuming for users, especially queries that involve complex keywords. Our proposed approach exploits the interaction history by editing the previously predicted query to improve generation quality, based on the observation that successive natural language queries are frequently linguistically dependent and their corresponding SQL queries overlap. This paper focuses on text-to-SQL conversion for cross-domain datasets. Our approach reuses results produced at the token level and treats SQL statements as sequences. Finally, we evaluate our approach on the SParC, Spider, and CoSQL datasets and compare it with well-known existing baselines, namely a Seq2Seq model, a Seq2Seq model augmented with attention and copying, SQLNet, and TypeSQL, in terms of accuracy and F1 score.
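To make the token-level reuse concrete, the following is a minimal, purely illustrative Python sketch (not the actual implementation; names such as edit_previous_query and edit_ops are hypothetical) of how per-token copy/generate decisions from a decoder can be applied to the previous turn's predicted SQL, so that spans shared between consecutive turns are reused rather than regenerated.

    # Illustrative sketch: token-level reuse of the previously predicted SQL.
    # At each decoding step the model either copies a token from the previous
    # query or emits a newly generated token, so overlapping spans between
    # consecutive turns are reused.
    from typing import List, Tuple

    def edit_previous_query(prev_sql: List[str],
                            edit_ops: List[Tuple[str, str]]) -> List[str]:
        """Apply per-token edit decisions to the previous turn's SQL.

        edit_ops is a sequence of (action, token) pairs from a decoder:
          ("copy", tok) -- reuse tok from prev_sql unchanged
          ("gen",  tok) -- emit a newly generated token
        """
        new_sql: List[str] = []
        prev_iter = iter(prev_sql)
        for action, token in edit_ops:
            if action == "copy":
                # Advance through the previous query and keep its token.
                new_sql.append(next(prev_iter, token))
            else:
                # A newly generated token replaces or extends the query.
                new_sql.append(token)
        return new_sql

    # Example: the follow-up user turn changes only the filter condition,
    # so most tokens of the previous SQL are copied and only the WHERE
    # clause is regenerated.
    prev = "SELECT name FROM students WHERE age > 18".split()
    ops = ([("copy", t) for t in prev[:5]]
           + [("gen", "grade"), ("gen", "="), ("gen", "'A'")])
    print(" ".join(edit_previous_query(prev, ops)))
    # -> SELECT name FROM students WHERE grade = 'A'

Treating SQL statements as plain token sequences in this way is what allows the overlap between consecutive queries in an interaction to be exploited directly at decoding time.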