Fog computing enables new services and applications on the Internet of Things (IoT) by providing computational services at the network edge. It is emerging as a transformative paradigm that links edge devices with centralized cloud resources, improving network efficiency, lowering latency, and increasing available computing power. Resource allocation and optimization are critical for fog computing to achieve optimal system performance, efficient resource usage, and a smooth user experience. Throughout this paper, we discuss the architecture and framework of fog computing, its comparison with cloud computing, resource allocation strategies, and the relevance of resource allocation to fog computing. Various optimization and allocation methods are examined, along with their application to fog-enhanced vehicular services and vehicular fog computing. To allocate resources, minimize latency, and optimize quality of service (QoS), a variety of techniques have been applied, including game theory, convex optimization, reinforcement learning, and genetic algorithms. We also discuss how resource allocation in fog computing environments can be modeled using game theory. The purpose of this paper is to review several articles in the field of fog environments and to provide a detailed comparison of them from a variety of perspectives; an overview of the main features of the reviewed articles is also presented in tabular form. This study highlights the effectiveness of these strategies in improving system performance, reducing latency, optimizing resource usage, and reducing energy consumption. Lastly, we highlight future research directions and potential contributions in fog computing, including managing heterogeneity, real-time optimization, QoS and security concerns, energy-efficient and sustainable computing, mobility management, scheduling and self-adaptive scheduling, load balancing, offloading, reliability, sensor lifetime, multi-agent reinforcement learning, optimal resource allocation, and quality of experience. This survey aims to give readers a detailed understanding of state-of-the-art methods, challenges, and possible future directions in resource allocation and optimization in fog computing, and to synthesize insights from the literature for researchers, practitioners, and stakeholders interested in advancing the field.