This paper introduces a strategy to accelerate neighbor searching in agent-based simulations on GPU platforms. Because of their autonomous nature, agents can be processed concurrently by GPU threads, accelerating the overall simulation. In every time step, each agent carries out a sense-think-act cycle, and neighbor searching is a crucial part of the sensing stage. Detecting and accessing neighbors is a memory-intensive task and often dominates the running time of an agent-based simulation. Our contribution, an enhanced neighbor-sharing strategy, greatly speeds up this procedure compared with CPU implementations. The strategy starts from a global-memory-only implementation and is then progressively improved by efficiently exploiting the much faster shared memory. In our case studies, speedups of 89.08x and 11.51x are obtained on an NVIDIA Tesla K20 GPU over the sequential and OpenMP parallel implementations, respectively, running on an Intel Xeon E5-2670 CPU.