Figure 5
From: Fast, parallel implementation of particle filtering on the GPU architecture
Memory allocations. Splitting a global memory data array into shared memory while preserving local connectivity, where ‘s’ stands for the number of threads in each block and ‘r’ stands for the size of the neighbourhood. To fit the architectural details of the GPU and to reduce computation time, we applied a 1D topology instead of the proposed 2D grid with a one-sided neighbourhood.