A parallelization strategy for hybrid particle-field molecular dynamics (hPF-MD) simulations on multi-node, multi-GPU architectures is proposed. Two design principles have been followed to achieve a massively parallel version of the OCCAM code for distributed GPU computing: performing all computations exclusively on GPUs, and minimizing data exchange both between CPU and GPU and among GPUs. The hPF-MD scheme is particularly well suited to a GPU-resident, low-data-exchange implementation. Comparisons between the performance of the previous multi-CPU code and that of the proposed multi-node, multi-GPU version are reported. Several non-trivial issues that arise for systems of considerable size, including the handling of large input files and memory occupation, have been addressed. Large-scale benchmarks of hPF-MD simulations for systems of up to 10 billion particles are presented. The performance obtained with a moderate amount of computational resources highlights the feasibility of hPF-MD simulations in systematic studies of large-scale, multibillion-particle systems. This opens the possibility of performing systematic/routine studies and revealing new molecular insights for problems on scales previously inaccessible to molecular simulations.