In the existing workflow for 360-degree video coding, the original 360-degree video content needs to be converted onto a 2D plane using a projection format before being encoded by a video codec. Given the selected projection format, the samples on the projected 2D plane may correspond to different sampling densities on the sphere. If the projected video is coded using a fixed quantization parameter (QP), this is equivalent to applying different levels of quantization on the sphere because the sampling densities vary within the projected video, which can result in non-uniform reconstruction quality across spherical regions. In this paper, an adaptive quantization method is proposed to improve 360-degree video coding efficiency. The proposed method adaptively adjusts the QP of each region on the 2D projected plane to modulate its reconstruction quality according to the spherical sampling density of that region. Additionally, to further improve performance, an encoder-side method is proposed to derive the optimal Lagrangian multiplier from the adjusted QP value for a better rate-distortion (RD) tradeoff during rate-distortion optimization (RDO). The proposed method is implemented on top of the JVET 360-degree video coding software JEM-6.0-360Lib. Experimental results demonstrate significant coding gains: in terms of the end-to-end weighted-to-spherically-uniform PSNR (WS-PSNR) metric, the proposed method provides on average 5.0% BD-rate saving for the equirectangular projection and 2.6% BD-rate saving for the cubemap projection, compared to the fixed QP coding scheme.
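To make the idea concrete, the sketch below illustrates one plausible realization of the density-driven QP offset and the matching Lagrangian multiplier scaling for the equirectangular projection. The latitude weight follows the standard WS-PSNR definition for ERP; the specific offset rule `delta_QP = round(-3 * log2(w))` and the helper names are illustrative assumptions, not the paper's exact formulation. The lambda scaling uses the well-known HEVC-style proportionality of lambda to 2^(QP/3).

```python
import math

def erp_ws_weight(row, height):
    # WS-PSNR latitude weight for equirectangular projection:
    # w(j) = cos((j + 0.5 - H/2) * pi / H); ~1 at the equator, ~0 at the poles.
    return math.cos((row + 0.5 - height / 2) * math.pi / height)

def adaptive_delta_qp(weight):
    # Illustrative mapping (assumption, not the paper's exact rule):
    # a region oversampled on the sphere (small weight) tolerates coarser
    # quantization. Since a QP step of 6 doubles the quantizer step size,
    # delta_QP = -3 * log2(w) roughly equalizes quantization on the sphere.
    return round(-3 * math.log2(weight))

def adjusted_lambda(base_lambda, delta_qp):
    # Scale the Lagrangian multiplier consistently with the QP offset:
    # lambda is proportional to 2^(QP/3) in HEVC-style RDO.
    return base_lambda * (2.0 ** (delta_qp / 3.0))

if __name__ == "__main__":
    height = 1080
    for row in (0, height // 4, height // 2):
        w = erp_ws_weight(row, height)
        print(f"row {row}: weight {w:.4f}, delta_QP {adaptive_delta_qp(w)}")
```

Under this rule the equator region (weight near 1) keeps its base QP, while polar regions, which are heavily oversampled in ERP, receive a positive QP offset and a correspondingly larger lambda during RDO.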