Accurate segmentation of spectral-domain Optical Coherence Tomography (SD-OCT) images helps diagnose retinal pathologies and facilitates the study of their progression and remission. Manual segmentation depends on clinical expertise and is highly time-consuming. Furthermore, poor image contrast due to the high reflectivity of some retinal layers, together with heavy speckle noise, poses severe challenges to automated segmentation algorithms. The first step toward retinal OCT segmentation, therefore, is to create a noise-free image with edge details preserved, as achieved here by wavelet-domain image reconstruction preceded by bilateral filtering. In this context, the current study compares the effects of denoising with a simple Gaussian filter against wavelet-based denoising, to help investigators decide whether an advanced denoising technique is necessary for accurate graph-based intraretinal layer segmentation. A comparative statistical analysis between the mean thicknesses of the six layers segmented by the algorithm and those reported in a previous study shows non-significant differences for five of the six layers (p > 0.05) and a significant difference for one layer (p = 0.04) when images are denoised with the Gaussian filter. Non-significant layer thickness differences between the two algorithms are seen for all six retinal layers (p > 0.05) when bilateral filtering followed by wavelet-based denoising is applied before boundary delineation. However, this minor improvement in accuracy comes at the expense of a substantial increase in computation time (∼10 s on the test hardware) and logical complexity. It is therefore debatable whether one should opt for advanced denoising techniques over a simple Gaussian filter when implementing graph-based OCT segmentation algorithms.
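To make the comparison concrete, the following is a minimal NumPy sketch of the two denoising options discussed above: a separable Gaussian blur and a wavelet-domain soft-thresholding step. All function names and parameters are illustrative, the wavelet step uses a single-level Haar transform as a stand-in for the study's wavelet pipeline, and the bilateral pre-filtering stage is omitted for brevity.

```python
import numpy as np

def gaussian_denoise(img, sigma=1.0):
    """Separable Gaussian blur (simple denoising baseline).

    Illustrative only; a production pipeline would use an
    optimized library routine instead.
    """
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Reflect-pad, then convolve rows and columns with the 1-D kernel.
    pad = np.pad(img, radius, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)
    return out

def haar_denoise(img, thresh=0.1):
    """One-level 2-D Haar wavelet denoising via soft-thresholding.

    Stands in for the paper's wavelet-domain reconstruction;
    assumes even image dimensions.
    """
    # Forward Haar transform: split into approximation (LL)
    # and detail (LH, HL, HH) subbands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2
    LH = (a + b - c - d) / 2
    HL = (a - b + c - d) / 2
    HH = (a - b - c + d) / 2
    # Soft-threshold the detail bands only; edges live in large
    # coefficients, which survive, while small noise coefficients vanish.
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    # Inverse Haar transform back to the image domain.
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL + LH - HL - HH) / 2
    out[1::2, 0::2] = (LL - LH + HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out
```

The sketch illustrates why the two approaches trade off differently: the Gaussian filter smooths noise and edges alike, whereas wavelet thresholding suppresses small (noise-dominated) coefficients while leaving large (edge-dominated) ones intact, at the cost of the extra transform passes noted in the abstract.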