We address the problem of video object segmentation by investigating how to expand the role of convolution in convolutional neural networks. Building on One-Shot Video Object Segmentation (OSVOS), which successfully tackles semi-supervised video object segmentation, we introduce a U-shaped architecture. We first build a Global Guidance Module (GGM) on the bottom-up path to provide location information about potentially significant objects to layers at different feature levels. We then design a Multi-scale Convolution Module (MCM) to fully exploit feature information and a Feature Fusion Module (FFM) to fuse coarse-level semantic information with fine-level features from the top-down pathway. The GGM and FFM allow the high-level semantic features to be progressively refined, yielding detail-enriched segmentation maps. Experimental results on the DAVIS 2016 dataset show that our approach locates segmentation targets more accurately with sharper details, and that our model improves on OSVOS across all metrics.
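To make the decoder-side fusion concrete, the minimal PyTorch-style sketch below shows one way an FFM-style block could combine coarse top-down semantics with fine bottom-up features. The module name, channel alignment via 1x1 convolutions, bilinear upsampling, and element-wise addition are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionModule(nn.Module):
    """Hypothetical sketch of an FFM-style block: upsamples coarse
    top-down semantics and fuses them with fine bottom-up features."""
    def __init__(self, coarse_channels, fine_channels, out_channels):
        super().__init__()
        # 1x1 convs to align channel dimensions (assumed design choice)
        self.reduce_coarse = nn.Conv2d(coarse_channels, out_channels, kernel_size=1)
        self.reduce_fine = nn.Conv2d(fine_channels, out_channels, kernel_size=1)
        # 3x3 conv to smooth the fused features
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, coarse, fine):
        # Upsample coarse semantics to the fine-feature resolution
        coarse = F.interpolate(self.reduce_coarse(coarse),
                               size=fine.shape[-2:], mode='bilinear',
                               align_corners=False)
        # Element-wise addition as the fusion operation (assumption)
        return self.fuse(coarse + self.reduce_fine(fine))

# Example usage: fuse a coarse 1/16-resolution map with a fine 1/4-resolution map
ffm = FeatureFusionModule(coarse_channels=512, fine_channels=128, out_channels=128)
out = ffm(torch.randn(1, 512, 14, 14), torch.randn(1, 128, 56, 56))  # -> (1, 128, 56, 56)
```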