In this paper, a method is proposed for estimating the orientation of industrial parts. Classical 2D images of the part are used to train a deep neural network that infers the part pose through its quaternion representation. Another innovative aspect of this work is the use of synthetic data, generated on the fly during network training from a textured CAD model placed in a virtual scene. This approach overcomes the difficulty of obtaining pose ground truth for real images. At the same time, rendering the CAD model under varied lighting conditions and material reflectances makes it possible to anticipate challenging industrial situations. As a first step of the method, the part is separated from the background using a semantic segmentation network. Then, a depth image of the part is produced by an encoder-decoder network with skip connections. Finally, the depth map is combined with the local pixel coordinates to estimate the part orientation with a fully connected network trained using an SO(3) metric loss function. The method estimates the part pose from real images with visually convincing results, suitable as input to any pose refinement process.
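The abstract does not spell out the exact form of the SO(3) metric loss; a common choice for quaternion outputs is the geodesic distance on the rotation manifold, which is antipodally symmetric since q and -q encode the same rotation. The sketch below is an illustrative NumPy implementation of that standard metric, not necessarily the authors' exact formulation.

```python
import numpy as np

def quat_geodesic_loss(q_pred, q_true):
    """Geodesic distance on SO(3) between two quaternions.

    Computes d = 2 * arccos(|<q_pred, q_true>|), the rotation angle
    separating the two orientations. The absolute value makes the
    metric invariant to the q / -q double cover of SO(3).
    """
    q_pred = q_pred / np.linalg.norm(q_pred)  # normalize predictions
    q_true = q_true / np.linalg.norm(q_true)
    dot = np.clip(np.abs(np.dot(q_pred, q_true)), 0.0, 1.0)
    return 2.0 * np.arccos(dot)

# Identical rotations give zero loss.
q_id = np.array([1.0, 0.0, 0.0, 0.0])
print(quat_geodesic_loss(q_id, q_id))   # ~0.0
# q and -q represent the same rotation, so the loss is also ~0.
print(quat_geodesic_loss(q_id, -q_id))  # ~0.0
# A 90-degree rotation about z yields a distance of pi/2.
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(quat_geodesic_loss(q_id, q_z90))  # ~pi/2
```

In a training loop this scalar would typically be averaged over the batch; the `arccos` form is numerically delicate near zero distance, which is why the dot product is clipped before the call.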