The rapid digitization of whole-slide images (WSIs) with slide scanners, together with advances in deep learning, has enabled the development of computerized image analysis algorithms for automated diagnosis, prognosis, and prediction of various cancers in digital pathology. These analyses can be enhanced and expedited by confining them to the relevant tumor regions of the large, multi-resolution WSIs. Detecting the tumor region of interest (TRoI) on WSIs makes it possible to automatically measure tumor size and to compute the distance to the resection margin. It also eases the identification of high-power fields (HPFs), which are essential for grading tumor proliferation scores. In practice, pathologists select these regions by visual inspection of WSIs, a cumbersome, time-consuming process that is subject to inter- and intra-pathologist variability. State-of-the-art deep learning methods perform well on the TRoI detection task using supervised algorithms; however, they require accurate TRoI and non-TRoI annotations for training. Acquiring such annotations is tedious and incurs observational variability. In this work, we propose a positive and unlabeled learning approach that uses a few examples of HPF regions (positive annotations) to localize invasive TRoIs on breast cancer WSIs. We use unsupervised deep autoencoders with Gaussian Mixture Model-based clustering to identify the TRoI in a patch-wise manner. The algorithm is developed on 90 HPF-annotated WSIs and validated on 30 fully annotated WSIs. Against the pathologists' annotations, it yielded a Dice coefficient of 75.21%, a true positive rate of 78.62%, and a true negative rate of 97.48% in a pixel-by-pixel evaluation. Significant correspondence between the results of the proposed algorithm and a state-of-the-art supervised ConvNet indicates the efficacy of the proposed approach.
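As a rough illustration of the patch-wise autoencoder-plus-GMM idea (a minimal sketch, not the paper's implementation), the code below trains a tiny one-hidden-layer autoencoder on synthetic stand-ins for patch feature vectors and then clusters the learned latent codes with a two-component Gaussian mixture. All data, dimensions, and hyperparameters here are hypothetical assumptions chosen only for the demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for patch feature vectors: two well-separated
# groups, loosely mimicking "tumor-like" vs "background-like" patches
# (hypothetical data; real inputs would be WSI patch features).
X = np.vstack([
    rng.normal(0.0, 0.5, size=(200, 16)),
    rng.normal(3.0, 0.5, size=(200, 16)),
])

# Tiny autoencoder: tanh encoder, linear decoder, trained by
# full-batch gradient descent on the mean squared reconstruction error.
d, h = X.shape[1], 4
W1 = rng.normal(0, 0.1, (d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, d)); b2 = np.zeros(d)
lr = 0.01
for _ in range(500):
    Z = np.tanh(X @ W1 + b1)              # latent codes
    Xhat = Z @ W2 + b2                    # reconstruction
    err = Xhat - X
    dW2 = Z.T @ err / len(X); db2 = err.mean(0)
    dZ = err @ W2.T * (1 - Z ** 2)        # backprop through tanh
    dW1 = X.T @ dZ / len(X); db1 = dZ.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Cluster the latent codes with a 2-component Gaussian Mixture Model;
# one component would play the role of the TRoI cluster.
codes = np.tanh(X @ W1 + b1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(codes)
labels = gmm.predict(codes)
```

In a positive-and-unlabeled setting such as the one the abstract describes, the few HPF (positive) annotations could then be used to decide which mixture component corresponds to tumor tissue, e.g. by checking which component the annotated patches fall into.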