Visual privacy protection, i.e., the obfuscation of personal visual information in video surveillance, is an important and increasingly popular research topic. However, while many datasets are available for evaluating the performance of various video analytics, little to nothing exists for the evaluation of visual privacy tools. Since surveillance and privacy protection have contradictory objectives, the design principles of the corresponding evaluation datasets should differ too. In this paper, we outline the principles that need to be considered when building a dataset for privacy evaluation. Following these principles, we present the new, and to our knowledge first, Privacy Evaluation Video Dataset (PEViD). With the dataset, we provide XML-based annotations of various privacy regions, including face, accessories, skin regions, hair, body silhouette, and other personal information, together with their descriptions. Via preliminary subjective tests, we demonstrate the flexibility and suitability of the dataset for privacy evaluations. The evaluation results also show the importance of secondary privacy regions, which contain non-facial personal information, for the privacy-intelligibility tradeoff. We believe that the PEViD dataset is equally suitable for evaluations of privacy protection tools using objective metrics and subjective assessments.