Privacy is a major concern in current video surveillance systems. Due to privacy issues, many strategic places remain unmonitored, leading to security threats. The main problem with existing privacy protection methods is that they assume the availability of accurate region-of-interest (RoI) detectors that can detect and hide privacy-sensitive regions such as faces. However, current detectors are not fully reliable, leading to breaches in privacy protection. In this paper, we propose a privacy protection method that adopts adaptive data transformation, combining selective obfuscation with global operations to provide robust privacy even with unreliable detectors. Further, there are many implicit privacy-leakage channels that have not been considered by researchers for privacy protection. We block both implicit and explicit channels of privacy leakage. Experimental results show that the proposed method incurs 38% less distortion of the information needed for surveillance than earlier methods based on global transformation, while still providing near-zero privacy loss.

1. Introduction

To perform privacy-preserving CCTV monitoring, video data should be transformed in such a way that identity-revealing information is hidden but the intended surveillance tasks can still be accomplished. The traditional approach to data transformation has been to detect the regions of interest (RoI) in the images (e.g., human faces) and selectively obfuscate them. This approach is unreliable because RoI detectors sometimes fail. For example, even if a face detector correctly detects the face in 99 out of 100 frames, the undetected face in the remaining frame will reveal the identity of the person in the video and result in his/her privacy loss.
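The selective-obfuscation approach described above can be sketched as follows. The `detect_faces` stub and the `pixelate` helper are hypothetical, for illustration only; a real system would plug in an actual face detector here, and any frame in which that detector misses a face would pass through untouched, which is exactly the failure mode discussed above.

```python
# Sketch of selective obfuscation: pixelate only the detected RoIs.
# detect_faces is a hypothetical stub standing in for a real detector,
# which may fail to return a box for some frames (the privacy leak).

def detect_faces(frame):
    """Hypothetical detector: returns (row, col, height, width) boxes."""
    return [(2, 2, 4, 4)]  # pretend one face was found at this location

def pixelate(frame, box, block=2):
    """Replace each block x block tile inside the box with its mean value."""
    r0, c0, h, w = box
    for r in range(r0, r0 + h, block):
        for c in range(c0, c0 + w, block):
            tile = [frame[i][j]
                    for i in range(r, min(r + block, r0 + h))
                    for j in range(c, min(c + block, c0 + w))]
            mean = sum(tile) // len(tile)
            for i in range(r, min(r + block, r0 + h)):
                for j in range(c, min(c + block, c0 + w)):
                    frame[i][j] = mean

def obfuscate(frame):
    """Selectively obfuscate: pixels outside every RoI stay unmodified."""
    for box in detect_faces(frame):
        pixelate(frame, box)
    return frame

# Toy 8x8 grayscale "frame" with distinct pixel values.
frame = [[i * 8 + j for j in range(8)] for i in range(8)]
obfuscate(frame)
```

If `detect_faces` returns an empty list for a frame, `obfuscate` leaves that frame fully intact, illustrating why selective obfuscation alone cannot guarantee privacy with imperfect detectors.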
In another set of works, global operations have been used for data transformation, in which the whole video frame is transformed with the same intensity, that is, with the same amount of blurring or quantization [1]. This approach is more appropriate in the context of data publication, where the published surveillance video is used by researchers for testing their algorithms. In contrast to the data-publication scenario, the CCTV monitoring scenario has different requirements. In the case of CCTV monitoring, a human operator is required to watch the surveillance video feeds, although automated techniques may run in the background as shown in Figure 1. The automatic analysis can be performed on the original data, which, unlike in data publication, is not accessible for viewing. The original data may be
References
[1] M. Boyle, C. Edwards, and S. Greenberg, “The effects of filtered video on awareness and privacy,” in Proceedings of the ACM Conference on Computer Supported Cooperative Work, pp. 1–10, December 2000.
[2] M. Saini, P. K. Atrey, S. Mehrotra, S. Emmanuel, and M. Kankanhalli, “Privacy modeling for video data publication,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '10), pp. 60–65, July 2010.
[3] A. Senior, S. Pankanti, A. Hampapur et al., “Enabling video privacy through computer vision,” IEEE Security and Privacy, vol. 3, no. 3, pp. 50–57, 2005.
[4] D. A. Fidaleo, H. A. Nguyen, and M. Trivedi, “The networked sensor tapestry (NeST): a privacy enhanced software architecture for interactive analysis of data in video-sensor networks,” in Proceedings of the 2nd ACM International Workshop on Video Surveillance and Sensor Networks (VSSN '04), pp. 46–53, 2004.
[5] J. Wickramasuriya, M. Datt, S. Mehrotra, and N. Venkatasubramanian, “Privacy protecting data collection in media spaces,” in Proceedings of the 12th ACM International Conference on Multimedia, pp. 48–55, October 2004.
[6] T. Koshimizu, T. Toriyama, and N. Babaguchi, “Factors on the sense of privacy in video surveillance,” in Proceedings of the 3rd ACM Workshop on Continuous Archival and Retrieval of Personal Experiences (CARPE '06), pp. 35–43, 2006.
[7] B. Thuraisingham, G. Lavee, E. Bertino, J. Fan, and L. Khan, “Access control, confidentiality and privacy for video surveillance databases,” in Proceedings of the 11th ACM Symposium on Access Control Models and Technologies (SACMAT '06), pp. 1–10, June 2006.
[8] P. Carrillo, H. Kalva, and S. Magliveras, “Compression independent object encryption for ensuring privacy in video surveillance,” in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '08), pp. 273–276, June 2008.
[9] J. K. Paruchuri, S. C. S. Cheung, and M. W. Hail, “Video data hiding for managing privacy information in surveillance systems,” EURASIP Journal on Information Security, vol. 2009, Article ID 236139, 7 pages, 2009.
[10] F. Z. Qureshi, “Object-video streams for preserving privacy in video surveillance,” in Proceedings of the 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS '09), pp. 442–447, 2009.
[11] S. Moncrieff, S. Venkatesh, and G. West, “Dynamic privacy assessment in a smart house environment using multimodal sensing,” ACM Transactions on Multimedia Computing, Communications and Applications, vol. 5, no. 2, pp. 1–29, 2008.
[12] T. Spindler, C. Wartmann, and L. Hovestadt, “Privacy in video surveilled areas,” in Proceedings of the ACM International Conference on Privacy, Security and Trust, pp. 1–10, 2006.
[13] A. Elgammal, R. Duraiswami, D. Harwood, and L. S. Davis, “Background and foreground modeling using nonparametric kernel density estimation for visual surveillance,” Proceedings of the IEEE, vol. 90, no. 7, pp. 1151–1163, 2002.
[14] H. Kruegle, CCTV Surveillance: Analog and Digital Video Practices and Technology, Butterworth-Heinemann, Boston, Mass, USA, 2006.
[15] R. Kasturi, D. Goldgof, P. Soundararajan et al., “Framework for performance evaluation of face, text, and vehicle detection and tracking in video: data, metrics, and protocol,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 319–336, 2009.
[16] E. Hjelmås and B. K. Low, “Face detection: a survey,” Computer Vision and Image Understanding, vol. 83, no. 3, pp. 236–274, 2001.
[17] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[18] S. Chikkerur, V. Sundaram, M. Reisslein, and L. J. Karam, “Objective video quality assessment methods: a classification, review, and performance comparison,” IEEE Transactions on Broadcasting, vol. 57, no. 2, pp. 165–182, 2011.
[19] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, “Study of subjective and objective quality assessment of video,” IEEE Transactions on Image Processing, vol. 19, no. 6, Article ID 5404314, pp. 1427–1441, 2010.
[20] PETS, “Performance evaluation of tracking and surveillance,” 2000–2011, http://www.cvg.cs.rdg.ac.uk/slides/pets.html.
[21] C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '99), vol. 2, pp. 246–252, June 1999.