Echizen Laboratory, National Institute of Informatics
2-1-2, Hitotsubashi,
Chiyoda-ku, Tokyo, 101-8430, JAPAN
TEL:+81-3-4212-2516
FAX:+81-3-4212-2120

Research projects

Digital media, such as pictures, movies, and music, have become widely available due to their advantages over analog media. However, copyright infringement, information leaks, and alteration remain problematic with digital content because it can easily be modified, copied, and sent illegally over the Internet.

Our goal is to establish content security technologies and systems for the fair use of digital content. We have developed, and continue to develop, fundamental content security technologies such as information hiding and content anonymization, as well as content security systems that provide copyright protection, information-leakage protection, authenticity assurance, and other security measures. Below are example research projects of our laboratory.

Security technologies for overcoming the analog-hole problem

Privacy-enhancing technologies for resolving trade-offs between data anonymity and utility

Information hiding

Security technologies for overcoming the analog-hole problem

Technologies for preventing unauthorized copying through encryption are widely used to prevent the disclosure of personal and confidential information and to protect the copyright of pictures and images. However, once digital information has been converted to analog format and shown on a display or a screen, a digital camera can capture the displayed analog information and invalidate the encryption (the analog-hole problem).

Copyright infringement in which digital cameras are used for the unauthorized copying of footage shown on movie-theater screens is already frequent; the copies are then illegally sold or made available on movie distribution sites or as bootleg copies. In Japan alone, losses due to the unauthorized copying of films are said to be 18 billion yen a year. There has also been a case in which national secrets were disclosed by an air traffic controller at Haneda Airport who used a digital camera to photograph displays showing flight information, including Air Force One flight plans, and then posted the photographs on his blog.

In addition, there are concerns about the increasingly high quality of unauthorized images due to continuing improvements in the functions of display devices and photographic equipment. The prevention of unauthorized copying of displays and screens is an essential countermeasure, requiring urgent steps to prevent information disclosure and to protect copyrights.

 

Technology to prevent unauthorized copying of screens (collaboration with Kogakuin University)

We developed a technology that prevents unauthorized copying of films shown on a screen. This technology utilizes the differences in spectral sensitivity characteristics between human beings and imaging devices. By installing a near-infrared-ray light source, which superimposes noise on video images without affecting human vision, on the back of existing movie screens, unauthorized copying of the images shown on the screen can be prevented without adding any new functions to digital cameras.
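The underlying principle, a difference in spectral sensitivity between humans and image sensors, can be illustrated with a small numerical sketch (all values and array shapes below are hypothetical, not measured data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visible-light luminance of a projected frame.
visible = rng.integers(0, 256, size=(4, 4)).astype(float)

# Near-infrared noise emitted from behind the screen. Humans cannot
# perceive this band, but typical CMOS/CCD sensors are sensitive to it.
ir_noise = rng.uniform(10, 80, size=(4, 4))

human_view = visible              # the audience sees only the visible band
camera_view = visible + ir_noise  # a recording also picks up the IR noise

# The recorded copy deviates from what the audience perceived.
print(np.abs(camera_view - human_view).mean())
```

Because the superimposed noise lies entirely in the near-infrared band, it degrades only the recording, not the audience's view.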


 

Technology to prevent unauthorized copying of displays (collaboration with Kogakuin University)

We have also developed a new technology by applying this approach to the unauthorized copying of displays. In a similar manner, simply equipping existing displays with a unit that prevents unauthorized copying, without adding any new functions to digital cameras, enables copyright protection for picture and image content and prevents the disclosure of confidential and personal information through unauthorized copying of displays. Beyond preventing such disclosure, an issue of growing importance, this technology is also expected to find broad application in preventing the unauthorized photographing of works of art, factory equipment, and other objects subject to photographic restrictions.


Interference effect on unauthorized copying of images (17-inch LCD; left: without noise, right: with noise)

Reference

  1. NII Press Release, “Technology to prevent unauthorized copying of displays by utilizing differences in sensitivity between human beings and devices - Prevent disclosure of confidential and personal information through unauthorized copying of displays,” July 4, 2011
  2. NII Interview: Preventing Surreptitious Filming in the Divide Between Digital and Physical, NII Today, No. 51, pp. 2-3 (February 2011)
  3. T. Yamada, S. Gohshi, and I. Echizen, “iCabinet: Stand-alone implementation of a method for preventing illegal recording of displayed content by adding invisible noise signals,” Proc. of the ACM Multimedia 2011 (ACM MM 2011), pp. 771-772 (November 2011)
  4. T. Yamada, S. Gohshi, and I. Echizen, “Countermeasure of re-recording prevention against attack with short wavelength pass filter,” Proc. of the 2011 IEEE 18th International Conference on Image Processing (ICIP2011), pp. 2753-2756 (September 2011)
  5. T. Yamada, S. Gohshi, and I. Echizen, “Re-shooting prevention based on difference between sensory perceptions of humans and devices,” Proc. of the 17th International Conference on Image Processing (ICIP 2010), pp.993-996 (September 2010)

 

Privacy-enhancing technologies for resolving trade-offs between data anonymity and utility

Privacy Visor

Due to developments in the ubiquitous information society, computers, sensors, and their networks are now located everywhere, and useful services can be received at all times and in all spaces of our lives. On the other hand, the popularization of portable terminals with built-in cameras, GPS, and other sensors has made it easy for private information to be disclosed. In particular, invasion of the privacy of photographed subjects is becoming a social problem: photographs taken without the subjects' permission, or in which subjects are unintentionally captured, are disclosed by photographers on social networking services (SNSs) together with photographic information.

With the development of facial recognition technology in services such as Google Images and Facebook, and the popularization of portable terminals that append metadata (geotags) recording the location and time a photo was taken, information such as when and where a subject was can be revealed through photos taken and disclosed without the subject's permission. Essential measures are therefore required to prevent the invasion of privacy caused by photographs taken in secret and by unintentional capture in camera images.

The privacy risks of unintentional capture have already been pointed out in Europe and other regions. In experiments conducted at Carnegie Mellon University (CMU), the names of close to a third of the subjects who had agreed to be photographed could be identified by comparison with photos disclosed on SNSs; in some cases, the subjects' interests and even some social security numbers were also uncovered.
Furthermore, due to concerns about the invasion of privacy by SNS facial recognition functions, the European Union (EU) has requested that Facebook disable facial recognition for European users.

Against this backdrop, we were the first in the world to develop a technology for protecting photographed subjects from the invasion of privacy caused by photographs taken in secret and by unintentional capture in camera images. This technology exploits the differences between human visual perception and the spectral sensitivity characteristics of camera imaging devices: face detection of photographed subjects is made to fail only while a photo is being taken, without adding any new functions to existing cameras. This is achieved by the subject wearing a wearable device, a privacy visor, equipped with near-infrared LEDs that add noise to photographed images without affecting human vision.

Privacy Visor

Execution of face detection

New version of the Privacy Visor without a power supply
(designed by Tsuyoshi Ando, Airscape Architects Studio)

Reference

  1. NII Press Release, “Privacy Protection Techniques Using Differences in Human and Device Sensitivity - Protecting Photographed Subjects against Invasion of Privacy Caused by Unintentional Capture in Camera Images -,” December 12, 2012
  2. BBC News (UK), “Privacy visor blocks facial recognition software,” January 22, 2013
  3. BBC Click (BBC One, BBC Two, BBC News Channel, and BBC World News Channel), “Infrared glasses to thwart embarrassing Facebook photos,” January 26-27, 2013
  4. NBC News, “LED-powered 'privacy visor' thwarts facial recognition,” June 20, 2013
  5. TIME, “Leery of Facial Recognition? These Glasses Might Help,” June 20, 2013
  6. T. Yamada, S. Gohshi, and I. Echizen, "Use of invisible noise signals to prevent privacy invasion through face recognition from camera images," Proc. of the ACM Multimedia 2012 (ACM MM 2012), pp.1315-1316, (October 2012)
  7. T. Yamada, S. Gohshi, and I. Echizen, “Privacy Visor: Method for Preventing Face Image Detection by Using Differences in Human and Device Sensitivity,” Proc. of the 14th Joint IFIP TC6 and TC11 Conference on Communications and Multimedia Security (CMS 2013), 10 pages, (September 2013)
  8. T. Yamada, S. Gohshi, and I. Echizen, “Privacy Visor: Method based on Light Absorbing and Reflecting Properties for Preventing Face Image Detection,” Proc. of the 2013 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2013), 6 pages, (October 2013)
  9. NII Today, No.50, Get Control of Personal Data Back in Our Hands -PrivacyVisor raises discussion on arbitrary facial recognition- (June 2014)


Fingerprinting technologies for anonymizing data (collaboration with the Vienna University of Technology)

Another area of concern is the use of statistical data, including personal survey data collected with the assurance that they would be used only within the company or research institution, beyond organizational boundaries while maintaining a certain degree of anonymity.

For example, if medical data, which typically include the patient's name, address, age, disease, and medication, were made publicly available without change, the patient could be identified. To prevent this, it is necessary to blur the patient's attributes by deleting the name and address and by generalizing some of the information; for example, "Tokyo" could be generalized to "Japan," and "age 32" could be generalized to "thirties." Since more than one person would usually share the same generalized attribute, individuals could not be identified.
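This generalization step can be sketched in a few lines of Python (the function, field names, and region mapping are illustrative assumptions, not part of an actual anonymization system):

```python
# Hypothetical mapping used to coarsen locations, e.g. "Tokyo" -> "Japan".
REGION_MAP = {"Tokyo": "Japan", "Osaka": "Japan"}

def generalize(record):
    """Delete direct identifiers and coarsen quasi-identifiers."""
    out = dict(record)
    out.pop("name", None)                  # delete the name ...
    address = out.pop("address", None)     # ... and the address
    out["region"] = REGION_MAP.get(address, "unknown")
    out["age"] = f"{record['age'] // 10 * 10}s"  # e.g. 32 -> "30s"
    return out

patient = {"name": "Taro", "address": "Tokyo", "age": 32,
           "disease": "influenza", "medication": "oseltamivir"}
print(generalize(patient))
# {'age': '30s', 'disease': 'influenza', 'medication': 'oseltamivir', 'region': 'Japan'}
```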

However, this generalization approach to ensuring anonymity impairs the value and accuracy of data. In other words, there is a trade-off between the level of data anonymity and the academic utility of the data. Data anonymity has traditionally been emphasized on the assumption that anonymized data could be made freely available. Nowadays, the degree of anonymization has been lowered to increase data utility while more emphasis has been placed on measures to prevent data leaks.

We have developed a method, called "fingerprinting of anonymized data," for identifying the source of leaked anonymized data by associating each individual anonymization process with user identification data. This approach capitalizes on the multiplicity of data anonymization processes: there are many different processes that achieve the same degree of anonymization. Suppose the data consist solely of birth date and gender. User A is provided with a data file containing "1971" and "male," while User B is provided with one containing "August 10, 1971" and "gender unknown." Our method prepares, for each user, a data set generated by a user-specific anonymization process, with every prepared set having the same level of anonymization. In the event of data leakage, the association between anonymization process and user ID helps identify the person responsible. Moreover, users' awareness of this identification method should make them more careful about data management; the anonymization processes themselves thus deter data leakage.
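A minimal sketch of the idea, using the birth-date/gender example above (the dictionary-based matching is a simplification of the published fingerprinting algorithm):

```python
# Each user receives a differently anonymized variant of the same record,
# all at a comparable anonymity level, so a leaked variant identifies its
# recipient. User IDs and field names are illustrative.
VARIANTS = {
    "user_A": {"birth": "1971", "gender": "male"},          # coarsened date
    "user_B": {"birth": "1971-08-10", "gender": "unknown"}, # suppressed gender
}

def identify_leaker(leaked):
    """Match a leaked record against the variant issued to each user."""
    return [uid for uid, variant in VARIANTS.items() if variant == leaked]

leaked = {"birth": "1971", "gender": "male"}
print(identify_leaker(leaked))  # ['user_A']
```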

Application of this method to social networking services (SNSs) and blogs would enable the source of a privacy leak to be identified from an analysis of the text containing the leaked information. In this application, not only would the anonymization process used vary with the user, but the degree of anonymization would vary with the group.

Reference

  1. S. Schrittwieser, P. Kieseberg, I. Echizen, S. Wohlgemuth, and N. Sonehara. “Using Generalization Patterns for Fingerprinting Sets of Partially Anonymized Microdata in the Course of Disasters,” In International Workshop on Resilience and IT-Risk in Social Infrastructures (RISI 2011), Proc. of the 6th ARES conference (ARES 2011), IEEE Computer Society, pp. 645-649 (August 2011)
  2. S. Schrittwieser, P. Kieseberg, I. Echizen, S. Wohlgemuth, N. Sonehara, and E. Weippl, “An Algorithm for k-anonymity-based Fingerprinting,” Proc. of the 10th International Workshop on Digital Watermarking (IWDW 2011), LNCS, 14 pages, Springer (October 2011)
  3. H. Nguyen-Son, Q. Nguyen, M. Tran, D. Nguyen, H. Yoshiura, and I. Echizen, "New Approach to Anonymity of User Information on Social Networking Services," The 6th International Symposium on Digital Forensics and Information Security (DFIS-12), Proc. of the 7th FTRA International Conference on Future Information Technology (FutureTech2012), 8 pages (June 2012)

 

Privacy in business processes (collaboration with the University of Freiburg)

The objective is to enhance the current trust model, in which data owners must trust that data consumers follow the agreed-upon obligations for processing their data. In this project, data owners should control the enforcement of obligations concerning the disclosure of their personal data by data providers. Service providers should be able to prove the enforcement of obligations and thereby show that personal data are used in accordance with the Japanese Act on the Protection of Personal Information and the European Data Protection Directive. This is intended to support the exchange of personal data between Japanese and European service providers. An information system is being developed so that data owners can check the enforcement of these obligations. The foundation of this approach is information flow control mechanisms that trace the flow of personal data, e.g., by means of modified digital watermarking schemes.

As an ex post enforcement of privacy policies, our proposal for the traceable disclosure of personal data to third parties uses data provenance histories and modified digital watermarking schemes. The expected result is a novel privacy management scheme that introduces new higher-level cryptographic protocols realizing a traceable linkage of personal data, covering several disclosures of the same data, via their data provenance history.

The concept is to tag every disclosure of given personal data between two parties (signaling). Tagging gives data providers and consumers the proof they need to show that the disclosure and receipt of given personal data comply with the agreed-upon obligations (monitoring). The tag for personal data d consists of the data provider's identity and the data consumer's identity in the orchestration of services being used, as well as the corresponding user's identity and a pointer to the agreed-upon obligations, since the obligations should be modifiable if the purpose of the data usage or the participating service providers change. The tag should stick to d, so that d* = d|tag can be disclosed while assuring the integrity of d*. If d* is disclosed again in compliance with the obligations, the tag is updated with the identity of the previous data consumer and the identity of the new data consumer. The sequence of tags for the same personal data thus constitutes a disclosure chain, which represents the flow of those personal data.
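The tagging and chain-update steps can be sketched as follows; the tag fields and the hash-based integrity digest are illustrative assumptions, not the actual cryptographic protocol:

```python
import hashlib
import json

def disclose(d_star, provider, consumer, obligations_ref):
    """Append a tag for this disclosure and refresh the integrity digest."""
    tag = {"provider": provider, "consumer": consumer,
           "obligations": obligations_ref}
    d_star["chain"].append(tag)
    # Bind data and chain together so tampering with either is detectable.
    payload = json.dumps({"data": d_star["data"], "chain": d_star["chain"]},
                         sort_keys=True)
    d_star["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return d_star

# d* = d|tag: the data plus its (growing) disclosure chain.
d_star = {"data": {"name": "patient-042"}, "chain": [], "digest": None}
d_star = disclose(d_star, "hospital", "insurer", "obligation-policy-1")
d_star = disclose(d_star, "insurer", "research-lab", "obligation-policy-2")

# The sequence of tags is the disclosure chain: it records the flow of d.
print([t["consumer"] for t in d_star["chain"]])  # ['insurer', 'research-lab']
```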

This is one option for checking the authenticity and confidentiality of data in adaptive ICT systems, where systems are orchestrated on demand to deliver requested services in real time. A confidentiality check implies taking the interdependencies of the participating systems into account. Even in the case of an information leak through a covert channel, our approach should identify the data consumer at which the leak occurred and hence provide evidence of whether the orchestration is threatened by a covert channel.

Reference

  1. N. Sonehara, I. Echizen, and S. Wohlgemuth, “Isolation in Cloud Computing and Privacy-Enhancing Technologies: Suitability of Privacy-Enhancing Technologies for Separating Data Usage in Business Processes,” Business Information Systems Engineering (BISE)/WIRTSCHAFTSINFORMATIK", vol. 3, no. 3, pp. 155-162, Gabler (June 2011)
  2. S. Haas, S. Wohlgemuth, I. Echizen, N. Sonehara and G. Mueller, “Aspects of Privacy for Electronic Health Records”, International Journal of Medical Informatics, Special Issue: Security in Health Information Systems 80(2), pp.e26-e31, Elsevier, http://dx.doi.org/10.1016/j.ijmedinf.2010.10.001 (February 2011)
  3. S. Wohlgemuth, I. Echizen, N. Sonehara and G. Mueller, “Privacy-compliant Disclosure of Personal Data to Third Parties”, International Journal it - Information Technology 52(6), Oldenbourg, pp. 350-355 (December 2010)
  4. S. Wohlgemuth, I. Echizen, N. Sonehara, and G. Mueller, “Tagging Disclosures of Personal Data to Third Parties to Preserve Privacy,” Proc. the 25th IFIP TC-11 International Information Security Conference (IFIP SEC 2010), to be published in IFIP AICT series, Springer, pp.241-252 (September 2010) <One of the best papers>

 

Information hiding

Color picture watermarking surviving a wide range of geometrical transformations (collaboration with the University of Electro-Communications)

We have developed a robust video watermarking method that can embed watermarks immune not only to rotation, scaling, and translation but also to random geometric distortion and any combination of these. The watermarks are embedded in two constituent planes (e.g., the U and V planes) of a color picture and are detected by evaluating the correlation between the planes. The method thus requires no search to handle random distortion.
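A simplified sketch of correlation-based detection between the two planes (the embedding rule below, adding one pseudorandom ±1 pattern to both chrominance planes, is our own minimal stand-in for the published method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical host chrominance planes (zero-mean for simplicity).
U = rng.normal(0, 5, size=(64, 64))
V = rng.normal(0, 5, size=(64, 64))

# One pseudorandom +/-1 pattern embedded into BOTH planes with strength a.
w = rng.choice([-1.0, 1.0], size=(64, 64))
a = 3.0
U_wm, V_wm = U + a * w, V + a * w

def plane_correlation(p, q):
    """Normalized correlation between two chrominance planes."""
    p, q = p - p.mean(), q - q.mean()
    return float((p * q).sum() / np.sqrt((p * p).sum() * (q * q).sum()))

# Watermarked planes correlate strongly; unmarked planes do not.
print(plane_correlation(U_wm, V_wm), plane_correlation(U, V))
```

Because any geometric distortion displaces the U and V planes identically, the correlation between them reveals the watermark after rotation, scaling, translation, or random bending, with no search over distortion parameters.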

 

Maintaining picture quality of color picture watermarking based on the human visual system (collaboration with the University of Electro-Communications)

We have proposed a method that takes into account the human visual system's processing of color information to maintain picture quality better than a previously reported watermarking method. The method determines the watermark strength in the uniform color space (L*u*v* space), where human-perceived degradation of picture quality can be measured in terms of Euclidean distance, and embeds and detects watermarks in the YUV space, where detection is more reliable than in the original method.
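Measuring perceived degradation as Euclidean distance in L*u*v* can be sketched with the standard CIE conversion formulas (the XYZ sample values below are illustrative, not from the published experiments):

```python
import numpy as np

XN, YN, ZN = 95.047, 100.0, 108.883  # D65 reference white (CIE)

def xyz_to_luv(X, Y, Z):
    """Standard CIE XYZ -> L*u*v* conversion."""
    d = X + 15 * Y + 3 * Z
    up, vp = 4 * X / d, 9 * Y / d
    dn = XN + 15 * YN + 3 * ZN
    upn, vpn = 4 * XN / dn, 9 * YN / dn
    y = Y / YN
    L = 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else (29 / 3) ** 3 * y
    return np.array([L, 13 * L * (up - upn), 13 * L * (vp - vpn)])

def delta_e_luv(c1, c2):
    """Perceived difference between two XYZ colors: Euclidean distance
    in the (approximately uniform) L*u*v* space."""
    return float(np.linalg.norm(xyz_to_luv(*c1) - xyz_to_luv(*c2)))

original = (41.24, 21.26, 1.93)   # XYZ of an sRGB red patch
watermarked = (41.5, 21.5, 2.1)   # slightly perturbed by embedding
print(delta_e_luv(original, watermarked))
```

Keeping this distance below a visibility threshold while maximizing watermark strength is the trade-off the method optimizes.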

 

Reversible acoustic information hiding for integrity verification (collaboration with the Tokyo University of Information Sciences)

We have developed a reversible information-hiding method that can verify the integrity of acoustic data of probative importance and protect it from illegal use. A hash function provides the feature value that is embedded into the original acoustic data as a checksum of the data's originality. The original signal is processed with an integer discrete cosine transform (intDCT), which has low computational complexity. Embedding space in the DCT domain is reserved for the feature values and extra payload data through amplitude expansion in the high-frequency spectrum of the cover data. Countermeasures against overflow/underflow are taken through adaptive gain optimization.
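The integrity-verification idea can be sketched as follows; for brevity this sketch embeds the hash by amplitude expansion directly on the integer samples rather than in the intDCT domain, and omits payload data and overflow handling:

```python
import hashlib

def embed(samples):
    """Embed the SHA-256 hash of the original samples into the signal."""
    digest = hashlib.sha256(str(samples).encode()).digest()
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]  # 256 bits
    assert len(samples) >= len(bits)
    # Amplitude expansion s -> 2*s + bit is exactly invertible on integers.
    return [2 * s + b for s, b in zip(samples, bits)] + samples[len(bits):]

def extract_and_verify(stego, n_bits=256):
    """Recover the original samples and check them against the embedded hash."""
    samples = [s // 2 for s in stego[:n_bits]] + stego[n_bits:]
    bits = [s % 2 for s in stego[:n_bits]]
    digest = bytes(sum(bits[i * 8 + j] << j for j in range(8))
                   for i in range(n_bits // 8))
    ok = digest == hashlib.sha256(str(samples).encode()).digest()
    return samples, ok

original = list(range(300))  # stand-in for integer audio samples
stego = embed(original)
restored, authentic = extract_and_verify(stego)
print(restored == original, authentic)  # True True
```

Any modification of the stego signal changes either the recovered samples or the embedded hash bits, so the checksum no longer matches and the tampering is detected.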

Reference

  1. Y. Atomori, I. Echizen, and H. Yoshiura, “Picture Watermarks Surviving General Affine Transformation and Random Distortion,” International Journal of Innovative Computing, Information and Control, vol.6, no.3(B), pp.1289-1304 (March 2010)
  2. X. Huang, A. Nishimura, and I. Echizen, “A Reversible Acoustic Steganography for Integrity verification,” Proc. of the 9th International Workshop on Digital Watermarking (IWDW 2010), LNCS 6526, pp.305-316, Springer (October 2010)
  3. I. Echizen, Y. Atomori, S. Nakayama, and H. Yoshiura, “Use of Human Visual System to Improve Video Watermarking for Immunity to Rotation, Scale, Translation, and Random Distortion,” Circuits, Systems and Signal Processing (CSSP), vol.27, no.2, pp. 213-227 (April 2008)
