ORIGINAL ARTICLE
Year : 2015  |  Volume : 1  |  Issue : 1  |  Page : 33-37

Study on Accuracy of Judgments by Chinese Fingerprint Examiners


1 School of Criminal Science and Technology, People's Public Security University of China, Beijing, China
2 School of Criminal Science, University of Lausanne, Lausanne, Switzerland
3 College of Science, Northeastern University, Shenyang, China

Date of Web Publication: 29-May-2015

Correspondence Address:
Yaping Luo
School of Criminal Science and Technology, People's Public Security University of China, Beijing
China

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2349-5014.157908

  Abstract 

The interpretation of fingerprint evidence depends on the judgments of fingerprint examiners. This study assessed the accuracy of the different judgments made by fingerprint examiners following the Analysis, Comparison, and Evaluation (ACE) process. Each examiner was given five marks to analyze, compare, and evaluate. We compared the experts' judgments against the ground truth and, using an annotation platform, evaluated how Chinese fingerprint examiners document their comparisons during the identification process. The results showed that examiners differed both in the accuracy of their judgments and in the mechanisms by which they reached them.

Keywords: Accuracy, fingerprint identification, forensic science


How to cite this article:
Liu S, Champod C, Wu J, Luo Y. Study on Accuracy of Judgments by Chinese Fingerprint Examiners. J Forensic Sci Med 2015;1:33-7

How to cite this URL:
Liu S, Champod C, Wu J, Luo Y. Study on Accuracy of Judgments by Chinese Fingerprint Examiners. J Forensic Sci Med [serial online] 2015 [cited 2019 Nov 17];1:33-7. Available from: http://www.jfsmonline.com/text.asp?2015/1/1/33/157908


  Introduction


To conduct the assessment of fingermarks and their subsequent comparison with prints, fingerprint examiners follow a protocol called ACE-V, an acronym that represents the four phases of fingerprint examination: analysis, comparison, evaluation, and verification. [1] The analysis phase is an information-gathering and pre-evaluation phase covering mark quality, minutiae quantity, and degradation factors. The comparison phase is a side-by-side examination of the information from the analysis phase against the information on a control print. The evaluation phase is the weighing phase in which the final decision is made.

During the analysis phase, fingerprint examiners assess the "value" of the mark; following the comparison phase, they judge whether or not the mark and the print share a common source. Both tasks rely on expert judgment, as they are driven mainly by the training and experience of the examiners. In such a context, where human judgment plays such a leading role, assessing the accuracy of these decisions is pivotal to providing an empirical validation of the use of papillary marks in forensic science.

There is a growing body of research studying how proficient fingerprint experts are in their decisions. [2],[3],[4],[5],[6],[7],[8],[9] To our knowledge, no such measurement of accuracy has previously been undertaken in China.


  Materials and Methods


Approximately 40 fingerprint examiners from the Beijing and Shanghai forensic science laboratories were invited to take part in this study. The design of the experiment mimics the approach recently used by Neumann et al. [9] Data were acquired through a web-based fingerprint comparison platform that allowed the examiners to annotate marks and prints and to record all of their judgments. The data acquisition was anonymous. The annotation platform is based on the Picture Annotation System (PiAnoS; Thibault Genessay, Lausanne, Switzerland), which was translated into Chinese. All data were stored as JSON text, converted into an MS Excel sheet by a web-based program, and then analyzed using the R statistical software.
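The web-based conversion program used in the study is not described in detail here. Purely as an illustration of this kind of JSON-to-spreadsheet step, a minimal Python sketch follows; the file layout and field names (examiner_id, trial, value_decision, conclusion, minutiae) are assumptions for illustration, not the platform's actual schema.

```python
# Minimal sketch (not the study's actual program) of flattening per-examiner
# JSON exports into one tabulated spreadsheet. Field names are hypothetical.
import json
from pathlib import Path

import pandas as pd  # openpyxl is also needed for the Excel export


def tabulate_annotations(json_dir: str, out_xlsx: str) -> pd.DataFrame:
    """Flatten each examiner's JSON export into one row per trial and save to Excel."""
    rows = []
    for path in sorted(Path(json_dir).glob("*.json")):
        record = json.loads(path.read_text(encoding="utf-8"))
        for trial in record["trials"]:
            rows.append(
                {
                    "examiner_id": record["examiner_id"],
                    "trial": trial["trial"],
                    "value_decision": trial["value_decision"],  # VID / VEO / NV
                    "conclusion": trial["conclusion"],          # identification / exclusion / inconclusive
                    "n_minutiae": len(trial.get("minutiae", [])),
                }
            )
    table = pd.DataFrame(rows)
    table.to_excel(out_xlsx, index=False)  # the tabulated sheet can then be read into R
    return table
```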

The platform provides an environment that allows fingerprint examiners to perform each fingerprint trial through the analysis, comparison, and evaluation phases. During the analysis phase, examiners were asked to do the following:

  1. Annotate the quality and minutiae of the mark,
  2. Choose the degradation aspects of the mark,
  3. Indicate the assessment of the three levels of detail, and
  4. Provide a decision regarding the value of the mark for further examination. The mark could be declared of value for identification (VID), of value for exclusion only (VEO), or of no value (NV); a sketch of such an annotation record is shown after this list.
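Purely as an illustration of the kind of record each examiner produces in the analysis phase, the following sketch uses hypothetical Python types; the actual PiAnoS data model may be organized differently.

```python
# Illustrative only: hypothetical types for one analysis-phase record.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class ValueDecision(Enum):
    VID = "value for identification"
    VEO = "value for exclusion only"
    NV = "no value"


@dataclass
class AnalysisRecord:
    examiner_id: str
    mark_id: str
    minutiae: List[Tuple[int, int]] = field(default_factory=list)   # (x, y) positions of annotated minutiae
    quality_regions: dict = field(default_factory=dict)             # e.g. {"green": ..., "orange": ..., "red": ...}
    degradation_factors: List[str] = field(default_factory=list)    # e.g. ["rough surface", "low pressure"]
    level_assessments: dict = field(default_factory=dict)           # {"level1": ..., "level2": ..., "level3": ...}
    value_decision: ValueDecision = ValueDecision.NV
```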


During the comparison phase, the platform provided a control print. The examiners were asked to annotate the corresponding minutiae [Figure 1] and provide their final decisions regarding the comparison. The examiners could choose between the following conclusions: identification, exclusion, or inconclusive.
Figure 1: Annotations in comparison phase.



Trial selection

Five fingerprint cases where the ground truth was known were selected. In two trials, the mark and the print originated from the same source, while in the remaining three they did not [Figure 2]. It is important to note that the trials were selected to represent difficult cases, where the experts would be expected to operate at the boundaries of their decision thresholds.
Figure 2: The five trial images.



The participants were given brief instructions on the use of the platform. They were then invited to connect to the platform at a time of their choosing, with no time limit imposed on their examinations.


  Results and Discussion


Overall, the results obtained are not different from those obtained on a population of experts drawn mainly from the USA. [9] We detail some of the results below, following the ACE-V protocol.

Reproducibility of judgments on the three levels of detail in the analysis phase

In this section, we use the first trial to illustrate our findings. These observations apply to all five trials.

The examiners were invited to assess the visibility of the three levels of detail [level 1, level 2, and level 3, as defined by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST)] [10] for each mark. The results for mark 1 are shown in [Figure 3], where we can observe the large variability in these assessments between the experts.
Figure 3: Distribution of the examiners' assessments of level 1, level 2, and level 3 details.



This variability can be explained by the lack of a clear definition, within the fingerprint community, both of the three levels of detail and of how to assess their legibility. This is no different from the observations made in [9] and [11] in relation to level 3 features.

Reproducibility of judgments on degradation aspects of the mark in the analysis phase

Recognizing the degradation factors is critical to understanding the complexity of a mark. The examiners were invited to assign (when relevant) degradation factors to each mark. [Figure 4] illustrates the distribution of the degradation aspects associated with mark 1. Again, we can observe variability in the judgments of the factors that affect the understanding of mark 1. In fact, mark 1 was deposited on a rough surface with low pressure and was developed with black powder.
Figure 4: Distribution of degradation factors for mark 1 (NA indicates missing data).



Results of judgments on the value of the marks

[Table 1] gives the conclusions reached by all the responding experts at the end of the analysis phase. The marks from trials 2 and 5 were unanimously declared VID. The marks associated with trials 1, 3, and 4 led to significant variability between the examiners.
Table 1: Distribution of the Value Judgments for Five Trials




Thus, the examiners are able to evaluate high-quality marks consistently and reliably, whereas the evaluation of complex, low-quality marks shows significant variability.
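Once the decisions have been tabulated, a distribution of the kind shown in [Table 1] can be produced with a simple cross-tabulation. The following minimal sketch (in Python rather than the R software used in the study, and reusing the hypothetical column names and file name from the earlier sketch) illustrates the idea:

```python
import pandas as pd

# Cross-tabulate value decisions (VID / VEO / NV) per trial across examiners,
# assuming the hypothetical long-format sheet produced by the earlier sketch.
decisions = pd.read_excel("annotations.xlsx")
table1 = pd.crosstab(decisions["trial"], decisions["value_decision"])
print(table1)
```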

Results of quality and minutiae annotation

The quality annotations made by the examiners for mark 1 show a consistent quality assessment of this mark [Figure 5]. Green is used for high quality, red for low quality, and orange for medium quality. The minutiae annotations for mark 1 can be seen in the same figure, where the annotations of all the examiners are shown combined.
Figure 5: Combination of quality assessment and minutiae annotation (mark 1).



Mark 4 is a close nonmatch (CNM), and the final conclusions for it show significant variability. The figures show the annotations of quality and minutiae by the three examiners who wrongly identified the source of the latent print, ordered by years of experience. User 13 (1 year of experience) annotated 10 minutiae during the analysis phase, all located in the green area and none in the orange or red areas. He kept eight annotations during the comparison phase to support the identification conclusion [Figure 6]. User 09 (8 years of experience) annotated six minutiae in the green area during the analysis phase and kept all of them to support the conclusion [Figure 7].
Figure 6: Annotations of User 13 (left: analysis phase, middle and right: comparison phase).

Figure 7: Annotations of User 09 (left: analysis phase, middle and right: comparison phase).



On the contrary, user 15 (25 years of experience) annotated the central area as orange, marked 11 minutiae in the analysis phase, kept eight minutiae for the comparison, and made the final conclusion [Figure 8].
Figure 8: Annotations of User 15 (left: analysis phase, middle and right: comparison phase).



The wrong conclusions reached by some examiners can be explained as follows:

  1. CNM prints have similar ridge flow, ridge path, ridge sequence, minutia types, and configuration.
  2. Automated Fingerprint Identification System (AFIS) databases have changed the risk of a CNM being proposed to an examiner. As database sizes increase, the examiners' experience alone is no longer sufficient to deal with CNM prints.
  3. There is no clear definition of how to account for "dissimilarities" during the comparison process.


This study is limited to a small number of examiners in China. It is also clear that factors such as time, pressure, and attitude can affect the accuracy of the judgments made by fingerprint examiners during the identification process.


  Conclusion


This research clearly shows that the features used by examiners during the assessment and analysis of a mark, leading to a decision regarding its value, lack clear definitions. There is a need for communication within the fingerprint community to resolve these differences in definition. For example, the variability in mark value evaluations suggests the need to work toward agreed definitions of VID, VEO, and NV.

This study is a kind of "white box" study, inviting examiners to document the information they use to make their decisions. Further research is needed to understand how those judgments affect the final identification decision. We view this as a new way to conduct fundamental research in China.

 
  References

1. Ashbaugh DR. Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. Boca Raton: CRC Press; 1999. p. 64-8.
2. Ulery BT, Hicklin RA, Roberts MA, Buscaglia J. Measuring what latent fingerprint examiners consider sufficient information for individualization determinations. PLoS One 2014;9:e110179.
3. Ulery BT, Hicklin RA, Kiebuzinski GI, Roberts MA, Buscaglia J. Understanding the sufficiency of information for latent fingerprint value determinations. Forensic Sci Int 2013;230:99-106.
4. Ulery BT, Hicklin RA, Buscaglia J, Roberts MA. Repeatability and reproducibility of decisions by latent fingerprint examiners. PLoS One 2012;7:e32800.
5. Ulery BT, Hicklin RA, Buscaglia J, Roberts MA. Accuracy and reliability of forensic latent fingerprint decisions. Proc Natl Acad Sci U S A 2011;108:7733-8.
6. Langenburg G, Champod C, Genessay T. Informing the judgments of fingerprint analysts using quality metric and statistical assessment tools. Forensic Sci Int 2012;219:183-98.
7. Langenburg G. Performance study of the ACE-V process: A pilot study to measure the accuracy, precision, reproducibility, repeatability, and biasability of conclusions resulting from the ACE-V process. J Forensic Identific 2009;59:219-57.
8. Wertheim K, Langenburg G, Moenssens A. A report of latent print examiner accuracy during comparison training exercises. J Forensic Identific 2006;56:55-93.
9. Neumann C, Champod C, Yoo M, Genessay T, Langenburg G. Improving the Understanding and the Reliability of the Concept of "Sufficiency" in Friction Ridge Examination. Washington DC: National Institute of Justice; 2013. p. 10-5.
10. Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST). Standards for Examining Friction Ridge Impressions and Resulting Conclusions. Available from: http://www.swgfast.org/documents/examinations-conclusions/130427_Examinations-Conclusions_2.0.pdf. [Last accessed on 2013 Oct 06].
11. Anthonioz A, Egli N, Champod C, Neumann C, Puch-Solis R, Bromage-Griffiths A. Level 3 details and their role in fingerprint identification: A survey among practitioners. J Forensic Identific 2008;58:562-89.

