Data quality and clinical decision-making: do we trust machines blindly?
This is a review of a 2009 article by Pesudovs & Applegate entitled "Data quality and clinical decision-making: do we trust machines blindly?"
The article touches on an important consideration for clinical decision support systems (CDSS): the need for the user to be actively engaged in the process rather than passively accepting CDS outputs. The authors discuss CDS in an optometry setting and the increasing reliance on technology, particularly in the area of ocular imaging. They emphasize that clinicians must be able to distinguish when data are reliable and trustworthy and when their quality should be questioned.
The authors discuss concerns that practitioners should weigh when making clinical decisions based on data generated by ocular imaging machines, particularly regarding the accuracy and precision of these technologically advanced devices.
- A machine may give valid results in one aspect, but this vote of confidence cannot be extrapolated to other aspects of the machine's functioning.
- The authors cite the example of the Oculus Pentacam, which accurately measures lens opacity, corneal curvature and central corneal thickness of the anterior segment of the eye, but inaccurately measures pupil size and peripheral corneal thickness, resulting in errors in the Pentacam-derived wavefront aberrations. As a result, more aberration is reported than should be the norm.
- According to the authors, face validity should also be considered when evaluating data used in clinical decision support, i.e. the data should make sense to the user upon initial review.
The authors draw a distinction between the precision (reliability) of imaging machines, i.e. their ability to repeatedly produce the same results, and the validity of those results: results that are consistently reproduced are not necessarily valid. They argue, however, that inaccuracy can be corrected as long as the machine in question is precise, so a precise but inaccurate machine may be more desirable than one that is imprecise but accurate. This distinction matters for clinicians in deciding how much confidence to place in clinical decision support data generated by either type of machine.
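The authors' argument, that a precise but biased instrument is preferable because its systematic error can be corrected, can be illustrated with a toy simulation. The readings below are hypothetical (not from the article); the sketch assumes a precise instrument's constant offset can be estimated once against a reference standard and subtracted thereafter:

```python
import statistics

# Hypothetical scenario: two instruments measuring a true corneal
# thickness of 550 micrometres.
true_value = 550.0

# Precise but inaccurate: readings cluster tightly around a biased mean.
precise_biased = [560.1, 560.3, 559.9, 560.0, 560.2]

# Accurate but imprecise: readings scatter widely around the true value.
accurate_noisy = [530.0, 571.0, 548.0, 562.0, 539.0]

# The precise instrument's systematic bias is estimated once against the
# reference value, then subtracted from every future reading.
bias = statistics.mean(precise_biased) - true_value
corrected = [reading - bias for reading in precise_biased]

print(round(bias, 1))                        # estimated offset: 10.1
print(round(statistics.mean(corrected), 1))  # mean after calibration: 550.0

# The imprecise instrument has no such fix: its random scatter
# (standard deviation ~16.7 vs ~0.16) cannot be calibrated away.
print(round(statistics.stdev(precise_biased), 2))
print(round(statistics.stdev(accurate_noisy), 2))
```

The imprecise instrument's mean happens to equal the true value here, but any single reading may still be far off, which is why the authors consider precision the harder property to retrofit.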
It remains a quandary what level of imprecision or inaccuracy is acceptable to clinicians when adopting new technology [] systems. Part of the dilemma is the frequent lack of comparable standards against which results from a new technology can be evaluated. The authors encourage clinicians to stringently scrutinize the quality of data obtained from new technology rather than accept it blindly in their eagerness to use it. Data quality is a key consideration and needs to be established through empirically based studies and testing so that it facilitates, rather than hinders, sound clinical decision-making.
The article is an interesting look at clinical decision support systems from the perspective of machine-derived information. It is plausible that some clinicians may be overly trusting of these machines, especially if they are not conversant with the technology in use and there is no precedent for comparison. The scarcity of empirically based studies makes this even more difficult and may make clinical centers reluctant to be 'the first' to pilot validity and reliability studies of new technology among their patients. Data quality is extremely important to clinical decision support. From the article it is not clear whether the data generated by the optometry devices were integrated into an electronic health record []. Further data quality issues could arise if device-generated data are entered manually rather than integrated electronically.
- Pesudovs, K., & Applegate, R. A. (2009). Data quality and clinical decision-making: do we trust machines blindly? Clinical and Experimental Optometry, 92(3), 173–175. http://onlinelibrary.wiley.com/doi/10.1111/j.1444-0938.2009.00367.x/abstract