Artificial intelligence (AI) has been present in some guise within the field of radiology for over 50 years. The first studies investigating computer-aided diagnosis in thoracic radiology date back to the 1960s, and in the subsequent years, the main application of these techniques has been the detection and classification of pulmonary nodules. In addition, there have been other, less intensely researched applications, such as the diagnosis of interstitial lung disease and chronic obstructive pulmonary disease, and the detection of pulmonary emboli. Despite extensive literature on the use of convolutional neural networks in thoracic imaging over the last few decades, we are yet to see these systems in use in clinical practice. This article reviews current state-of-the-art applications of AI in the detection, classification, and follow-up of pulmonary nodules, and how deep-learning techniques might influence these going forward. Finally, we postulate the impact of these advancements on the role of radiologists and the importance of radiologists in the development and evaluation of these techniques.
Artificial intelligence (AI) is an umbrella term for techniques that perform tasks mimicking human cognitive functions and intelligence.[1] Machine learning is a branch of AI that enables the extraction of meaningful patterns from images. In the context of medical imaging, the idea of having a computer that performs repetitive tasks consistently and tirelessly is extremely appealing.[2] Machine-based analysis of thoracic imaging dates as far back as the 1960s, when Lodwick and colleagues coded imaging features from chest radiographs of patients with lung cancer to create a prognostic model predicting 1- and 5-year survival.[3] In the intervening years, computer-aided detection/diagnosis (CAD), underpinned by machine-learning techniques,[4] became a leading research subject in radiology,[5] with many applications, such as the detection and distinction of benign and malignant pulmonary nodules,[6] temporal subtraction for assessment of interval changes between scans,[8] and detection of interstitial lung disease.[9] The primary focus for CAD developers has been to develop algorithms that improve reporter accuracy and provide computer output that acts as a "second opinion" for the reporting radiologist; however, despite a number of studies showing that CAD helped radiologists improve their diagnostic accuracy, the technology failed to achieve clinical acceptance.
A new paradigm in computer-based diagnosis was ushered in in 2012, when Krizhevsky et al. used convolutional neural networks (CNNs) to classify the ImageNet dataset with hitherto unachieved accuracy and won the ImageNet Large-Scale Visual Recognition Challenge. Not only are these techniques a significant improvement on earlier machine-learning approaches; in some cases, computers seem able to "see" patterns that are beyond human perception. This has led to a spate of new work applying the technique to classify medical images. Fig 1 demonstrates the rapid increase in the number of publications related to machine learning in radiology since 2012, which has coincided with a relative reduction in papers mentioning CAD. Despite the shift in terminology and predictions of machines replacing radiologists, the basic promise of automated diagnosis remains just that: a promise. Significant work is needed to improve and validate these techniques before they can be deemed safe for routine clinical use.
The advent of deep learning has coincided with the availability of large curated datasets of thoracic imaging, such as the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI),[11] and data from large-scale lung cancer screening trials, such as the National Lung Screening Trial (NLST) in the US[12] and the Dutch–Belgian lung cancer screening trial (NELSON),[13] which have provided researchers with some of the vast amounts of data that deep-learning techniques require to train and perform accurately. Because of the availability of well-curated and validated databases, thoracic radiology remains at the forefront of the development of these new techniques. The early applications of CNNs in thoracic radiology remain similar to the domains explored by CAD scientists, and in this article we discuss the current state-of-the-art techniques in pulmonary nodule detection, characterisation, and follow-up.
The demonstration of a 20% reduction in lung cancer mortality in the NLST, and similar results from the NELSON trial,[14] are paving the way for systematic lung cancer screening in the USA and Western Europe. Pulmonary nodules, defined as focal opacities <3 cm in diameter, are a common finding on lung computed tomography (CT). In the NLST cohort, for example, a nodule was seen in 25.9% of scans; however, only 3.6% of these were malignant.[15] Differentiating between these benign and malignant nodules is a challenging task and relies on a combination of visual assessment and measurements carried out by the reporting radiologist. AI-driven tools have the potential to automate a number of steps (Fig 2) in the repetitive and burdensome task of dealing with mostly normal images.
Detection of small pulmonary nodules is a difficult task, but one that is fundamental for the early diagnosis of lung cancer. A volumetric CT examination of the chest contains over 9 million voxels. A 5 mm diameter lung nodule occupies approximately 130 voxels, or only 1.4×10⁻⁵ of the lung volume. Substantial variability has been reported in the sensitivity of radiologists in detecting these nodules, which can be affected by a variety of characteristics, such as size, shape, location, density, and relationship to adjacent structures.[18] In the NLST CT screening arm, 8.9% of cancers were missed on the initial CT.[19] Concurrent reading of scans by two reporters has been shown to improve diagnostic sensitivity, but is time-consuming and impractical in daily practice.[20] This underlines the need for machine-learning tools that assist radiologists in nodule identification; this has been among the most studied applications of CAD, and has been shown to reduce interpretation time per scan.[21] More recently, a number of papers have also reported promising results in increasing nodule detection sensitivity using deep-learning techniques.
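The voxel arithmetic above can be reproduced with a quick back-of-the-envelope calculation. This is a sketch assuming roughly 0.7 × 0.7 × 1.0 mm voxels (a plausible low-dose chest CT reconstruction; the exact figures vary with scanner and protocol):

```python
import math

# Assumed voxel dimensions in mm (an assumption, not stated in the text)
voxel_volume_mm3 = 0.7 * 0.7 * 1.0

# A 5 mm diameter nodule modelled as a sphere
nodule_volume_mm3 = (4 / 3) * math.pi * (5 / 2) ** 3      # ~65.4 mm^3
voxels_in_nodule = nodule_volume_mm3 / voxel_volume_mm3   # ~134 voxels

lung_voxels = 9_000_000  # "over 9 million voxels" per the text
fraction_of_lung = voxels_in_nodule / lung_voxels         # ~1.5e-5
```

With these assumed voxel dimensions the result lands close to the figures quoted in the text (~130 voxels, ~1.4×10⁻⁵ of the lung volume), which illustrates just how small the search target is.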
Nodule size is a strong predictor of malignancy;[23] in the NELSON trial, for example, individuals with a nodule volume <100 mm³ had the same background cancer risk (0.5%) as individuals with no nodules.[24] Traditionally, nodule size has been assessed via manual two-dimensional (2D) calliper measurement of the largest transverse diameter. More recently, screening trials and national and international guidelines on the management of nodules recommend measuring volume rather than diameter, as it is less prone to intra- and interobserver variability,[25] better encapsulates the three-dimensional nature of a pulmonary nodule,[26] and is more sensitive to change in size, therefore detecting changes suggestive of malignancy much sooner than 2D diameter measurements.
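A one-line calculation illustrates why volumetry detects growth sooner: for a roughly spherical nodule, diameter scales as the cube root of volume, so even a 25% volume increase (a commonly used growth threshold) corresponds to under 8% diameter growth, which is easily lost in calliper measurement variability:

```python
# Diameter grows as the cube root of volume for a roughly spherical nodule
volume_growth = 1.25                         # 25% increase in volume
diameter_growth = volume_growth ** (1 / 3)   # ~1.077, i.e. <8% increase in diameter
```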
Reliable volumetric measurements depend on accurate nodule segmentation. Since the 1980s, various attempts have been made to develop CAD algorithms for nodule segmentation.[28] The majority of currently available segmentation algorithms rely on "region-growing" procedures from a user-defined seed point, connecting all voxels above a given threshold. Solid intraparenchymal pulmonary nodules contrast sharply with the air in the normal lung parenchyma and are therefore relatively easy to delineate via this method; however, the task is complicated when other structures of similar attenuation, such as vessels, airways, and pleura, are adjacent to the nodule, and then requires image-processing steps that exploit morphological criteria to remove the attached structures.
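A minimal sketch of the region-growing idea, on a toy 2D "slice" rather than a real CT volume (attenuation values loosely modelled as Hounsfield units; real implementations work in 3D and add morphological post-processing to detach vessels and pleura):

```python
from collections import deque

def region_grow(img, seed, threshold):
    """Connect all pixels above `threshold` reachable from `seed` (4-connectivity).
    Toy 2D sketch of seed-point region growing as described in the text."""
    h, w = len(img), len(img[0])
    grown, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in grown or not (0 <= y < h and 0 <= x < w):
            continue
        if img[y][x] < threshold:
            continue  # below threshold: not part of the nodule
        grown.add((y, x))
        queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return grown

# Toy "CT slice": a bright 2x2 nodule (~ -50 HU) in dark lung parenchyma (~ -900 HU)
slice_ = [[-900] * 5 for _ in range(5)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    slice_[y][x] = -50

nodule = region_grow(slice_, seed=(1, 1), threshold=-400)  # grows to the 4 nodule pixels
```

Counting the grown pixels then gives the nodule volume directly, which is the basis of the volumetry discussed above.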
Segmenting subsolid nodules is more challenging than segmenting solid lesions, as there is less difference in attenuation between the nodule and the surrounding parenchyma, and differentiating the solid component of these, often large, nodules from adjacent vessels is harder still; however, promising recent research suggests that these challenges can be successfully overcome.
Within the past few years, multiple software packages providing manual, semi-automated, and automated volumetric analysis have become available. Although these packages provide reliable repeat measurements, size measurements vary between different software packages, and the variation is greater for irregularly shaped and juxtapleural nodules.[31] Reduction of such variability in nodule volumetry was a key research recommendation in the British Thoracic Society pulmonary nodule management guidelines.
Recent literature reports better nodule segmentation using deep learning. Our group carried out volumetric segmentation of 7,927 nodules from the NLST cohort via a deep-learning model initiated with a single click point inside the nodule. These measurements were then used to assess the accuracy of malignancy prediction using the Brock University cancer prediction model. The method was compared with 2D measurements made by the NLST radiologists and improved the predictive value by 2.21% (AUC=88.17 for volumetric analysis versus 85.96 for radiologist measurement).[33]
Moving forward, deep learning offers the possibility of doing away with nodule segmentation altogether as the CNN approach can handle segmentation in an implicit way within the algorithm.
The current diagnostic pathway for lung nodules relies on size and growth as the main differentiators between benign and malignant nodules; however, in addition to size, CT images provide further information, such as shape, spatial complexity, intensity patterns, and a range of other "texture" features. Traditional CAD algorithms relied on a selection of these hand-picked features to create a classifier differentiating benign from malignant nodules. In 2012, Lambin and colleagues coined the term "radiomics" to describe the high-throughput extraction of such quantitative features from medical images to aid diagnosis and prognostication, and to monitor response to therapy.[36] Pathological studies have demonstrated increased heterogeneity within malignant lung nodules, which is not appreciable on radiological studies by the naked eye but can be quantified with radiomics.
Although the term radiomics has gained significant popularity in the literature within the last decade, the underlying principles are based on techniques first proposed in the 1970s.[38] In the problem of differentiating benign and malignant nodules, the first step of the radiomics approach involves defining a large number of texture features believed to be of interest on the basis of previous literature or expert opinion. Next, a training dataset, in which the nodules have been segmented, is used to extract the texture features automatically. Finally, a smaller subset of features that, either individually or in combination, perform best at this task on the training data is applied to a test dataset, which ideally has been kept separate from the training dataset.[34] Unfortunately, because available datasets are small and lack sufficient scanner and scan-protocol variability, the training and testing datasets are often not independent. This leads to the problem of "overfitting", where the high accuracy of the model cannot be replicated on unseen, independent datasets.[39] This is demonstrated by the fact that a number of studies report different radiomic signatures, each based on its particular cohort.
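The train-then-hold-out recipe described above can be sketched in a few lines. The features and separation score here are hypothetical stand-ins (real radiomics pipelines use hundreds of standardised texture features and proper statistics); the point is that the signature is chosen on training nodules only and must then be evaluated on data kept aside:

```python
import statistics

# Hypothetical hand-crafted "texture" features computed over a nodule's voxel values
FEATURES = {
    "mean": statistics.mean,
    "stdev": statistics.pstdev,
    "max": max,
    "range": lambda values: max(values) - min(values),
}

def separation(feature, benign, malignant):
    """Crude class-separation score: gap between class means over the pooled spread."""
    b = [feature(n) for n in benign]
    m = [feature(n) for n in malignant]
    spread = statistics.pstdev(b + m) or 1.0  # guard against zero spread
    return abs(statistics.mean(m) - statistics.mean(b)) / spread

# Toy training nodules (values loosely modelled on HU): the malignant ones are made
# more heterogeneous, so spread-type features should separate the classes best
benign_train = [[-60, -55, -58, -62], [-50, -52, -49, -51]]
malignant_train = [[-155, 45, -105, -5], [-135, 25, -155, 45]]

ranked = sorted(FEATURES,
                key=lambda name: separation(FEATURES[name], benign_train, malignant_train),
                reverse=True)
signature = ranked[:2]  # keep the best subset; evaluate it only on held-out test data
```

If the held-out test set is not truly independent, the selected `signature` will look better than it really is, which is exactly the overfitting problem described above.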
In the LUNGx challenge for computerized lung nodule classification, in which many participants employed radiomics techniques, areas under the receiver operating characteristic curve (AUCs) ranged from 0.5 to 0.68, and only three of the methods outperformed random chance with statistical significance.[47] Another limitation of the radiomics-based approach is that texture features can change significantly with the stage of disease and the imaging protocol employed, and the relatively small datasets do not capture a sufficiently broad disease spectrum. A study examining the robustness of radiomic features across different MRI sequences found that only 33% of features were stable.[44] This underlines the need for robust reporting of the methodology used to develop a radiomic signature; however, a systematic review investigating the repeatability and reproducibility of radiomic features in the lung concluded that only seven of 41 studies reported their image acquisition, pre-processing, and feature-extraction methodology in sufficient detail for it to be replicable.[45]
As mentioned earlier, CNNs trained using deep-learning techniques have come to dominate pattern detection, recognition, segmentation, and classification applications in both medical and non-medical fields. The key advance that CNNs offer over traditional CAD systems is their ability to self-learn previously unknown features, maximising classification performance with limited direct supervision. The most striking example of this has been in retinal imaging, where researchers were able to predict features such as gender (AUC=0.97) and smoking status (AUC=0.71) that were not previously thought to be quantifiable from fundus photographs.
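The building block behind this self-learning is the convolution layer: a small kernel slid over the image, followed by a non-linearity. A toy pure-Python illustration (in a trained CNN the kernel weights are learnt by backpropagation rather than set by hand, and the filters are stacked over many layers):

```python
def conv2d(img, kernel):
    """Valid-mode 2D convolution (strictly cross-correlation, as in most deep-learning libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    return [
        [
            sum(img[y + i][x + j] * kernel[i][j] for i in range(kh) for j in range(kw))
            for x in range(len(img[0]) - kw + 1)
        ]
        for y in range(len(img) - kh + 1)
    ]

def relu(fmap):
    """Rectified linear unit: the non-linearity applied after each convolution."""
    return [[max(v, 0) for v in row] for row in fmap]

# A hand-set "blob detector" kernel; in a trained CNN these weights are learnt from data
kernel = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]
img = [[0] * 5 for _ in range(5)]
img[2][2] = 1  # single bright pixel standing in for a tiny nodule
fmap = relu(conv2d(img, kernel))  # the feature map responds only at the "nodule"
```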
Early work using CNNs for pulmonary nodule classification has, in most cases, been superior to the current reference-standard CAD techniques.[58] In particular, these studies show a reduction in the number of false-positives, which has the potential to reduce unnecessary follow-up of benign nodules;[60] however, before CNNs become available in clinical practice, there are a number of hurdles to overcome. Because CNNs "self-learn", it is difficult to determine how a system came to a particular conclusion, and training CNNs requires a massive amount of validated data. To date, most of the relevant research has been conducted on the LIDC database, which comprises 1,018 thoracic CT examinations with nodules measuring 3–30 mm and a mixture of primary lung cancers, metastatic disease, and benign nodules. Each nodule was annotated independently by four experienced thoracic radiologists, who also gave a subjective rating to each nodule >3 mm regarding the presence of calcification, internal structure, lobulation, margin sharpness, texture, and spiculation. Where available, information regarding how the diagnosis was made was also provided, ranging from unknown diagnosis, through no change in radiological appearances over 2 years (suggesting a benign diagnosis), to histology or response to therapy. Unfortunately, only a subset of 72 cases (31 benign and 41 malignant) from the LIDC had a "ground truth" diagnosis available in the form of histology or stable 2-year follow-up. This means that the best performance these models can achieve is that of the radiologists rating the images.
Furthermore, the exact methodology varies between studies, with differences in the number of convolution layers, layer depth, and CNN architecture. In addition, because each study used the data in a different way, it is difficult to draw direct comparisons between studies. Further research is required using much larger validated databases and comparable experimental protocols.
Comparison of images at different time points to detect interval change is a cornerstone of radiological diagnosis. In the context of pulmonary nodules, volume doubling time is a key indicator of malignancy. In addition, alterations in nodule morphology, such as spiculation, cavitation, and density change, also help to distinguish benign from malignant nodules. Often these changes are subtle, and temporal subtraction has been proposed as a technique to increase radiologists' sensitivity in detecting them.[61] This involves registration of two scans from the same patient and subtraction of matched voxels from the earlier time point. Work in this area is still at a preliminary stage and prone to misregistration artefacts. In addition, a number of workflow steps, such as displaying and rearranging the images and retrieving additional comparison studies, require extra time.[62] Even if the scans are registered perfectly, nodule size can vary by up to 25% owing to a number of patient factors, such as position, cardiac pulsation, and inspiration level.
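Volume doubling time, mentioned above as the key growth marker, follows from an exponential-growth model between two volume measurements. A minimal sketch (the illustrative 400-day cut-off reflects common practice, but thresholds vary between guidelines):

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, interval_days):
    """Doubling time in days, assuming exponential growth between two volume measurements."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Illustrative nodule growing from 100 to 160 mm^3 over 90 days
vdt = volume_doubling_time(100.0, 160.0, 90)  # ~133 days
suspicious = vdt < 400  # rapid doubling is commonly treated as suspicious for malignancy
```

The 25% size variability described above feeds directly into this formula: an apparent volume change of that magnitude between two scans can produce a spuriously short (or long) doubling time.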
Our group has previously shown that texture features are less affected by such factors, in a prospective study in which 40 patients underwent two low-dose CT examinations within a 60-minute period.[63] We subsequently tested the hypothesis that the texture of benign nodules should remain stable compared with that of malignant ones. Initially, 1,500 features were extracted from a training dataset of 10,000 NLST participants, and the 20 that performed best on this dataset were tested on an independent in-house cohort of 123 malignant and 120 benign nodules. We found that the texture features of malignant nodules do indeed change over time: compared with the current reference standard of volume doubling time, which had an AUC of 0.602, the radiomics approach yielded an AUC of 0.802.
There were two major limitations to our study. Firstly, nodule segmentation was carried out manually, which is prone to inconsistencies: if one time point contains vessels adjacent to a nodule and the other does not, texture features can be affected significantly. This is, however, likely to have affected both groups equally and hence balanced out. More significantly, as this was a real-world retrospective analysis, there was variation between scanning protocols, and some of the changes in texture features may have been due to differences in the phase of contrast enhancement. This is a relatively systematic bias, as benign nodules tend to be followed up using low-dose non-contrast protocols.
Importance of radiologist's role in developing useful AI tools
The radiologist's role in the development and evaluation of new AI tools cannot be overstated. A major reason for the lack of uptake of CAD tools in radiology thus far has been their failure to integrate with the radiologist's workflow. Radiologists need to play a central role in identifying key applications of the emerging techniques and how these might reduce, rather than increase, their workload. Deep-learning research has so far focused on single imaging techniques; radiologists are best placed to direct how information from complementary imaging techniques, along with clinical data, may be combined to provide richer diagnostic information.
The biggest success story of deep learning in medical imaging to date is Google Research's effort in fundal imaging.[48] Their database included imaging, clinical, and biochemical data from 284,335 patients across two continents. The curation and validation of such databases is a mammoth task, and not one that is possible without radiologists collaborating on a massive scale. There have been attempts to crowdsource labels for imaging data;[65] however, for radiologists already drowning in their collective workload, finding the time to do this is unlikely to be at the top of the priority list.[66]
A way to facilitate the development of good-quality databases that can be exploited by data-mining techniques is structured reporting, using a standard format and a common lexicon. This approach has already been shown to improve report accuracy,[68] to be perceived as clearer[69] and more informative[70] by referring clinicians, and to increase clinical impact compared with free-text reports.[71] The Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement recommends the use of structured reporting for research studies.[72] A further benefit of this approach would be to provide structured data, which are much better suited to computational analysis and can act as labels for the accompanying imaging.
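As a hypothetical illustration, a structured nodule report reduces to a handful of named fields that can serve directly as machine-learning labels (the field names and values here are invented for illustration, not taken from any published template):

```python
import json

# Hypothetical structured-report fields for a single nodule; a fixed schema like
# this is what makes reports machine-mineable, in contrast to free text
report = {
    "nodule_id": 1,
    "location": "right upper lobe",
    "longest_diameter_mm": 6.0,
    "volume_mm3": 113,
    "texture": "solid",
    "margin": "smooth",
    "interval_change": "stable",
}

serialised = json.dumps(report)          # ready for storage and data mining
assert json.loads(serialised) == report  # round-trips losslessly
```

Each field of such a record can be consumed directly as a label for the accompanying images, whereas the same facts buried in free text would first need error-prone text mining.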
The constant increase in imaging volume and complexity over the past few decades has reduced the time available for evaluating the clinical and biochemical context of imaging findings. This has diminished the role of the radiologist from diagnostician to mere image analyst, with the clinical interpretation of findings left to other physicians.[73] AI encompasses many powerful tools with the potential to dramatically increase the information radiologists extract from images and to automate many mundane detection, characterisation, and quantification tasks.
Owing to the size of the problem, the work done so far, and the availability of relatively large databases, nodule management is likely to be one of the first areas to be affected by machine-learning tools that can automatically detect, measure, and risk-stratify nodules. If trained correctly, these tools could play an active role in reducing radiology workload. The importance of integrating these tools into the clinical workflow cannot be overstated; this factor has been one of the major reasons for the lack of adoption of the CAD tools that are already available.
As AI research progresses, we anticipate that there will be many more machine learning powered solutions available for radiologists. Their safe and effective application will require radiologists to develop familiarity with the concepts, strengths, and limitations of the computer-assisted tools at their disposal.
Conflict of interest
Timor Kadir is the Chief Technology Officer of Optellum. Fergus Gleeson is a shareholder in Optellum.
F.G. is funded by the National Consortium of Intelligent Medical Imaging and by the NIHR Oxford Biomedical Research Centre.
- Deep learning: a primer for radiologists.RadioGraphics. 2017; 37: 2113-2131
- Machine learning for medical imaging.RadioGraphics. 2017; 37: 505-515
- The coding of roentgen images for computer analysis as applied to lung cancer.Radiology. 1963; 81: 185-200
- Machine learning and radiology.Med Image Anal. 2012; 16: 933-951
- Diagnostic imaging over the last 50 years: research and development in medical imaging science and technology.Phys Med Biol. 2006; 51: R5-R27
- A computer-aided diagnosis for evaluating lung nodules on chest CT: the current status and perspective.Korean J Radiol. 2011; 12: 145-155
- Temporal subtraction method for lung nodule detection on successive thoracic CT soft-copy images.Radiology. 2014; 271: 255-261
- Quantitative computed tomography assessment of lung structure and function in pulmonary emphysema.Eur Respir J. 2001; 18: 720-730
- Automated 3D interstitial lung disease extent quantification: performance evaluation and correlation to PFTs.J Digit Imaging. 2014; 27: 380-391
- Computer-aided diagnosis in thoracic CT.Semin Ultrasound CT MR. 2005; 26: 357-363
- The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans.Med Phys. 2011; 38: 915-931
- Reduced lung-cancer mortality with low-dose computed tomographic screening.N Engl J Med. 2011; 365: 395-409
- NELSON lung cancer screening study.Cancer Imaging. 2011; 11 Spec No A: S79-S84
- Effects of volume CT lung cancer screening: mortality results of the NELSON randomised-controlled population based trial.in: de Koning H.J. Van Der Aalst C. Ten Haaf K. IASLC 19th world conference on lung cancer. International Association for the Study of Lung Cancer, Toronto2018
- Predicting malignant nodules from screening CTs.J Thorac Oncol. 2016; 11: 2045-2047
- Lung nodule and cancer detection in computed tomography screening.J Thorac Imaging. 2015; 30: 130-138
- Characterizing search, recognition, and decision in the detection of lung nodules on CT scans: elucidation with eye tracking.Radiology. 2015; 274: 276-286
- National lung screening trial: variability in nodule detection rates in chest CT studies.Radiology. 2013; 268: 865-873
- Computed tomographic characteristics of interval and post screen carcinomas in lung cancer screening.Eur Radiol. 2015; 25: 81-88
- The impact of trained radiographers as concurrent readers on performance and reading time of experienced radiologists in the UK lung cancer screening (UKLS) trial.Eur Radiol. 2018; 28: 226-234
- Computer-aided detection of lung nodules on multidetector CT in concurrent-reader and second-reader modes: a comparative study.Eur J Radiol. 2013; 82: 1332-1337
- Deep learning aided decision support for pulmonary nodules diagnosing: a review.J Thorac Dis. 2018; 10: S867-S875
- Probability of cancer in pulmonary nodules detected on first screening CT.N Engl J Med. 2013; 369: 910-919
- Detection of lung cancer through low-dose CT screening (NELSON): a prespecified analysis of screening test performance and interval cancers.Lancet Oncol. 2014; 15: 1342-1350
- Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable?.Radiology. 2004; 231: 453-458
- The utility of automated volumetric growth analysis in a dedicated pulmonary nodule clinic.J Thorac Cardiovasc Surg. 2011; 142: 372-377
- Pulmonary nodules: growth rate assessment in patients by using serial CT and three-dimensional volumetry.Radiology. 2012; 262: 662-671
- Automatic 3D pulmonary nodule detection in CT images: a survey.Comput Methods Programs Biomed. 2016; 124: 91-107
- Morphological segmentation and partial volume analysis for volumetry of solid pulmonary lesions in thoracic CT scans.IEEE Trans Med Imaging. 2006; 25: 417-434
- Use of volumetry for lung nodule management: theory and practice.Radiology. 2017; 284: 630-644
- A comparison of six software packages for evaluation of solid lung nodules using semi-automated volumetry: what is the minimum increase in size to detect growth in repeated CT examinations.Eur Radiol. 2009; 19: 800-808
- British Thoracic Society guidelines for the investigation and management of pulmonary nodules.Thorax. 2015; 70 (Suppl 2): ii1-ii54
- Nodule size measurement: automatic or human-which is better for predicting lung cancer in a Brock model?.in: Pipavath S. Schiebler M. Chest (lung nodule). Radiological Society of North America 2018 Scientific Assembly and Annual Meeting, 25-30 November, Chicago IL. 2018 (Available at: http://archive.rsna.org/2018/18022203.html. Date accessed: November 20, 2018)
- Lung cancer prediction using machine learning and advanced imaging techniques.Transl Lung Cancer Res. 2018; 7: 304-312
- Towards automatic pulmonary nodule management in lung cancer screening with deep learning.Sci Rep. 2017; 7: 46479
- Radiomics: extracting more information from medical images using advanced feature analysis.Eur J Cancer. 2012; 48: 441-446
- Integrating radio imaging with gene expressions toward a personalized management of cancer.IEEE Trans Hum Mach Sys. 2014; 44: 664-677
- Textural features for image classification.IEEE Trans Syst Man Cybern. 1973; SMC-3: 610-621
- False discovery rates in PET and CT studies with texture features: a systematic review.PLoS One. 2015; 10: e0124165
- Radiomic features analysis in computed tomography images of lung nodule classification.PLoS One. 2018; 13: e0192002
- Development and clinical application of radiomics in lung cancer.Radiat Oncol. 2017; 12: 154
- Radiomics analysis of pulmonary nodules in low-dose CT for early detection of lung cancer.Med Phys. 2018 Apr; 45: 1537-1549
- Radiomics of pulmonary nodules and lung cancer.Transl Lung Cancer Res. 2017; 6: 86-91
- Robustness and reproducibility of radiomics in magnetic resonance imaging: a phantom study.Invest Radiol. 2019; 54: 221-228
- Repeatability and reproducibility of radiomic features: a systematic review.Int J Radiat Oncol Biol Phys. 2018; 102: 1143-1158
- LUNGx challenge for computerized lung nodule classification.J Med Imaging (Bellingham). 2016; 3: 044506
- LUNGx challenge for computerized lung nodule classification: reflections and lessons learned.J Med Imaging (Bellingham). 2015; 2: 020103
- Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning.Nat Biomed Eng. 2018; 2 (Available at:): 158-164
- Computer-aided classification of lung nodules on computed tomography images via deep learning technique.Onco Targets Ther. 2015; 8: 2015-2022
- 3D multi-view convolutional neural networks for lung nodule classification.PLoS One. 2017; 12: e0188290
- Pulmonary nodule classification in lung cancer screening with three-dimensional convolutional neural networks.J Med Imaging (Bellingham). 2017; 4: 041308
- Using multi-level convolutional neural network for classification of lung nodules on CT images.Conf Proc IEEE Eng Med Biol Soc. 2018; 2018: 686-689
- A lightweight multi-section CNN for lung nodule classification and malignancy estimation.IEEE J Biomed Health Inform. 2018 Nov 6; https://doi.org/10.1109/JBHI.2018.2879834
- Knowledge-based collaborative deep learning for benign-malignant lung nodule classification on chest CT.IEEE Trans Med Imaging. 2019 Apr; 38: 991-1004
- A generalized deep learning-based diagnostic system for early diagnosis of various types of pulmonary nodules.Technol Cancer Res Treat. 2018; 17: 1533033818798800
- Highly accurate model for prediction of lung nodule malignancy with CT scans.Sci Rep. 2018; 8: 9286
- Pulmonary nodule classification with deep convolutional neural networks on computed tomography images.Comput Math Methods Med. 2016; 2016: 6215085
- Pulmonary nodule classification with deep residual networks.Int J Comput Assist Radiol Surg. 2017; 12: 1799-1808
- The utilisation of convolutional neural networks in detecting pulmonary nodules: a review.Br J Radiol. 2018; 91: 20180028
- Convolutional neural network-based PSO for lung nodule false positive reduction on CT images.Comput Methods Programs Biomed. 2018; 162: 109-118
- Effect of temporal subtraction images on radiologists' detection of lung cancer on CT: results of the observer performance study with use of film computed tomography images.Acad Radiol. 2004; 11: 1337-1343
- Automatic classification of lung nodules on MDCT images with the temporal subtraction technique.Int J Comput Assist Radiol Surg. 2017; 12: 1789-1798
- Pulmonary nodules: assessing the imaging biomarkers of malignancy in a "coffee-break".Eur J Radiol. 2018; 101: 82-86
- Assessment of CT texture analysis as a tool for lung nodule follow-up.in: Ridge C. Shepard J. Chest (lung nodule). In radiological society of north America 2017 scientific assembly and annual meeting, 26 november–1 december, chicago IL. 2017 (Available at:)
- Effectively crowdsourcing radiology report annotations.in: Proceedings of the sixth international workshop on health text mining and information analysis. 2015: 109-114
- Clinical radiology UK workforce census 2015 report. 2016 (Available at:) (Accessed: November 2018)
- Quality management in musculoskeletal imaging: form, content, and diagnosis of knee MRI reports and effectiveness of three different quality improvement measures.AJR Am J Roentgenol. 2015; 204: 1069-1074
- Performance of ACR lung-RADS in a clinical CT lung screening program.J Am Coll Radiol. 2015; 12: 273-276
- Structured reporting of CT examinations in acute pulmonary embolism.J Cardiovasc Comput Tomogr. 2017; 11: 188-195
- Journal club: structured radiology reports are more complete and more effective than unstructured reports.AJR Am J Roentgenol. 2014; 203: 1265-1271
- Impact of a structured report template on the quality of MRI reports for rectal cancer staging.AJR Am J Roentgenol. 2015; 205: 584-588
- STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies.Radiology. 2015; 277: 826-832
- Adapting to artificial intelligence: radiologists and pathologists as information specialists.JAMA. 2016; 316: 2353-2354
- Computer-aided diagnosis: how to move from the laboratory to the clinic.Radiology. 2011; 261: 719-732
Published online: June 12, 2019
© 2019 Published by Elsevier Ltd on behalf of The Royal College of Radiologists.