Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy.
- Authors: "Li L", "Qin L", "Xu Z", "Yin Y", "Wang X", "Kong B", "Bai J", "Lu Y", "Fang Z", "Song Q", "Cao K", "Liu D", "Wang G", "Xu Q", "Fang X", "Zhang S", "Xia J", "Xia J"
Background: Coronavirus disease 2019 (COVID-19) has spread widely around the world since the beginning of 2020. It is desirable to develop automatic and accurate detection of COVID-19 using chest CT. Purpose: To develop a fully automatic framework to detect COVID-19 using chest CT and evaluate its performance. Materials and Methods: In this retrospective and multicenter study, a deep learning model, the COVID-19 detection neural network (COVNet), was developed to extract visual features from volumetric chest CT scans for the detection of COVID-19. CT scans of community-acquired pneumonia (CAP) and other non-pneumonia abnormalities were included to test the robustness of the model. The datasets were collected from six hospitals between August 2016 and February 2020. Diagnostic performance was assessed with the area under the receiver operating characteristic curve, sensitivity, and specificity. Results: The collected dataset consisted of 4352 chest CT scans from 3322 patients. The average patient age (±standard deviation) was 49 years ± 15, and there were slightly more men than women (1838 vs 1484, respectively; P = .29). The per-scan sensitivity and specificity for detecting COVID-19 in the independent test set were 90% (95% confidence interval [CI]: 83%, 94%; 114 of 127 scans) and 96% (95% CI: 93%, 98%; 294 of 307 scans), respectively, with an area under the receiver operating characteristic curve of 0.96 (P < .001). The per-scan sensitivity and specificity for detecting CAP in the independent test set were 87% (152 of 175 scans) and 92% (239 of 259 scans), respectively, with an area under the receiver operating characteristic curve of 0.95 (95% CI: 0.93, 0.97). Conclusion: A deep learning model can accurately detect coronavirus 2019 and differentiate it from community-acquired pneumonia and other lung conditions. © RSNA, 2020. Online supplemental material is available for this article.
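The per-scan sensitivity and specificity reported above are simple proportions, and the accompanying confidence intervals can be reproduced with a standard binomial interval. A minimal sketch (counts taken from the abstract; the Wilson score interval is an illustrative choice, since the abstract does not state which interval method the authors used):

```python
from math import sqrt

def proportion(k, n):
    """Point estimate of a proportion, e.g. sensitivity = TP / (TP + FN)."""
    return k / n

def wilson_ci(k, n, z=1.96):
    """Wilson 95% score interval for a binomial proportion (illustrative;
    the paper does not state which interval method was used)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# COVID-19, independent test set (counts from the abstract)
sens = proportion(114, 127)  # ~0.90, sensitivity
spec = proportion(294, 307)  # ~0.96, specificity
ci_lo, ci_hi = wilson_ci(114, 127)
```

For these counts the Wilson interval happens to match the reported 90% (83%, 94%) sensitivity, though other binomial interval methods would give slightly different bounds.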
OBJECTIVES:The aim was to evaluate the image quality and sensitivity to artifacts of the compressed sensing (CS) acceleration technique, applied to 3D or breath-hold sequences in different clinical applications from brain to knee. METHODS:CS with an acceleration of 30% to 60% and conventional MRI sequences were performed in 10 different applications in 107 patients, leading to 120 comparisons. Readers were blinded to the technique for quantitative (contrast-to-noise ratio or functional measurements for cardiac cine) and qualitative (image quality, artifacts, diagnostic findings, and preference) image analyses. RESULTS:No statistically significant difference in image quality or artifacts was found for any sequence except for the cardiac cine CS sequence for one of the two readers and for the wrist 3D proton density (PD)-weighted CS sequence, which showed fewer motion artifacts owing to the reduced acquisition time. The contrast-to-noise ratio was lower for the elbow CS sequence but was not statistically different in the other applications. Diagnostic findings were similar between the conventional and CS sequences for all comparisons except for four cases in which motion artifacts corrupted either the conventional or the CS sequence. CONCLUSIONS:The evaluated CS sequences are ready to be used in daily clinical practice, except for the elbow application, which requires a lower acceleration. The CS factor should be tuned for each organ and sequence to obtain good image quality. This yields 30% to 60% acceleration in the applications evaluated in this study, which has a significant impact on clinical workflow. KEY POINTS:• Clinical implementation of compressed sensing (CS) reduced scan times by at least 30% with only a minor penalty in image quality and no change in diagnostic findings. • The CS acceleration factor has to be tuned separately for each organ and sequence to guarantee image quality similar to that of conventional acquisition.
• At least 30% and up to 60% acceleration is feasible in specific sequences in clinical routine.
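The quantitative comparison above rests on the contrast-to-noise ratio, which reduces to simple arithmetic. A sketch under one common definition (the paper may use a different CNR formulation; all numbers here are hypothetical):

```python
def cnr(signal_a, signal_b, noise_sd):
    """Contrast-to-noise ratio between two regions of interest, using one
    common definition: |S_a - S_b| / sigma_noise. The evaluated paper may
    define CNR differently; this is for illustration only."""
    return abs(signal_a - signal_b) / noise_sd

# Hypothetical mean signal intensities (arbitrary units) and background noise SD:
value = cnr(220.0, 140.0, 16.0)  # -> 5.0
```

A lower CNR for an accelerated sequence, as reported for the elbow application, would show up directly as a smaller ratio under this definition.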
BACKGROUND:The main surgical techniques for spontaneous basal ganglia hemorrhage include stereotactic aspiration, endoscopic aspiration, and craniotomy. However, credible evidence is still needed to validate the effect of these techniques. OBJECTIVE:To explore the long-term outcomes of the three surgical techniques in the treatment of spontaneous basal ganglia hemorrhage. METHODS:Five hundred and sixteen patients with spontaneous basal ganglia hemorrhage who received stereotactic aspiration, endoscopic aspiration, or craniotomy were reviewed retrospectively. Six-month mortality and the modified Rankin Scale score were the primary and secondary outcomes, respectively. A multivariate logistic regression model was used to assess the effects of the different surgical techniques on patient outcomes. RESULTS:For the entire cohort, the 6-month mortality in the endoscopic aspiration group was significantly lower than that in the stereotactic aspiration group (odds ratio (OR)=4.280, 95% CI 2.186 to 8.380); the 6-month mortality in the endoscopic aspiration group was lower than that in the craniotomy group, but the difference was not significant (OR=1.930, 95% CI 0.835 to 4.465). A further subgroup analysis was stratified by hematoma volume. The mortality in the endoscopic aspiration group was significantly lower than in the stereotactic aspiration group in the medium hematoma subgroup (≥40 to <80 mL; OR=2.438, 95% CI 1.101 to 5.402) and the large hematoma subgroup (≥80 mL; OR=66.532, 95% CI 6.345 to 697.675). Compared with the endoscopic aspiration group, a trend towards increased mortality was observed in the large hematoma subgroup of the craniotomy group (OR=8.721, 95% CI 0.933 to 81.551). CONCLUSION:Endoscopic aspiration can decrease the 6-month mortality of spontaneous basal ganglia hemorrhage, especially in patients with a hematoma volume ≥40 mL.
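The odds ratios above come from a multivariate logistic regression, so they are adjusted for covariates. The unadjusted version of the same quantity is just 2×2-table arithmetic; a sketch with hypothetical counts (not taken from the study) showing how a raw OR and its Wald interval are computed:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = deaths in group 1, b = survivors in group 1,
    c = deaths in group 2, d = survivors in group 2.
    Note: the study's ORs are adjusted (multivariate logistic regression),
    so this raw calculation is for illustration only."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts, not from the study:
or_, lo, hi = odds_ratio_ci(20, 80, 6, 94)
```

An OR above 1 with a CI excluding 1 (as for the stereotactic-vs-endoscopic comparison above) indicates significantly higher odds of death in the first group.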
OBJECTIVE:The primary purpose of this study was to evaluate the effectiveness of a three-dimensional (3D) software tool (Smart Planes) for displaying fetal brain planes, and the secondary purpose was to evaluate its accuracy in performing automatic measurements. MATERIAL AND METHODS:This prospective study included singleton fetuses with a gestational age (GA) greater than 18 weeks. Transabdominal two-dimensional ultrasound (2DUS) and 3D Smart Planes images were used to obtain the basic planes of the fetal brain, and five parameters were measured. The images, obtained by either manual 2D or automatic 3D operation, were reviewed by two experienced sonographers. Agreement between the two sets of measurements was analyzed. RESULTS:A total of 226 cases were included. The rates of successful detection by automatic display were as high as 80%. There was substantial agreement between the measurements of the biparietal diameter, head circumference, and transcerebellar diameter, but poor agreement between the measurements of the cisterna magna and lateral ventricle width. CONCLUSIONS:Smart Planes might be valuable for the rapid evaluation of the fetal brain because it simplifies the evaluation process. However, the technology requires improvement. In addition, this technology cannot replace conventional manual US scans; it can only be used as an additional approach.
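The abstract does not state which agreement statistic was used for the paired 2D-manual and 3D-automatic measurements. One common choice for continuous paired measurements is Bland-Altman analysis; a sketch with hypothetical data (illustrative only, not the study's method or numbers):

```python
import statistics as st

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement for paired
    measurements. Illustrative only: the abstract does not specify
    which agreement statistic the authors used."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired biparietal-diameter measurements (mm), manual 2D vs automatic 3D:
manual_2d = [46.1, 50.3, 55.2, 60.8, 65.0]
auto_3d = [46.4, 50.0, 55.6, 61.1, 64.7]
bias, loa_low, loa_high = bland_altman(manual_2d, auto_3d)
```

A small bias with narrow limits of agreement would correspond to the "substantial agreement" reported for the biparietal diameter, whereas wide limits would match the poor agreement seen for the cisterna magna and lateral ventricle width.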