Objectives
To evaluate the effect of super-resolution deep-learning-based reconstruction (SR-DLR) on the image quality of coronary CT angiography (CCTA).

Methods
Forty-one patients who underwent CCTA on a 320-row scanner were retrospectively included. Images were reconstructed with hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), normal-resolution deep-learning-based reconstruction (NR-DLR), and SR-DLR algorithms. For each image series, image noise and contrast-to-noise ratio (CNR) were quantified at the left main trunk, right coronary artery, left anterior descending artery, and left circumflex artery. Blooming artifacts from calcified plaques were measured. Image sharpness, noise magnitude, noise texture, edge smoothness, overall quality, and delineation of the coronary wall, calcified and noncalcified plaques, cardiac muscle, and valves were subjectively rated on a 4-point scale (1, worst; 4, best). The quantitative parameters and subjective scores were compared among the four reconstructions. Task-based image quality was assessed with a physical evaluation phantom: the detectability index for objects simulating the coronary lumen, calcified plaques, and noncalcified plaques was calculated from the noise power spectrum (NPS) and task-based transfer function (TTF).

Results
SR-DLR yielded significantly lower image noise and blooming artifacts, and higher CNR, than HIR, MBIR, and NR-DLR (all p < 0.001). SR-DLR attained the best subjective scores for all evaluation criteria, with significant differences from all other reconstructions (p < 0.001). In the phantom study, SR-DLR provided the highest NPS average frequency, TTF50%, and detectability for all task objects.

Conclusion
SR-DLR considerably improved the subjective and objective image quality and object detectability of CCTA relative to the HIR, MBIR, and NR-DLR algorithms.
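As context for how a detectability index can be derived from the TTF and NPS mentioned in the Methods, the sketch below implements the standard non-prewhitening (NPW) model-observer formulation, d'^2 = [∫|W·TTF|^2 df]^2 / ∫|W·TTF|^2·NPS df, where W is the task function of the object to be detected. All numeric inputs (grid size, task contrast, TTF and NPS shapes) are illustrative assumptions, not the study's measured data; the paper does not specify its observer model, so this is a plausible reconstruction of the approach, not the authors' code.

```python
import numpy as np

def detectability_index(task_w, ttf, nps, df):
    """NPW model-observer detectability index d'.

    task_w : 2-D task function (object contrast spectrum)
    ttf    : task-based transfer function on the same frequency grid
    nps    : noise power spectrum on the same grid
    df     : area of one frequency bin (delta_u * delta_v)
    """
    signal = np.sum((task_w * ttf) ** 2) * df          # integral of |W*TTF|^2
    noise = np.sum((task_w * ttf) ** 2 * nps) * df     # noise passed through the template
    return signal / np.sqrt(noise)                     # d' = signal / sqrt(noise)

# --- Illustrative (hypothetical) inputs ---
n, pitch = 128, 0.25                                   # grid size, pixel pitch [mm]
f = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))        # spatial frequencies [cycles/mm]
fu, fv = np.meshgrid(f, f)
fr = np.hypot(fu, fv)                                  # radial frequency
df = (f[1] - f[0]) ** 2

# Task: small high-contrast object (e.g., calcified plaque), Gaussian approximation
task = 300 * np.exp(-(np.pi * 1.0 * fr) ** 2)

# Hypothetical TTF/NPS pairs: a sharper, lower-noise reconstruction vs. a softer one
ttf_a = np.exp(-(fr / 0.9) ** 2)
ttf_b = np.exp(-(fr / 0.6) ** 2)
nps_a = 20.0 * fr * np.exp(-(fr / 0.7) ** 2) + 1e-6
nps_b = 60.0 * fr * np.exp(-(fr / 0.5) ** 2) + 1e-6

d_a = detectability_index(task, ttf_a, nps_a, df)
d_b = detectability_index(task, ttf_b, nps_b, df)
print(f"d' sharper/low-noise reconstruction: {d_a:.1f}")
print(f"d' softer/high-noise reconstruction: {d_b:.1f}")
```

Under this model, a reconstruction with a higher TTF and lower NPS (as reported for SR-DLR) necessarily yields a larger d' for the same task object, which is why the phantom results track the quantitative image-quality metrics.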