Authors
Rohit Kalla, Balaji Srinivasan, Ganapathy Krishnamurthi
Published In
IEEE Access, vol. 13, pp. 175292-175308

Abstract
We introduce a fully unsupervised framework for reconstructing X-ray CT images from truncated projections without prior truncation correction. By adding a Radon projection layer as the final layer of a deep learning model and training with a projection-based loss function, our method removes truncation-related artifacts, particularly ring artifacts, significantly faster than existing reconstruction approaches. We first demonstrate reconstruction on small-scale images and then extend the framework to large-scale or arbitrary-scale images reconstructed from truncated projections. For large-scale reconstruction, the framework distributes its fully connected layers across devices, enabling memory-efficient reconstruction even with limited GPU resources. Whereas existing iterative methods handle only mild truncation, our framework maintains high reconstruction quality and effectively eliminates ring artifacts even when truncation is substantial and affects clinically relevant areas. Evaluated with PSNR, SSIM, and MAE ± SD, the framework consistently yields higher PSNR and SSIM and lower MAE ± SD than standard algorithms under high-degree truncation, demonstrating its ability to reduce ring artifacts while preserving reconstruction quality.
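To make the core idea concrete, the sketch below illustrates a projection-domain loss built on a simple Radon forward projector: the loss compares the forward projection of a candidate reconstruction against the measured sinogram only on the non-truncated detector bins. This is a minimal numpy/scipy illustration, not the authors' implementation; the function names (`radon_layer`, `truncated_projection_loss`), the rotate-and-sum projector, and the binary detector mask are our assumptions about how such a loss could be set up.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_layer(image, angles_deg):
    # Simple Radon forward projection: rotate the image and sum along
    # one axis to get a parallel-beam projection per angle.
    # Output shape: (n_detector_bins, n_angles).
    return np.stack(
        [rotate(image, -a, reshape=False, order=1).sum(axis=0)
         for a in angles_deg],
        axis=1,
    )

def truncated_projection_loss(recon, measured_sino, angles_deg, detector_mask):
    # Projection-domain MSE restricted to detector bins that were
    # actually measured (mask = 1); truncated bins (mask = 0) are ignored.
    sino = radon_layer(recon, angles_deg)
    diff = (sino - measured_sino) * detector_mask[:, None]
    return float(np.mean(diff ** 2))

# Toy usage: a square phantom, a coarse angle set, and a mask that
# truncates the outer detector bins (hypothetical sizes for illustration).
phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0
angles = np.linspace(0.0, 180.0, 18, endpoint=False)
measured = radon_layer(phantom, angles)
mask = np.zeros(32)
mask[8:24] = 1.0  # only the central detector bins are measured

loss_true = truncated_projection_loss(phantom, measured, angles, mask)
loss_wrong = truncated_projection_loss(np.zeros((32, 32)), measured, angles, mask)
```

In an actual training loop the projector would need to be differentiable (e.g. implemented with a deep learning framework's rotation/summation ops) so that the projection-domain loss can backpropagate into the reconstruction network; the masking step is what lets training proceed without any prior truncation correction.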