An integrated deep learning framework for CT-based mesoscopic segmentation and quantitative analysis of 3D-printed concrete

Abstract

Three-dimensional printed concrete (3DPC) technology offers a rapid and efficient approach to infrastructure construction. However, the quantitative analysis of 3DPC mesostructures, especially the accurate segmentation of computed tomography (CT) images, remains challenging because conventional threshold-based segmentation methods rely on manual parameter tuning and lack robustness under complex imaging conditions. This study addresses this gap by developing an advanced deep learning framework for the semantic segmentation of 3DPC CT images. Four representative deep learning models—Fully Convolutional Networks (FCN), U-Net, DeepLabv3+, and PointRend—were evaluated on 3DPC CT image segmentation. Among these models, U-Net exhibited superior performance across multiple metrics, including pixel accuracy, mean Intersection over Union (mIoU), Frequency Weighted Intersection over Union (FwIoU), and recall. To further enhance segmentation fidelity, the selected U-Net model was augmented through transfer learning and the incorporation of attention mechanisms. Experimental validation confirmed that the proposed enhancements improved segmentation performance, with gains of 3.1% in mean recall, 5.1% in mIoU, and 1.9% in pixel accuracy, underscoring the effectiveness of the methodology. In addition, a macroscopic statistical evaluation method was introduced to assess segmentation quality from a geometric perspective, confirming that the enhanced U-Net model accurately preserved feature size distributions and reduced total area errors to 1.33% for voids and 5.31% for unhydrated regions. The proposed method significantly improves segmentation accuracy and processing efficiency for 3DPC CT images, providing a robust solution for the intelligent analysis of 3DPC mesostructures.
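The abstract reports segmentation quality via pixel accuracy, mIoU, FwIoU, and mean recall. As an illustrative sketch only (the paper's own evaluation code is not reproduced here), these metrics are commonly computed from a per-class confusion matrix as follows; the 3-class toy matrix below is a hypothetical example, not data from the study:

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute common semantic-segmentation metrics from a per-class
    confusion matrix `conf`, where conf[i, j] counts pixels whose true
    class is i and predicted class is j."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                     # true positives per class
    total = conf.sum()
    pixel_accuracy = tp.sum() / total      # correctly labelled pixels
    # IoU per class: TP / (TP + FP + FN)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = tp / union
    miou = iou.mean()                      # unweighted class average
    # FwIoU weights each class IoU by its pixel frequency
    freq = conf.sum(axis=1) / total
    fwiou = (freq * iou).sum()
    # Recall per class: TP / (TP + FN)
    mean_recall = (tp / conf.sum(axis=1)).mean()
    return pixel_accuracy, miou, fwiou, mean_recall

# Hypothetical 3-class example (e.g., matrix / void / unhydrated region)
conf = np.array([[50, 2, 3],
                 [4, 30, 1],
                 [2, 1, 7]])
pa, miou, fwiou, mrecall = segmentation_metrics(conf)
```

Note that mIoU treats all classes equally, whereas FwIoU down-weights rare classes such as voids; reporting both, as the study does, guards against a model scoring well simply by fitting the dominant matrix phase.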

Publication DOI: https://doi.org/10.1016/j.jobe.2026.115869
Divisions: College of Engineering & Physical Sciences > Smart and Sustainable Manufacturing
College of Engineering & Physical Sciences > School of Infrastructure and Sustainable Engineering > Civil Engineering
College of Engineering & Physical Sciences
Aston University (General)
Funding Information: This work was supported by the Open Research Fund of the State Key Laboratory of Hydroscience and Engineering (Grant No. SKLHSE-KF-2025-D-07).
Additional Information: Copyright © 2026, Elsevier. This accepted manuscript version is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International https://creativecommons.org/licenses/by-nc-nd/4.0/
Uncontrolled Keywords: Attention mechanism, Computed tomography (CT) image analysis, Deep learning, Semantic segmentation, Three-dimensional printed concrete (3DPC), Architecture, Civil and Structural Engineering, Building and Construction, Safety, Risk, Reliability and Quality, Mechanics of Materials
Publication ISSN: 2352-7102
Last Modified: 01 Apr 2026 07:17
Date Deposited: 26 Mar 2026 11:17
Related URLs: https://www.sci ... 690X?via%3Dihub (Publisher URL)
https://www.sco ... ns/105032836800 (Scopus URL)
PURE Output Type: Article
Published Date: 2026-04-01
Published Online Date: 2026-03-16
Accepted Date: 2026-03-15
Authors: Zhao, Qiliang
Huang, Yuxiang
Wang, Bowen
Zhang, Qiuchi
Antwi-Afari, Maxwell Fordjour (ORCID Profile 0000-0002-6812-7839)
Zhao, Weijian
Sun, Bochao

Download


Version: Accepted Version

Access Restriction: Restricted to Repository staff only until 16 September 2026.

License: Creative Commons Attribution Non-commercial No Derivatives

