Level: Ph.D.
Areas of research: Artificial intelligence, Automation, BIM.

Scan to BIM

Abstract

With the prevalence of devices equipped with 3D data acquisition technologies (e.g., color and depth sensors), understanding 3D scenes from scanned data has attracted significant attention. In addition, deep neural networks (DNNs) have shown remarkable performance in various 3D applications, such as the construction of building information models (BIMs) from digitized 3D data (i.e., from digitization to BIM). However, several problems, such as the lack of labeled data for DNN training, poor extraction of feature descriptors for object recognition, and inefficient approaches to parametric reconstruction, call into question the effective use of DNNs for digitization to BIM. To overcome these problems, this research investigates a two-stage framework for BIM-based scene reconstruction from digitized data. In the first stage, we investigate a semi-supervised object detector with a geometry-aware point cloud (PC) feature extraction backbone. In the second stage, we aim to develop an Industry Foundation Classes (IFC) object retrieval approach that matches objects detected in the scanned scene against pre-registered IFC parametric objects. The proposed framework exploits the remarkable performance of DNNs for understanding parametric scenes from scanned data and enables “as-is” BIM reconstruction without traditional labor-intensive methods.
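
To make the two-stage structure concrete, the sketch below outlines how the stages could fit together at the interface level. It is a minimal illustration only: the class names, field layouts, and the assumption that each stage is supplied as a callable are ours, not part of the project.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Detection:
    """Stage-1 output: an oriented 3D box plus a learned feature descriptor."""
    center: np.ndarray      # (3,) box centre in scan coordinates
    size: np.ndarray        # (3,) width, depth, height
    heading: float          # rotation about the vertical axis, in radians
    descriptor: np.ndarray  # (D,) feature vector later used for IFC retrieval

@dataclass
class IfcPlacement:
    """Stage-2 output: a pre-registered parametric object placed in the scene."""
    ifc_type: str           # e.g. "IfcDoor", "IfcWindow", "IfcWall"
    library_id: str         # identifier of the matched library object
    center: np.ndarray
    size: np.ndarray
    heading: float

def scan_to_bim(points: np.ndarray,
                detect: Callable[[np.ndarray], List[Detection]],
                retrieve: Callable[[Detection], IfcPlacement]) -> List[IfcPlacement]:
    """Run the two-stage pipeline on a raw point cloud of shape (N, 3)."""
    detections = detect(points)               # stage 1: semi-supervised 3D detector
    return [retrieve(d) for d in detections]  # stage 2: IFC parametric retrieval
```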


Project results

1. Conduct a thorough literature review to identify research gaps and candidate solutions.
2. Propose a data-efficient, semi-supervised method to alleviate the problem of training the backbone of the DNN-based IFC object detector with a limited amount of labeled data (a minimal sketch follows this list).
3. Investigate state-of-the-art strategies for learning scale-preserving feature descriptors from digitized data for more robust 3D object detection.
4. Propose a parametric object retrieval method for end-to-end IFC-based scene reconstruction.
5. Validate the effectiveness of the proposed methods through a comprehensive performance evaluation against state-of-the-art alternatives.
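
One common way to realise the semi-supervised step in item 2 is confidence-based pseudo-labelling: a teacher detector labels unannotated scans, low-confidence predictions are discarded, and the remainder is mixed with the labelled set. The sketch below illustrates that idea only; the function names, the 0.9 threshold, and the box encoding are assumptions made for the example, not the project's actual method.

```python
import numpy as np

def filter_pseudo_labels(scores: np.ndarray,
                         boxes: np.ndarray,
                         labels: np.ndarray,
                         threshold: float = 0.9):
    """Keep only teacher predictions confident enough to serve as pseudo-labels.

    scores: (M,) confidence per predicted box
    boxes:  (M, 7) box parameters (centre, size, heading)
    labels: (M,) predicted class indices
    """
    keep = scores >= threshold
    return boxes[keep], labels[keep]

def build_mixed_batch(labeled, pseudo, ratio: float = 0.5, rng=None):
    """Draw a training batch mixing ground-truth and pseudo-labelled scans."""
    if not pseudo:
        return list(labeled)
    rng = rng or np.random.default_rng()
    n_pseudo = min(int(len(labeled) * ratio), len(pseudo))
    picks = rng.choice(len(pseudo), size=n_pseudo, replace=False)
    return list(labeled) + [pseudo[i] for i in picks]
```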

Project contributions

The first expected contribution of this research will be to generate reliable 3D pseudo-samples and to enable the backbone of the DNN-based 3D object detector to explicitly learn feature descriptors from them. Robust feature extraction plays an essential role in the convergence speed of DNNs and, consequently, in the robustness of object detection; the second expected contribution will therefore be a more efficient feature extraction scheme that outperforms existing approaches for IFC object detection. With regard to 3D IFC scene reconstruction, the third contribution will be a DNN-based IFC object retrieval approach that not only outperforms conventional rule-based methods in accuracy and adaptability to diverse multi-storey indoor environments, but also simplifies the IFC scene reconstruction procedure.
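
As one illustration of what a descriptor-based IFC object search could look like, the sketch below matches a detected object's feature vector against a small pre-registered library by cosine similarity. The library entries, descriptor dimensionality, and function names are made up for the example and are not the project's actual retrieval method.

```python
from typing import Dict, Tuple
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve_ifc_object(descriptor: np.ndarray,
                        library: Dict[str, np.ndarray]) -> Tuple[str, float]:
    """Return the pre-registered IFC object whose descriptor is closest to the detection."""
    best_id, best_sim = "", -1.0
    for obj_id, obj_desc in library.items():
        sim = cosine_similarity(descriptor, obj_desc)
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id, best_sim

# Toy example with a made-up 4-D descriptor space.
library = {
    "door_single_900mm": np.array([0.9, 0.1, 0.0, 0.2]),
    "window_fixed_1200mm": np.array([0.1, 0.8, 0.3, 0.0]),
}
query = np.array([0.85, 0.15, 0.05, 0.1])
print(retrieve_ifc_object(query, library))  # -> ('door_single_900mm', ...)
```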

Research team

Partners: PLANIFIKA.
