LDAC2023 - 11th Linked Data in Architecture and Construction Workshop (15-16 June 2023)

LDAC aims at providing a forum for technical discussion on the topic of handling data in architecture and construction. A large part of the workshop is concerned with the use of (semantic) web technologies. Research is welcomed on the use of ontologies and vocabularies for representing building data, 3D geometry, product data, geospatial data, infrastructure data, HVAC data, sensor data, and so forth.


In 2023, the workshop will again be organised as a fully stand-alone event, in combination with a summer school, at a single location in Italy, namely Matera. As in past years, the workshop covers work from three perspectives: research, technical discussions and industry developments. In short, LDAC welcomes stakeholders from research and practice and serves as a common communication platform to advance and innovate the use of linked data in architecture and construction.


KEYNOTES

  • Connect Sensors to Perception via Semantic Stream
  • Danh Le Phuoc
    Technical University Berlin

  • Abstract: In this talk, I will present how to use semantic streams to connect sensory data to perception systems for robots, drones or cars so that they can understand their surroundings, e.g. roads, buildings and physical objects. Semantic stream representation mimics the semantic memory and episodic memory of the human cognitive system. Semantic memory refers to our brain’s repository of general world knowledge, whereas episodic memory refers to our “episodic memory system”, which encodes, stores, and allows access to “episodic memories”, e.g. recollections of personally experienced events situated within a unique spatial and temporal context. In this context, semantic and episodic memories are represented as semantic and stream graphs to integrate and fuse various kinds of sensory observations, e.g. images, videos and point clouds, into interlinked sub-symbolic and symbolic data streams at different levels of semantic abstraction. During the talk, I will share my experience in building multimodal data fusion pipelines for autonomous vehicles via a declarative programming model based on semantic streams. This programming model enables developers to write semantic stream fusion programs composed of if-then rules associated with stream data fusion operations for both reasoning and learning tasks.
  • Presentation (PDF)
  • Recording (MP4)
  • Data-driven AI vs. Model-driven AI: Which one should we trust more?

  • Francesca Lisi
    Università degli Studi di Bari "Aldo Moro"

  • Abstract: Artificial Intelligence (AI) is currently gaining increasing attention, also from the media, thanks to an impressive number of successful applications in a wide variety of domains. Overall, it is nowadays considered a disruptive technology that is already transforming our everyday life. Most of these applications of AI are heavily driven by data, and in doing so, they suffer from several limits, the most prominent of which is so-called bias, namely the presence of undesirable prejudices and stereotypes in the data. This is typical of black-box systems where machine learning algorithms are trained with a huge amount of data coming from sources that are not carefully chosen and/or curated. Even cutting-edge applications such as ChatGPT and other generative AI tools are prone to these shortcomings peculiar to Data-driven AI, and raise several issues as regards their reliability. In order to address these issues, ethical guidelines for a trustworthy AI have recently been defined by the High-Level Expert Group on AI of the European Commission. The guidelines encompass requirements such as transparency, which call for alternative AI approaches where the emphasis is more on the model than on the data. Model-driven AI indeed represents the attempt to capture our understanding of how the world works through explicit representations and rules. Over the last 20 years, the focus on models has led in AI to the concept of ontologies (and variants such as linked data) for semantic interoperability. The advantage of a logic-based knowledge representation is that it can be formally verified and validated by means of automated reasoning procedures. Notably, we can check its compliance with given properties of interest. The most interesting AI applications are yet to come, and will rely on the combination of data-driven and model-driven approaches. As an illustrative example, I will share my experience in declarative pattern mining for a domain where transparency is a strong requirement.
  • Presentation (PDF)
  • Recording (MP4)
  • Shape and Semantics for Urban Modelling – the Role of Geometry in City Digital Twins
  • Michela Mortara
    CNR - IMATI

  • Abstract: This talk will describe current computer graphics approaches to constructing a digital 3D representation of an urban context from real data. The geometric model is, by itself, a set of undifferentiated elements, yet it represents specific urban entities with attributes, relations, functionality and meaning. Identifying the salient elements and linking semantic information to their geometric counterparts enables automatic reasoning (multi-disciplinary optimization, monitoring, planning, simulation, prediction) on the city and its processes, as far as knowledge about land and urban morphology is concerned. The main focus will be on the acquisition of real 3D data, the reconstruction process and the semantic annotation of the 3D digital model. Examples of use cases that the geometric layer of the urban digital twin can address will be discussed, drawn from ongoing projects with Matera and Catania. Challenges and future directions will conclude the talk.

PLENARY SESSIONS

Plenary sessions include research papers, with the following presentations:

Plenary Session 1 - Digital Twinning and Asset Management (Thu 15/06, 10:30 - 12:30)

  • Linked data for the life cycle assessment of built assets
  • Calin Boje, Sylvain Kubicki, Tomas Navarrete Gutierrez and Thomas Beach
  • Abstract: Life Cycle Assessment (LCA) is a scientific method for the quantification of the environmental impacts of a product system, which is important for the sustainable design and management of our built environment. Conducting LCA on buildings requires access to highly contextualized information, which can be sourced from the Building Information Model (BIM) or monitoring systems in place. The interoperability between LCA domain tools and BIM tools is lacking. Our motivation lies in semantically bridging the LCA and built environment domains by adopting Semantic Web (SW) technologies. This would result in increased interoperability on the web, increased automation of information pipelines and more explainable impacts of complex contexts. In this paper we introduce the work in progress on the SemanticLCA ontology, with which we modelled several use cases for LCA of built assets. To demonstrate this, we showcase one case study at the building level, highlighting the semantic alignments between BIM models, LCA data and sensing devices. The paper discusses the implementation challenges and offers suggestions on how such an ontology can be used in the future.
  • Full paper (PDF)
  • Presentation (PDF)
  • Lessons Learned from Designing and Using bcfOWL
  • Oliver Schulz, Jyrki Oraskari and Jakob Beetz
  • Abstract: The bcfOWL ontology has been developed as part of the EU Horizon 2020 BIM4Ren project to enable communication between BIM Collaboration Format (BCF) Issues and Linked Building Data (LBD) concepts described on the Semantic Web. This paper evaluates the current approach in bcfOWL based on its use in the BIM4Ren project. The ontology serves as an interlanguage for component-based communication, providing a gateway to the systems of different domains. During its use in the project, new insights into the usability of the ontology were gained. We discuss these findings and provide suggestions to complement the original design principles of bcfOWL, targeting the LBD domain. Our work should guide future research in component-based communication in building-related projects and provide helpful considerations for future ontology designs. We also discuss potential areas of improvement for bcfOWL, including versioning, Ontology Design Patterns and validation. Overall, bcfOWL aims to improve querying capabilities and connectivity with Linked Building Data, making it a valuable tool for building-related projects.
  • Full paper (PDF)
  • Presentation (PDF)
  • Towards usable ICDD containers for ontology-driven data linking and link validation
  • Philipp Hagedorn, Madhumita Senthilvel, Hans Schevers and Lucas Verhelst
  • Abstract: Delivering data and documents of a building throughout its lifecycle is a common task in construction planning, engineering, and maintenance. The international standard Information Containers for Linked Document Delivery (ICDD) has been created to link data and documents together for integrated delivery. This paper describes two independent software prototypes for creating these hybrid linksets in ICDD containers and validating them with the Shapes Constraint Language (SHACL). This research focuses on ways to link data modeled in the Resource Description Framework (RDF) to non-RDF resources such as documents and entities within documents. It argues that current ICDD mechanisms for linking to RDF resources are cumbersome and do not exploit the advantages of Linked Data. It also argues that the ICDD specification is very restrictive regarding the usage of Linked Data, thereby blocking its full benefits. The paper presents two strategies to alleviate these barriers to using ICDD: one approach stays within the ICDD standard but adds additional agreements on top of it; the other approach is technically outside the ICDD standard. Eventually, conclusions and recommendations are drawn from the presented implementations and strategies.
  • Full paper (PDF)
  • Presentation (PDF)
  • dstv: An ontology-based extension of the DSTV-NC standard for the use of linked data in the automation of steel construction
  • Lukas Kirner, Jyrki Oraskari, Victoria Jung and Sigrid Brell-Cokcan
  • Abstract: To meet the demands of automated steel construction, there is a need for innovative ways to link process data, measured deviations, and tolerances. Our current research in robotic steel fabrication aims to tackle this challenge by creating an adaptable information model interface that can seamlessly incorporate cross-process considerations required for precise and efficient fabrication beyond current Building Information Modeling (BIM). The goal is to improve existing information interfaces and increase the utilization of flexible and partially automated robot concepts in steel construction. Our approach uses existing standards and product interfaces such as DSTV-NC in steel construction, which we convert and enhance through an ontology that includes tolerances and process parameters. The outcomes of our study contribute to the development of automated systems in construction and support small and medium-sized enterprises in steel construction by addressing challenges related to skills shortages, productivity, and occupational safety.
  • Full paper (PDF)
  • Presentation (PDF)
  • Development of a National Scale Digital Twin for Domestic Building Stock
  • Cathal Hoare, Tareq Alqazzaz, Usman Ali, Shushan Hu and James O'Donnell
  • Abstract: The operation of buildings accounted for 40% of global energy consumption and 27% of greenhouse gas (GHG) emissions in 2022. Access to integrated information sources about a building stock is key to supporting policy and decision makers as they pursue greenhouse gas reductions. However, over time, information has evolved into functional silos, which limits the ability of experts in functional areas to exchange data and implement broader decision support systems. This paper describes the creation of a national-scale digital twin for a national domestic building stock, achieved through the use of semantic technologies to create a homogeneous knowledge graph from multiple heterogeneous data sources. The utility of the digital twin is demonstrated by the development of a virtual surveyor. This tool is used to predict building features, such as window U-values, for buildings that have not been surveyed as part of the national EPC scheme. In turn, these values are used to enrich the digital twin.
  • Full paper (PDF)
  • Presentation (PDF)
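  • Illustrative sketch: the virtual surveyor described above predicts building features such as window U-values for dwellings that lack a survey. The minimal Python sketch below illustrates the idea with scikit-learn on made-up data; the feature names, values and model choice are assumptions for illustration, not the authors' actual pipeline.

    # Toy "virtual surveyor": predict window U-values for unsurveyed dwellings
    # from surveyed ones. Features, values and model choice are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    surveyed = pd.DataFrame({  # stand-in for EPC survey records
        "year_built": [1955, 1972, 1988, 1995, 2004, 2012, 2019, 1963],
        "dwelling_type": ["semi", "terrace", "semi", "detached",
                          "apartment", "detached", "apartment", "terrace"],
        "glazing": ["single", "single", "double", "double",
                    "double", "triple", "triple", "single"],
        "window_u_value": [4.8, 4.8, 2.9, 2.7, 2.3, 1.1, 1.0, 4.8],
    })

    X = pd.get_dummies(surveyed[["year_built", "dwelling_type", "glazing"]])
    y = surveyed["window_u_value"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("MAE on held-out surveys:", mean_absolute_error(y_test, model.predict(X_test)))
    # Predictions for unsurveyed dwellings would then be written back into the
    # knowledge graph to enrich the digital twin.
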
  • Towards a U.S. National Bridge and Infrastructure Data Dictionary: An Introduction
  • Aaron Costin and Marina Muller
  • Abstract: Building Information Modeling (BIM) has been gaining more acceptance outside of the traditional built environment. The U.S. transportation industry has recently adopted IFC as the standard data schema for the exchange of electronic engineering data. This is significant because the transportation agencies are progressing toward BIM as the successor to the standard plan set for highway infrastructure. Unfortunately, as previous studies indicate, IFC is currently limited in modeling the full capacity of bridges and infrastructure data exchange. Utilizing IFC P-Sets and ontology models has been the current workaround to enable the full information exchange. The challenge with properly modeling such ontologies is that capturing the necessary breadth of knowledge, from all stakeholder perspectives and for all bridge and infrastructure elements, is a large undertaking without direct support from the industry. Additionally, each state agency has its own processes, terminology, and culture, which further complicate the challenge. To mitigate these challenges, research has been ongoing to create a national data dictionary to support current U.S. national efforts and promote alignment across the various state agencies. This paper presents an overview of the research and the methodology for creating a bridge and infrastructure data dictionary. Current limitations and open challenges are presented. As this work is still ongoing, the goal is to continue the development of the data dictionary in a collaborative effort to ensure that it can be extended to include other transportation structures.
  • Full paper (PDF)
  • Presentation (PDF)

Plenary Session 2 - Data Dictionaries and Smart Buildings (Thu 15/06, 13:45 - 15:45)

  • Semantic bSDD: Improving the GraphQL, JSON and RDF Representations of buildingSmart Data Dictionary
  • Vladimir Alexiev, Mihail Radkov and Nataliya Keberle
  • Abstract: The buildingSmart Data Dictionary (bSDD) is an important shared resource in the Architecture, Engineering, Construction, and Operations (AECO) domain. It is a collection of datasets (“domains”) that define various classifications (objects representing building components, products, and materials), their properties, allowed values, etc. bSDD defines a GraphQL API, as well as REST APIs that return JSON and RDF representations. This improves the interoperability of bSDD and eases its deployment in architectural Computer-Aided Design (CAD) and other AECO software. However, bSDD data is not structured as well as it could be, and data retrieved via different APIs is not identical in content and structure. This lowers bSDD data quality, usability and trust. We conduct a thorough comparison and analysis of bSDD data. Based on this analysis, we suggest enhancements to make bSDD data better structured. The complete list of suggestions can be found at https://bsdd.ontotext.com/README.html. We implement many of the suggestions by refactoring the original data to make it better structured/interconnected, and more “semantic”. We provide a SPARQL endpoint using Ontotext GraphDB, and a GraphQL endpoint using Ontotext Platform Semantic Objects. Our detailed work is available at https://github.com/Accord-Project/bsdd (open source) and https://bsdd.ontotext.com (home page, schemas, data, sample queries).
  • Full paper (PDF)
  • Presentation (HTML)
  • Presentation (PDF)
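  • Illustrative sketch: the SPARQL endpoint mentioned above can be queried with any SPARQL client. The Python sketch below uses SPARQLWrapper; the endpoint URL and the label property are placeholders, since the actual query URL, schema and sample queries are documented at https://bsdd.ontotext.com.

    # Query the refactored bSDD data over SPARQL. The endpoint URL and the
    # label property below are assumptions; consult bsdd.ontotext.com for the
    # real schema and sample queries.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "https://bsdd.ontotext.com/sparql"  # placeholder query URL

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls ?label WHERE {
      ?cls rdfs:label ?label .                     # may be skos:prefLabel in practice
      FILTER(CONTAINS(LCASE(STR(?label)), "wall"))
    } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["cls"]["value"], "-", row["label"]["value"])
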
  • The semantic link between domain-based BIM models
  • Wojciech Teclaw, Mads Holten Rasmussen, Nathalie Labonnote, Jyrki Oraskari and Eilif Hjelseth
  • Abstract: Over the past few years, the construction industry has undergone technological advancements to improve efficiency and productivity. One of the latest innovations is using semantic web technologies to address interoperability issues and achieve machine interpretability of data. Despite several implementations of Industry Foundation Classes (IFC) to graph model converters, there has been no analysis of the semantic linkages between duplicated elements. This study aims to fill this gap by providing a semantic framework for linking elements in the graph representations of IFC models. This is achieved by reviewing commonly used ontologies and IFC to semantic technology converters, and by using the owl:sameAs predicate. The study presents a methodology for generating additional links between duplicated elements in IFC model graph representations using selected geometrical features to address interoperability issues. The methodology is tested on domain-based IFC models and efficiently links models into a federated source of information about interdisciplinary Building Information Modelling (BIM) models. The study’s findings are expected to enhance the interoperability and semantic capabilities of BIM models, promoting collaboration and improving the efficiency of the construction industry.
  • Full paper (PDF)
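  • Illustrative sketch: the owl:sameAs linking described above can be pictured with a small rdflib example. The ex: namespace and the matching criterion (an identical placeholder centroid) are illustrative assumptions, not the paper's implementation.

    # Link duplicated elements of two discipline models with owl:sameAs when a
    # selected geometric feature matches. Namespaces and the matching rule are
    # illustrative assumptions.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL

    EX = Namespace("https://example.org/")
    g = Graph()
    g.parse(data="""
    @prefix ex: <https://example.org/> .
    ex:arch_wall_01 a ex:Wall ; ex:centroid "12.4,7.1,3.0" ; ex:fireRating "EI60" .
    ex:hvac_wall_77 a ex:Wall ; ex:centroid "12.4,7.1,3.0" ; ex:hasPenetration ex:duct_12 .
    """, format="turtle")

    elements = list(g.subject_objects(EX.centroid))
    for s1, c1 in elements:
        for s2, c2 in elements:
            if s1 != s2 and c1 == c2:
                g.add((s1, OWL.sameAs, s2))   # same element duplicated across models

    # With the sameAs links in place, the two discipline models can be queried
    # as one federated source of information about the element.
    print(g.serialize(format="turtle"))
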
  • Making Urban Energy Use More Intelligible Using Semantic Digital Twins
  • Sander R. de Meij, Alex J.A. Donkers, Dujuan Yang and Matthijs Klepper
  • Abstract: There is great potential in urban energy modeling for mitigating the effects of increasing energy consumption in cities. However, there is limited integration of traditional building information and urban data in general. Therefore, this project suggests a novel data integration structure, the Neighborhood Energy Ontology (NEO). This ontology aims to connect urban data from different domains and scales to provide more intelligible insight to the end user. To assist with this goal, a dashboard was created which allows the end user to interact with the data and arrive at new insights. It is suggested that the created ontology, in combination with the dashboard, is a suitable proof of concept to show how semantic solutions can aid in improving the potential of urban energy modeling to mitigate the adverse effects of increasing urbanization.
  • Full paper (PDF)
  • Presentation (PDF)
  • Modular Knowledge integration for Smart Building Digital Twins
  • Isaac Fatokun, Arun Raveendran Nair, Thamer Mecharnia, Maxime Lefrançois, Victor Charpenay, Fabien Badeig and Antoine Zimmermann
  • Abstract: It is accepted in the Linked Data for Architecture and Construction (LDAC) community that generating knowledge graphs (KGs) from the BIM model of a building enables higher-level use cases such as integration with geographic information systems, operational system integration, semantic digital twins (DTs), or automatic compliance checking. However, existing approaches generate a large, monolithic knowledge graph that is difficult to integrate with other knowledge such as Thing Descriptions (TDs) of Internet of Things (IoT) devices, or information about office occupants and room occupancy schedules. In this work, we describe a set of three modular knowledge graphs that enable knowledge integration for the semantic DT of our building at Mines Saint-Étienne, leveraging the principles of Linked Building Data: (1) KGLBD is automatically generated from the Revit model of our building, (2) KGFOAF is semi-automatically generated from the employee directory of Mines Saint-Étienne, and (3) KGTD is automatically generated from the ETS5 project file describing the KNX network in our building using the W3C TD ontology, and points to real-time and historical data. Our approach offers an alternative to the state of the art in that: (1) relevant bits of the building’s KG can be accessed using a simple REST-like interface, where each small KG contains links to other entities that are themselves identified by an IRI and have a small KG accessible; (2) knowledge potentially served by different servers can be integrated in the same solution; (3) simple access control can be implemented for some parts of the global KG.
  • Full paper (PDF)
  • Presentation (PDF)
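  • Illustrative sketch: the modular-KG idea above can be pictured as several small graphs that are fetched independently and merged client-side before querying. The Turtle snippets, vocabularies (bot:, foaf:, ex:) and link properties below are simplified assumptions standing in for KGLBD, KGFOAF and KGTD.

    # Integrate three modular KGs (building topology, occupants, devices).
    # In the described system each module would be dereferenced from its own
    # REST-like interface; here they are inlined with assumed modelling.
    from rdflib import Graph

    modules = ["""
    @prefix bot: <https://w3id.org/bot#> .
    @prefix ex:  <https://example.org/> .
    ex:room_1_07 a bot:Space .
    """, """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <https://example.org/> .
    ex:alice a foaf:Person ; foaf:name "Alice" ; ex:hasOffice ex:room_1_07 .
    """, """
    @prefix ex: <https://example.org/> .
    ex:knx_temp_42 a ex:TemperatureSensor ; ex:locatedIn ex:room_1_07 .
    """]

    kg = Graph()
    for ttl in modules:          # one small graph per module, merged client-side
        kg.parse(data=ttl, format="turtle")

    q = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX ex:   <https://example.org/>
    SELECT ?person ?sensor WHERE {
      ?person a foaf:Person ; ex:hasOffice ?room .
      ?sensor a ex:TemperatureSensor ; ex:locatedIn ?room .
    }
    """
    for person, sensor in kg.query(q):
        print(person, "works in a room monitored by", sensor)
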
  • Metadata Schema Generation for Data-driven Smart Buildings
  • Lasitha Chamari, Joep van der Weijden, Lolke Boonstra, Stefan Hoekstra, Ekaterina Petrova and Pieter Pauwels
  • Abstract: A smart building is a combination of advanced information systems originating from different domains. Domains such as design and construction, maintenance, energy management, and automation & control have complex yet important relationships, and ensuring their connectivity is crucial for building operations. Semantic web technologies can be used to model and link these domains and their relationships using domain ontologies. To that end, a number of smart building ontologies are available in each domain. However, the process of generating a metadata schema for a given building by using those ontologies has not been investigated adequately. Further, tools that generate such metadata schemas are rare. Therefore, this study presents a semi-automatic metadata schema generator using an ontology database and a text search engine. The proposed approach is applied to a campus building. Building Automation System metadata was used in the metadata schema generator. Finally, this study shows how the generated metadata schema can be used to efficiently query and visualize time-series data for developing data-driven smart building applications.
  • Full paper (PDF)
  • Presentation (PDF)
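  • Illustrative sketch: the generator described above pairs an ontology database with a text search engine to map raw Building Automation System point names to ontology classes, with a human confirming each suggestion. The toy sketch below uses Python's standard difflib in place of a real search engine and a hand-written label list in place of the ontology database.

    # Map BAS point names to ontology class labels by text similarity.
    # Labels, point names and the similarity cutoff are hypothetical.
    import difflib

    ontology_labels = {
        "Zone Air Temperature Sensor": "ex:ZoneAirTemperatureSensor",
        "Supply Air Flow Setpoint":    "ex:SupplyAirFlowSetpoint",
        "Chilled Water Valve Command": "ex:ChilledWaterValveCommand",
    }
    bas_points = ["AHU01_ZoneTemp", "AHU01_SupplyAirFlow_SP", "CHW_Valve_Cmd"]

    def normalise(name: str) -> str:
        return name.replace("_", " ").replace("Temp", "Temperature").lower()

    for point in bas_points:
        match = difflib.get_close_matches(
            normalise(point), [l.lower() for l in ontology_labels], n=1, cutoff=0.3)
        if match:
            label = next(l for l in ontology_labels if l.lower() == match[0])
            print(point, "->", ontology_labels[label], "(to be confirmed by a user)")
        else:
            print(point, "-> no suggestion; manual mapping needed")
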
  • Learning partial correlation graph for multivariate sensor data and detecting sensor communities in smart buildings (short paper)
  • Xiang Xie, Manuel Herrera, Tejal Shah, Mohamad Kassem and Philip James
  • Abstract: The storage and processing of massive time series data collected from smart buildings consume considerable computational resources. However, major information redundancy can be found in smart building data. This paper proposes a partial correlation graph based approach to map the dependencies among sensors and detect the sensor communities in which the sensors are strongly “net” correlated. Specifically, the sparse partial correlation estimation method is used to learn the partial correlation graph. The Louvain algorithm is used to detect the communities of sensors by optimising the graph modularity. The case study demonstrates that the proposed method can identify spare sensors in the detected sensor communities and thus enhance the computational feasibility of smart building applications.
  • Full paper (PDF)
  • Presentation (PDF)
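  • Illustrative sketch: the pipeline summarised above (sparse partial correlation estimation, then Louvain community detection on the resulting graph) can be reproduced in outline with scikit-learn and networkx. The synthetic data, edge threshold and sizes below are arbitrary illustration values, not the paper's case study.

    # Learn a sparse partial correlation graph over sensor streams and detect
    # sensor communities with the Louvain algorithm. Synthetic data only.
    import numpy as np
    import networkx as nx
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(0)
    n_samples, n_sensors = 500, 12

    # Two synthetic groups of strongly related sensors.
    base_a = rng.normal(size=(n_samples, 1))
    base_b = rng.normal(size=(n_samples, 1))
    X = np.hstack([base_a + 0.3 * rng.normal(size=(n_samples, 6)),
                   base_b + 0.3 * rng.normal(size=(n_samples, 6))])

    precision = GraphicalLassoCV().fit(X).precision_

    # Partial correlation from the precision matrix: rho_ij = -p_ij / sqrt(p_ii * p_jj)
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)
    np.fill_diagonal(partial_corr, 0.0)

    # Keep only strong "net" correlations as edges, then detect communities.
    G = nx.Graph()
    G.add_nodes_from(range(n_sensors))
    for i in range(n_sensors):
        for j in range(i + 1, n_sensors):
            if abs(partial_corr[i, j]) > 0.05:
                G.add_edge(i, j, weight=abs(partial_corr[i, j]))

    communities = nx.community.louvain_communities(G, seed=0)
    print("Detected sensor communities:", communities)
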

Plenary Session 3 - Compliance Checking (Fri 16/06, 10:30 - 12:30)

  • Linked data for a construction big data platform (short paper)
  • Davide Simeone
  • Abstract: In the move towards a data-driven vision of the construction sector, companies are facing critical issues with the complexity and volume of data produced, shared, and elaborated during the distinct phases of a project. This research presents a solution to these issues, proposing a construction big data platform based on Linked Data to organize and integrate information related to the different disciplines. The Linked Data approach is critical for this process because of its ability to create connections between multiple data models – such as the ones adopted in each discipline – providing a homogeneous formalization of data and making it available for different applications and analytics.
  • Full paper (PDF)
  • Presentation (PDF)
  • Don’t Shoehorn, but Link Compliance Checking Data
  • Ruben Kruiper, Ioannis Konstas, Alasdair J.G. Gray, Farhad Sadeghineko, Richard Watson and Bimal Kumar
  • Abstract: Wouldn’t it be great if we could automatically check whether a Building Information Model (BIM) complies with all the relevant building regulations? Despite a plethora of motivations and a long history of research, the Automated Compliance Checking (ACC) problem is far from solved. We argue that a general solution to ACC may not be feasible, based on three fundamental difficulties: (1) semantic parsing of regulatory texts, (2) a mismatch in requirements for representing a building project and representing the building elements that ACC rules refer to, and (3) the lack of a strategy to align ACC rules to each other and to building representations. We identify the need for tools that support the use of building regulations for their diverse group of users, e.g., not only during compliance checking. Our conclusion is that a Linked Data approach is particularly suited to the development of such support tools.
  • Full paper (PDF)
  • Presentation (PDF)
  • Validation of building models against legislation using SHACL
  • Emma Nuyts, Jeroen Werbrouck, Ruben Verstraeten and Louise Deprez
  • Abstract: Building information is commonly available in a machine-readable format, whereas normative knowledge is represented in a human-readable way, making human intervention mandatory. An alternative to this manual checking procedure is Automated Compliance Checking (ACC). Commercial systems rely on hard-coded requirements, which are almost as error-prone and time-consuming as manually checking compliance, or focus solely on one specific software package. Hence, this research focuses on Semantic Web technologies to enable a faster and more transparent rule-checking process. The requirements from the legislation are defined using the Shapes Constraint Language (SHACL). This language facilitates defining constraints in the Terse RDF Triple Language (Turtle), making one set of constraints both human- and machine-readable. Additionally, SHACL enables compliance checking to be combined with quality constraints, ensuring that all necessary data is present in the building project. The proposed methodology is applicable to any type of prescriptive building legislation, provided that the data is defined in the building graph.
  • Full paper (PDF)
  • Presentation (PDF)
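  • Illustrative sketch: the SHACL-based checking described above can be tried with pySHACL. In the minimal sketch below, a building-graph fragment is validated against one shape that combines a quality constraint (the value must be present) with a prescriptive requirement; the ex: vocabulary and the 850 mm threshold are illustrative assumptions, not an actual clause of any legislation.

    # Validate a building graph against a SHACL shape standing in for one
    # (hypothetical) prescriptive requirement.
    from rdflib import Graph
    from pyshacl import validate

    data = Graph().parse(data="""
    @prefix ex: <https://example.org/> .
    ex:door_12 a ex:Door ; ex:clearWidthMM 780 .
    ex:door_13 a ex:Door ; ex:clearWidthMM 930 .
    """, format="turtle")

    shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <https://example.org/> .
    ex:DoorWidthShape a sh:NodeShape ;
        sh:targetClass ex:Door ;
        sh:property [
            sh:path ex:clearWidthMM ;
            sh:minCount 1 ;           # quality constraint: the data must be present
            sh:minInclusive 850 ;     # hypothetical requirement, not real legislation
            sh:message "Clear door width must be at least 850 mm." ;
        ] .
    """, format="turtle")

    conforms, _, report = validate(data, shacl_graph=shapes)
    print("Conforms:", conforms)   # False: ex:door_12 violates the shape
    print(report)
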
  • Leveraging Word Embeddings and Transformers to Extract Semantics from Building Regulations Text
  • Odinakachukwu Okonkwo, Amna Dridi and Edlira Vakaj
  • Abstract: In recent years, interest in knowledge extraction in the architecture, engineering and construction (AEC) domain has grown dramatically. Along with the advances in the AEC domain, a massive amount of data is collected from sensors, project management software, drones and 3D scanning. However, construction regulatory knowledge has remained primarily in the form of unstructured text. Natural Language Processing (NLP) has recently been introduced to the construction industry to extract underlying knowledge from unstructured data. For instance, NLP can be used to extract key information from construction contracts and specifications, identify potential risks, and automate compliance checking. It is considered impractical for construction engineers and stakeholders to author formal, accurate, and structured building regulatory rules. However, previous efforts on extracting knowledge from unstructured text in the AEC domain have mainly focused on basic concepts and hierarchies for ontology engineering using traditional NLP techniques, rather than digging deeply into the nature of the used NLP techniques and their ability to capture semantics from building regulations text. In this context, this paper focuses on the development of a semantics-based testing approach that studies the performance of modern NLP techniques, namely word embeddings and transformers, in extracting semantic regularities from building regulatory text. Specifically, this paper studies the ability of word2vec, BERT, and Sentence-BERT (SBERT) to extract semantic regularities from the British building regulations at both word and sentence levels. The UK building regulations code has been used as a dataset. The ground truth of semantic regularities has been manually curated from the well-established Brick Ontology to test the ability of the proposed NLP techniques to capture semantic regularities from the building regulatory text. Both quantitative and qualitative analyses have been performed, and the obtained results show that modern NLP techniques can reliably capture semantic regularities from building regulations text at both word and sentence levels, with an accuracy that reaches 80% at the word level and 100% at the sentence level.
  • Full paper (PDF)
  • Presentation (PDF)
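  • Illustrative sketch: the sentence-level part of the study above compares SBERT embeddings of regulatory sentences. The short sketch below uses the sentence-transformers library; the model name, example sentences and pairing are illustrative assumptions, not the paper's dataset or evaluation protocol.

    # Sentence-level semantic similarity with SBERT between regulation text and
    # candidate paraphrases. Model choice and sentences are hypothetical.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")   # a commonly used small SBERT model

    regulation = [
        "Every doorway shall provide a minimum clear opening width of 850 mm.",
        "A smoke detector shall be installed in every habitable room.",
    ]
    candidates = [
        "Doors need a clear width of at least 850 millimetres.",
        "Each habitable room requires a smoke alarm.",
    ]

    scores = util.cos_sim(model.encode(regulation, convert_to_tensor=True),
                          model.encode(candidates, convert_to_tensor=True))
    for i, sentence in enumerate(regulation):
        best = int(scores[i].argmax())
        print(f"{sentence}\n  best match: {candidates[best]} ({float(scores[i][best]):.2f})")
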
  • Taking stock: a Linked Data inventory of Compliance Checking terms derived from Building Regulations (short paper)
  • Ruben Kruiper, Ioannis Konstas, Alasdair J.G. Gray, Farhad Sadeghineko, Richard Watson and Bimal Kumar
  • Abstract: Compliance Checking (CC) would be a lot easier if we could automatically map between (1) terms that occur in building regulations and (2) elements of buildings and building products. However, the terminology used in the regulations is vastly different from the terminology found in Building Information Models (BIM). We are therefore forced to somehow shoehorn the vocabulary of regulatory terms into a set of classes that may well be several orders of magnitude smaller. This paper aims to reduce the gap between terms found verbatim in the regulations and the classes that exist in Linked Data Vocabularies in Architecture and Construction. We explore the automated extraction of domain terminology from building regulations and interlink the resulting terms with existing controlled vocabularies like Uniclass. The resulting Knowledge Graph (KG) can be used to suggest relevant and related domain terminology, which facilitates collecting the inventory of Linked Data terms required for CC.
  • Full paper (PDF)
  • Presentation (PDF)
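  • Illustrative sketch: the interlinking step described above can be pictured as extracted regulatory terms modelled as SKOS concepts and linked to a controlled vocabulary, after which related terms can be suggested by querying the graph. The Uniclass-style IRIs and the matches below are placeholders, not the paper's actual alignment.

    # Represent extracted regulation terms as SKOS concepts, link them to a
    # controlled vocabulary, and suggest related terms. IRIs are hypothetical.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    EX = Namespace("https://example.org/terms/")
    UC = Namespace("https://example.org/uniclass/")   # placeholder, not the real Uniclass namespace

    g = Graph()
    links = {                                         # placeholder matches and codes
        "escape stair": UC["Pr_40_30"],
        "protected stairway": UC["Pr_40_30"],
    }
    for term, target in links.items():
        concept = EX[term.replace(" ", "_")]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(term, lang="en")))
        g.add((concept, SKOS.closeMatch, target))

    q = """
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?a ?b WHERE {
      ?ca skos:prefLabel ?a ; skos:closeMatch ?t .
      ?cb skos:prefLabel ?b ; skos:closeMatch ?t .
      FILTER(?ca != ?cb)
    }
    """
    for a, b in g.query(q):
        print(f'"{a}" is related to "{b}" via a shared controlled-vocabulary entry')
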
  • Terrestrial Laser Scanning for Surveying and 3D Modelling of Underground Built Heritage: A Case Study of Hypogea in the Sassi of Matera
  • Nicla Maria Notarangelo, Nicola Capece, Gilda Manfredi, Nicodemo Abate, Nicola Masini, Aurelia Sole and Ugo Erra
  • Abstract: This study explores the potential of Terrestrial Laser Scanner (TLS) technology for surveying and generating accurate three-dimensional (3D) models of Underground Built Heritage (UBH), using a hypogea complex in the Sassi of Matera (Italy) as a case study. This urban ecosystem, built through excavation and regeneration, features a vast array of underground structures with complex geometries and intricate details. The survey conducted using TLS technology and the reconstruction using Reality Capture (RC) software produced a highly detailed 3D model of the hypogea that serves as a basis for semantically enriched Building Information Modeling (BIM). The results demonstrate the potential of advanced techniques through a workflow that combines TLS and RC to achieve adequate UBH representations and fill the gap in knowledge and documentation that hinders management, exploitation, and valorization.
  • Full paper (PDF)
  • Presentation (PDF)

INDUSTRY TRACK

The Industry Track includes in-progress on-topic developments presented by company representatives:

Industry Track Session 1 (Thu 15/06, 16:15 - 17:30)

  • Asset Information Management for a Communications Network in Ireland
  • Aonghus O’Keeffe and David Torrado
  • Abstract: An infrastructure project in Ireland required the installation of new power and communications cables. Running these new cables necessitated works on new and existing ducts and chambers. Historically, as-built records from such projects would comprise documents in unstructured data formats (e.g., PDF, CAD). However, the client, a major public asset owner, sought to collate structured asset information that could be used to improve future decision making. The client did not have an overarching data strategy at the outset of the project. Further, the client did not have an asset information management system (AIMS) for duct, chamber and cable asset information. As such, the presenters proposed a vendor-neutral, standards-based approach to asset information management, with machine-readable rules and instance data using LD/SW technologies. This approach was intended to futureproof the data such that it could be consumed by any adopted AIMS. Further, demonstration of the approach itself was intended to inform the stakeholders of the potential benefits and disbenefits of its adoption across other areas of the asset network or other asset types. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)
  • bhOWL: BHoM with Semantic Web Technologies
  • Alessio Lombardi, Diellza Elshani, Thomas Wortmann, Al Fisher
  • Abstract: Architecture, Engineering and Construction (AEC) projects require multidisciplinary solutions resulting in several disciplinary representations for one physical asset. However, interoperability issues between software often hinder disciplinary data integration, leading to the late recognition of violated design constraints. Building Habitat object Model (BHoM) is an open-source software framework initiated by Buro Happold, and it provides a unified data model for building design and construction information. Semantic Web technologies can link data effectively, and integrating BHoM and Semantic Web can enhance information exchange efficiency and accuracy in the building industry. To achieve this integration, Buro Happold and the Institute for Computational Design and Construction, Chair for Computing in Architecture (ICD/CA) from the University of Stuttgart, have been working on a joint research project. This work covers translating BHoM's object model to Semantic Web formal languages and its integration with design software as well as graph databases. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)
  • An open endpoint and framework for the development of linked data for building energy systems
  • James Allan, JongGwan An, Reto Fricker, Sascha Stoller, Philipp Heer
  • Abstract: The NEST building is a research and innovation building in Switzerland. This resource includes an endpoint to a linked data graph about the building and its energy systems (https://graphdb.nestcloud.ch/). The data collected is available for researchers and industry professionals to develop models and test applications. Our objective is to provide a linked data resource that industry professionals and researchers can use to create applications and evaluate the performance of different linked data modelling methods. Our primary motivation is to improve the energy performance of the building, which includes both simulation and control of the energy systems. We have diverse datasets containing information about the building and its energy systems. The datasets include BIM models, sensor metadata, engineering schematics and 3D city models. There are also real-time data streams and historical data from sensors and actuators installed in the building. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)
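  • Illustrative sketch: the endpoint above can be explored with any SPARQL client. The vocabulary-agnostic Python sketch below uses SPARQLWrapper; the repository path is a placeholder, so the query URL published with the resource at https://graphdb.nestcloud.ch/ should be used instead.

    # Explore the NEST linked data endpoint with a vocabulary-agnostic query
    # (instance counts per class). The repository path below is a placeholder.
    from SPARQLWrapper import SPARQLWrapper, JSON

    ENDPOINT = "https://graphdb.nestcloud.ch/repositories/nest"   # placeholder path

    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery("""
    SELECT ?type (COUNT(?s) AS ?n)
    WHERE { ?s a ?type }
    GROUP BY ?type
    ORDER BY DESC(?n)
    LIMIT 20
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["type"]["value"], row["n"]["value"])
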
  • The use of Semantic Web Technologies to provide portfolio-level end-of-life analysis of the Dutch transport infrastructure
  • Esra Bektas
  • Abstract: The Netherlands struggles with revitalizing its transport infrastructure and building stock due to physical ageing. Changing demand and threats (both man-made and climate-related) give rise to new requirements. To meet such changes and keep structures up to date, asset managers often need to plan more frequent or larger interventions, which influences the economic performance of the structures. To make the transport infrastructure and land use fit for the future, asset managers need to carefully analyze their portfolio and plan their interventions in a systematic way. For that, it becomes important for asset managers to identify civil structures not only based on their physical deterioration but also on their functional limitations and/or economic ineffectiveness. This means having an integral view of the structures’ performance. For that view, it is essential to understand the variables that describe changing demand in the area, object characteristics (designed and changed), and the way such changes affect objects and their components. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)
  • Graph and Graphics: Combining two powerhouses into one machine
  • Philipp Dohmen, Emmanouil Argyris, Markus Färber, Michael Reeßing
  • Abstract: In our industry, BIM models are a valuable source but will never be the “single source of truth”, as software vendors like to claim. However, combining a graph database that stores geometry as glTF with a capable 3D viewer can enable a powerful data-driven approach to working with 3D models. Shifting the focus to data and making geometry just another representation of information could be a real game changer. In this approach, 3D models are not stored as separate files but as part of the graph database alongside other types of data, making it easier to manage, query, and analyse the 3D models as part of a larger data ecosystem. [...]
  • Extended Abstract (PDF)

Industry Track Session 2 (Fri 16/06, 13:45 - 14:30)

  • Using semantic rules for generating SPARQL from semantic mark-up
  • Nick Nisbet
  • Abstract: This industry presentation will show in detail a methodology that uses semantic rules to map regulatory and requirements documents to SPARQL, by first annotating the document with a semantic mark-up called RASE. The methodology uses parsing tables that are themselves expressed using semantic mark-up, demonstrating that knowledge can be captured and reviewed separately from execution tools and environments. Automated compliance checking depends on the capture of complex logical dependencies. SPARQL has been chosen as a target as a complement to several other knowledge and query representations already demonstrated, and in particular to show integration with existing tools for visualization and testing of semantic representations of buildings. SPARQL has also been classified within a ‘gold standard’ of description logics. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)
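  • Illustrative sketch: the mapping idea presented above can be pictured as a clause annotated with RASE roles (requirement, applicability, selection, exception) being turned into a SPARQL check by a small template. The annotation structure, ex: vocabulary and query shape below are assumptions for illustration, not the presenter's actual parsing tables.

    # Generate a SPARQL check from a RASE-style annotated clause. The clause
    # structure and the query template are illustrative assumptions.
    clause = {
        "applicability": {"pattern": "?x a ex:Door ."},
        "requirement":   {"property": "ex:clearWidthMM", "operator": ">=", "value": 850},
    }

    def rase_to_sparql(c: dict) -> str:
        req = c["requirement"]
        return f"""PREFIX ex: <https://example.org/>
    SELECT ?x WHERE {{
      {c['applicability']['pattern']}
      OPTIONAL {{ ?x {req['property']} ?v . }}
      # non-compliant if the value is missing or violates the requirement
      FILTER(!BOUND(?v) || !(?v {req['operator']} {req['value']}))
    }}"""

    print(rase_to_sparql(clause))
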
  • Implementing and managing mappings for data transformation using SHACL Rules
  • Lucas Verhelst
  • Abstract: Large asset-managing organizations typically have multiple systems that store and use similar or related information. Data management is a growing challenge for these organisations, which is why some of them have started to develop common data models in the form of ontologies. Implementing and managing mappings for data transformation using SHACL rules, RML, and Liquid templates can be a powerful solution for creating a single source of truth from different source systems. This allows organizations to move away from software with hard-coded mappings and towards a data-driven platform. The software can then be generic, and the mappings to the common data model can be reused for multiple purposes. However, there are also some challenges to consider when implementing such a platform. One of these challenges is that there is currently little off-the-shelf software available to process SHACL rules, which means that organizations depend on a few implementations, each with its own issues. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)
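  • Illustrative sketch: the mapping pattern discussed above can be tried with pySHACL, which supports SHACL Advanced Features rules. In the minimal sketch below, SHACL rules rewrite a source-system record into a (hypothetical) common data model; all namespaces and property names are placeholders, and the RML/Liquid parts of the platform are not shown.

    # Express a source-to-common-data-model mapping as SHACL rules and execute
    # it with pySHACL (advanced=True enables SHACL-AF rules). All names are
    # hypothetical placeholders.
    from rdflib import Graph
    from pyshacl import validate

    data = Graph().parse(data="""
    @prefix src: <https://example.org/source/> .
    src:asset_001 a src:Bridge ; src:spanLengthMeters 42 .
    """, format="turtle")

    shapes = Graph().parse(data="""
    @prefix sh:  <http://www.w3.org/ns/shacl#> .
    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix src: <https://example.org/source/> .
    @prefix cdm: <https://example.org/common/> .

    src:BridgeMappingShape a sh:NodeShape ;
        sh:targetClass src:Bridge ;
        sh:rule [ a sh:TripleRule ;
                  sh:subject sh:this ;
                  sh:predicate rdf:type ;
                  sh:object cdm:Bridge ] ;
        sh:rule [ a sh:TripleRule ;
                  sh:subject sh:this ;
                  sh:predicate cdm:span ;
                  sh:object [ sh:path src:spanLengthMeters ] ] .
    """, format="turtle")

    # inplace=True asks pySHACL to keep the triples inferred by the rules in `data`.
    validate(data, shacl_graph=shapes, advanced=True, inplace=True)
    print(data.serialize(format="turtle"))   # now also contains the cdm: view
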
  • Semantisation of Rules for Automated Compliance Checking
  • Edlira Vakaj, Maxime Lefrançois, Amna Dridi, Thomas Beach, Mohamed Gaber, Gonçal Costa Jutglar, He Tan
  • Abstract: The Architecture, Engineering, and Construction (AEC) industry is subject to numerous regulations and standards that govern the design, construction, and maintenance of buildings and infrastructure. These regulations often involve complex language and technical jargon, which can be difficult to understand and apply in practice. Semantisation, or the process of transforming natural language into machine-readable data with explicit meaning, can address this challenge by creating structured representations of regulations that can be exchanged and processed by computers. This can enable automated compliance checking, facilitate communication between stakeholders, and improve the efficiency and effectiveness of regulatory enforcement. [...]
  • Extended Abstract (PDF)
  • Presentation (PDF)