Research Results (Current Stage)
The current results are primarily methodological and demonstrative rather than final comfort-performance outcomes. The implemented system generates synchronized, multi-dimensional time-series data combining calibrated sound pressure levels (SPL), sound event classification probabilities, timestamps, and device identifiers.
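One record of such a stream can be sketched as a simple structure. The field and class names below are illustrative assumptions for exposition, not the system's actual schema:

```python
from dataclasses import dataclass, field
import time

@dataclass
class AcousticSample:
    """One synchronized sample; field names are assumed, not the real schema."""
    device_id: str                      # identifier of the sensing node
    timestamp: float                    # UNIX epoch seconds
    spl_dba: float                      # calibrated A-weighted SPL in dB
    event_probs: dict = field(default_factory=dict)  # class -> probability

# Hypothetical sample from one device:
sample = AcousticSample(
    device_id="node-01",
    timestamp=time.time(),
    spl_dba=52.3,
    event_probs={"speech": 0.71, "keyboard": 0.18, "hvac": 0.11},
)
```

Keeping the energetic value (`spl_dba`) and the semantic distribution (`event_probs`) in the same timestamped record is what makes the later coupling of the two dimensions straightforward.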
Visualization strategies were developed to represent acoustic conditions through stacked area plots, categorical bar plots, and superimposed SPL curves, enabling clearer interpretation of temporal acoustic states.
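The combination of a stacked area plot for class probabilities with a superimposed SPL curve can be sketched as follows; the class names and values are synthetic placeholders, not measured data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for non-interactive rendering
import matplotlib.pyplot as plt
import numpy as np

t = np.arange(60)                                   # one minute, 1 s frames
# Synthetic per-frame class probabilities for three illustrative classes:
probs = np.random.default_rng(0).dirichlet([2, 1, 1], size=60).T  # (3, 60)
spl = 45 + 5 * np.sin(t / 10)                       # synthetic SPL trace

fig, ax = plt.subplots()
ax.stackplot(t, probs, labels=["speech", "keyboard", "hvac"], alpha=0.6)
ax.set_xlabel("time (s)")
ax.set_ylabel("class probability")

ax2 = ax.twinx()                                    # second y-axis for SPL
ax2.plot(t, spl, color="black", linewidth=1.5, label="SPL (dBA)")
ax2.set_ylabel("SPL (dBA)")
```

The twin-axis overlay lets a reader align loudness peaks with the semantic composition of the same instant, which is the interpretive gain the visualization strategy targets.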
The system demonstrates the ability to couple energetic (SPL) and semantic (sound event) information, allowing differentiation between acoustically similar but perceptually different conditions.
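The idea of separating acoustically similar but perceptually different conditions can be illustrated with a toy function; the dB thresholds and class labels are assumptions for the example, not calibrated values from the study:

```python
def describe_condition(spl_dba: float, event_probs: dict) -> str:
    """Combine an energetic level with the dominant semantic label.

    Illustrative only: thresholds and class names are placeholders.
    """
    level = "quiet" if spl_dba < 45 else "moderate" if spl_dba < 60 else "loud"
    dominant = max(event_probs, key=event_probs.get)
    return f"{level} ({spl_dba:.0f} dB), dominated by {dominant}"

# Identical SPL, different perceptual character:
a = describe_condition(55.0, {"speech": 0.8, "hvac": 0.2})
b = describe_condition(55.0, {"speech": 0.1, "hvac": 0.9})
```

Here both conditions measure 55 dB, yet speech-dominated and HVAC-dominated states are kept distinct, which a purely energetic indicator could not do.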
Three-dimensional visualization within a digital twin environment was also implemented, linking acoustic indicators to spatial locations and enabling exploratory spatial reasoning. Cloud-based storage and dashboard integration further confirm functional interoperability between sensing, processing, and digital twin layers.
However, comfort classification models, validated KPIs, and physically accurate sound propagation visualization remain under development.
Expected Results
The expected outcomes include validated acoustic comfort KPIs derived from the integration of psychoacoustic indicators, sound event detection outputs, and temporal analysis. The research anticipates establishing statistically supported relationships between objective acoustic features and participant-reported comfort levels through scenario-based experiments.
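A comfort KPI of this kind might take a form like the sketch below. The functional form, weights, and reference levels are entirely hypothetical; in the research they would be fitted against participant-reported comfort rather than fixed by hand:

```python
def comfort_kpi(spl_dba, event_probs, annoyance_weights,
                spl_ref=35.0, spl_max=75.0):
    """Hypothetical comfort KPI in [0, 1] (1 = most comfortable).

    All constants and weights are placeholders, not validated values.
    """
    # Normalize SPL into a [0, 1] loudness penalty.
    loudness = min(max((spl_dba - spl_ref) / (spl_max - spl_ref), 0.0), 1.0)
    # Expected annoyance of the current semantic content.
    annoyance = sum(p * annoyance_weights.get(c, 0.5)
                    for c, p in event_probs.items())
    return 1.0 - 0.5 * loudness - 0.5 * annoyance

score = comfort_kpi(
    50.0,
    {"speech": 0.7, "hvac": 0.3},
    annoyance_weights={"speech": 0.9, "hvac": 0.2},
)
```

The point of the sketch is the structure, not the numbers: an energetic term and a semantic term are combined into one interpretable scalar, with the weighting left open for statistical validation.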
Future work aims to deliver improved sound event detection performance through domain-specific datasets, refined localization methods, and personalized comfort modeling.
Ultimately, the research expects to produce a real-time, privacy-conscious acoustic comfort assessment system embedded within a digital twin, capable of supporting spatially informed, data-driven decision-making in indoor environments.
Academic Contributions
This research proposes an integrated framework combining real-time acoustic sensing, psychoacoustic indicators, sound event detection, and digital twin visualization.
It advances acoustic comfort assessment by linking energetic, perceptual, and semantic dimensions within a unified methodology.
The work also contributes synchronization methods, privacy-conscious processing pipelines, and the foundation for human-interpretable acoustic comfort KPIs.
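The privacy-conscious principle can be sketched as a frame-level routine in which only derived, non-reversible indicators leave the device and the raw waveform is never stored or transmitted. The frame handling and the calibration offset below are assumptions, not the pipeline's actual implementation:

```python
import numpy as np

CAL_OFFSET_DB = 94.0  # placeholder microphone calibration offset (assumed)

def process_frame(raw_audio: np.ndarray, sample_rate: int) -> dict:
    """Privacy-conscious sketch: compute SPL on-device, discard the audio.

    Only derived indicators are returned; the waveform never leaves scope.
    """
    rms = np.sqrt(np.mean(raw_audio ** 2))
    spl = 20 * np.log10(max(rms, 1e-12)) + CAL_OFFSET_DB
    # A sound event classifier would also run here on-device; the raw
    # signal itself is never part of the returned record.
    return {"spl_db": float(spl), "duration_s": len(raw_audio) / sample_rate}
```

Because speech content cannot be reconstructed from frame-level SPL values and class probabilities, transmitting only these derived quantities is what makes the pipeline privacy-conscious by construction.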
Planned dataset creation and model refinement further support research reproducibility and future studies.
Industry Contributions
For the industry, the system enables real-time, spatially contextualized acoustic monitoring integrated into digital twins.
It supports data-driven decision-making for workspace design, facility management, and indoor environmental quality optimization.
Expected Contributions
Expected outcomes include validated acoustic comfort KPIs, personalized comfort modeling, improved sound event detection models, and deployable digital twin-based tools for adaptive, human-centered indoor acoustic management.