Zing Forum


Innovative Application of Multimodal Explainable AI Framework in Real Estate Valuation

This project, developed by Jessie Calix, a student at IE University, addresses two problems with traditional automated valuation models (AVMs): reliance on structured data alone and a lack of interpretability. It combines real estate images with tabular data and uses SHAP and Grad-CAM to explain the model's predictions.

Tags: Multimodal AI · Explainable AI · Real Estate Valuation · SHAP · Grad-CAM · ResNet-50 · XAI · AVM · Swiss Real Estate Dataset
Published 2026-04-10 20:33 · Recent activity 2026-04-10 20:56 · Estimated read 6 min

Section 01

Introduction to the Innovative Application of Multimodal Explainable AI Framework in Real Estate Valuation

This project, developed by Jessie Calix, a student at IE University, proposes a multimodal explainable AI framework to address two issues with traditional Automated Valuation Models (AVMs): their reliance on structured data alone and their lack of interpretability. The framework combines real estate images with tabular data and uses SHAP and Grad-CAM to explain its predictions. A key finding: visual features contribute an average of 54.2% of feature importance in individual predictions.


Section 02

Research Background: Limitations of Traditional Real Estate Valuation Models

Traditional AVMs have two major limitations: first, they rely only on structured data (such as area, number of rooms) and ignore visual information like decoration and lighting; second, as "black-box" models, they cannot explain prediction logic, lacking the transparency and credibility required in commercial scenarios. Jessie's graduation project was developed to address these two core issues.


Section 03

Dataset and Research Design

The study uses the Swiss Real Estate Dataset, which contains 11,105 rental property entries. Each entry is accompanied by property images and detailed tabular data. Tabular data provides structured features (number of rooms, area, location, etc.), while images contain visual information such as decoration level and lighting conditions that are difficult to capture with structured data, providing an ideal foundation for multimodal learning.
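To make the data layout concrete, here is a minimal sketch of the kind of rows such a dataset contains. The column names and values below are illustrative assumptions, not the Swiss Real Estate Dataset's actual schema:

```python
import pandas as pd

# Hypothetical rows: tabular features plus a path to the listing photo.
# Column names here are assumptions, not the dataset's real schema.
listings = pd.DataFrame(
    {
        "rooms": [3.5, 2.0, 4.5],
        "area_m2": [82, 54, 110],
        "location": ["Zurich", "Geneva", "Bern"],
        "image_path": ["img/0001.jpg", "img/0002.jpg", "img/0003.jpg"],
        "rent_chf": [2450, 1780, 2990],
    }
)

# Tabular columns feed the structured branch of the model; image_path
# points to the photo consumed by the visual (ResNet-50) branch.
tabular_cols = ["rooms", "area_m2", "location"]
print(listings[tabular_cols + ["rent_chf"]])
```

The split matters because the two column groups flow through entirely separate feature pipelines before fusion.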


Section 04

Technical Architecture: Multimodal Fusion and Interpretability Design

The technical architecture consists of three parts:

  1. Visual feature extraction: Use ResNet-50 to extract image features, reduce dimensionality via PCA, then fuse with tabular data;
  2. Multimodal fusion: Compare models using only tabular data, only images, and fused data to verify the advantages of the multimodal approach;
  3. Interpretability mechanism: SHAP (global/local feature contribution explanation) and Grad-CAM (image attention area visualization) complement each other to enhance model transparency.
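Steps 1 and 2 can be sketched as follows. This is a minimal stand-in, assuming 2048-dimensional ResNet-50 embeddings and an arbitrary PCA component count (32 is my assumption, not the project's actual choice); in the real pipeline the embeddings would come from a pretrained torchvision ResNet-50 with its classification head removed:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for ResNet-50 penultimate-layer embeddings: one 2048-dim
# vector per property image (random values used here for illustration).
n_properties = 200
image_embeddings = rng.normal(size=(n_properties, 2048))

# Step 1: reduce the visual features with PCA.
pca = PCA(n_components=32)
visual_features = pca.fit_transform(image_embeddings)

# Step 2: fuse with the structured features by simple concatenation,
# giving one joint feature matrix for a downstream regressor.
tabular_features = rng.normal(size=(n_properties, 10))  # rooms, area, ...
fused = np.hstack([tabular_features, visual_features])

print(fused.shape)  # (200, 42)
```

Concatenation after PCA keeps the fused matrix small enough for tree-based regressors and, importantly, keeps each column attributable to one modality, which is what makes the SHAP analysis in step 3 possible.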

Section 05

Experimental Results and Key Findings

Experimental results show:

  • The image-only model performs worst (RMSE=514 CHF, R²=0.16), the tabular-only model performs best (RMSE=267 CHF, R²=0.774), and the multimodal model performs close to the tabular-only model while adding an interpretability dimension;
  • Visual features contribute an average of 54.2% in individual predictions;
  • In the "identical twins" case, properties with the same structure have valuation differences due to visual factors (decoration, landscape), verifying the importance of visual information.
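A per-prediction "visual share" like the reported 54.2% can be derived by splitting SHAP values by modality. The sketch below uses made-up values; in the project they would come from a SHAP explainer run on the fused model, with the column split matching the fusion layout (tabular columns first, PCA visual components after):

```python
import numpy as np

# Made-up SHAP values for 5 predictions over 42 fused features.
# Columns 0-9: tabular features; columns 10-41: PCA visual components.
rng = np.random.default_rng(1)
shap_values = rng.normal(size=(5, 42))

visual_cols = slice(10, 42)

# Share of total absolute SHAP mass attributed to visual features,
# computed separately for each prediction.
abs_shap = np.abs(shap_values)
visual_share = abs_shap[:, visual_cols].sum(axis=1) / abs_shap.sum(axis=1)

print(visual_share.round(3))  # one value in [0, 1] per prediction
```

Averaging these per-prediction shares over a test set is one way to arrive at an aggregate figure such as "visual features contribute 54.2% on average".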

Section 06

Project Structure and Implementation Process

The project is organized using Jupyter Notebooks and runs in the following order:

  • 01_eda.ipynb: Exploratory Data Analysis;
  • 02_visual_feature_extraction.ipynb: ResNet-50 feature extraction + PCA;
  • 03_model_training.ipynb: Model training and evaluation;
  • 04_shap_explainability.ipynb: SHAP attribution analysis;
  • 05_identical_twins.ipynb: Case study + Grad-CAM.

GPU acceleration is recommended for the feature-extraction step.
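The Grad-CAM step in the final notebook can be sketched in a few lines. This is a simplified, framework-free version assuming the activations and gradients of a convolutional layer have already been captured (in practice via forward/backward hooks on the trained ResNet-50); the tensor shapes are illustrative:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM: weight each channel's activation map by the
    spatially averaged gradient of the output w.r.t. that channel,
    sum over channels, and keep only positive evidence (ReLU)."""
    # activations, gradients: (channels, height, width)
    weights = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum of maps
    cam = np.maximum(cam, 0)                          # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam

# Toy tensors standing in for a conv layer's activations and gradients;
# real values would come from hooks on the trained model.
rng = np.random.default_rng(2)
acts = rng.random((8, 7, 7))
grads = rng.normal(size=(8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

Upsampled to the input image's resolution, this heatmap highlights the regions (e.g. finishes, windows, views) that drove the price prediction, which is what the "identical twins" case study visualizes.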

Section 07

Research Significance and Application Prospects

  • Academic contributions: provides empirical support for applying multimodal learning to real estate valuation, and offers a methodological reference for the commercial application of explainable AI;
  • Practical value: improves valuation accuracy, enhances decision transparency, supports manual review, and builds customer trust;
  • Extended applications: the approach can extend to other multimodal valuation scenarios such as used car valuation, art appraisal, and insurance pricing.


Section 08

Conclusion: Exploration in the Era of AI Transparency

Although this project is small in scale, it touches on core issues of AI applications: multimodal learning and interpretability. While pursuing model performance, it achieves transparency in the decision-making process, providing an accurate and credible solution for real estate valuation. This concept of balancing performance and interpretability is worth referencing in a wider range of AI applications.