SciCore-Mol: A Plug-and-Play Architecture for Endowing Large Language Models with Molecular Cognitive Capabilities

The Tsinghua team proposes the SciCore-Mol framework, which gives large language models (LLMs) professional molecular understanding and generation capabilities through three plug-and-play modules (a GVP encoder, a diffusion generator, and a reaction Transformer), allowing them to handle chemistry tasks at a specialist level while preserving their general capabilities.

Tags: SciCore-Mol, molecular cognition, LLM enhancement, GVP encoder, diffusion models, chemistry AI, drug discovery, OpenBMB, Tsinghua University
Published 2026-05-07 13:40 | Recent activity 2026-05-07 13:50 | Estimated read: 6 min

Section 01

【Main Post/Introduction】SciCore-Mol: A Plug-and-Play Architecture for Endowing LLMs with Molecular Cognitive Capabilities

The OpenBMB team at Tsinghua University proposes the SciCore-Mol framework. Through three plug-and-play modules (a GVP encoder, a diffusion generator, and a reaction Transformer), large language models (LLMs) can acquire professional molecular understanding and generation capabilities while retaining their general abilities. The framework addresses the information loss and semantic noise that arise when LLMs process molecular structures as plain text, enabling specialized handling of chemistry tasks.


Section 02

Background: Core Challenges of LLMs in Processing Molecular Structures

Large language models (LLMs) excel at natural language processing, but they face fundamental limitations with topological and geometric data such as molecular structures: molecules carry 3D spatial information, whereas LLMs operate on discrete symbol sequences. Converting molecules into linear text formats such as SMILES causes information loss and semantic noise, which severely degrades reasoning accuracy in fields like drug discovery and materials science. The short sketch below illustrates this round-trip loss.
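As a concrete illustration (a minimal sketch assuming RDKit, which is not part of SciCore-Mol): once a molecule with a 3D conformer is serialized to SMILES and parsed back, the geometry is gone, so any spatial reasoning has to start from scratch.

```python
# Minimal sketch (assumes RDKit is installed; not part of SciCore-Mol itself).
from rdkit import Chem
from rdkit.Chem import AllChem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin
mol = Chem.AddHs(mol)
AllChem.EmbedMolecule(mol, randomSeed=42)            # attach a 3D conformer
print(mol.GetConformer().GetAtomPosition(0).x)       # real 3D coordinates exist here

smiles = Chem.MolToSmiles(Chem.RemoveHs(mol))        # linearize to SMILES text
roundtrip = Chem.MolFromSmiles(smiles)
print(roundtrip.GetNumConformers())                  # 0 -- the geometry is lost
```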


Section 03

Core Architecture: Three Plug-and-Play Professional Modules

The three-module design of SciCore-Mol (a structural sketch in code follows the list):

  1. GVP (Geometric Vector Perceptron) Encoder: uses a geometric graph neural network to capture the 3D spatial information of molecules (bond lengths, bond angles, atomic positions) and preserve topological and geometric features;
  2. Diffusion Generator (LDMol): uses a latent diffusion model to generate new molecules that satisfy given conditions (target properties, structural constraints);
  3. Reaction Transformer: a numerically sensitive module trained on large-scale reaction data to predict products, yields, and reaction conditions.
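To make the plug-and-play idea concrete, here is a hypothetical sketch of a common wrapper that each specialist module could sit behind: a backbone produces module-specific features, and an MLP adapter projects them into the LLM's hidden dimension. All class and parameter names are illustrative, not SciCore-Mol's actual API.

```python
import torch
import torch.nn as nn

class MolecularModule(nn.Module):
    """Common plug-and-play wrapper: a specialist backbone plus an MLP adapter
    that projects its features into the LLM's hidden space."""
    def __init__(self, backbone: nn.Module, module_dim: int, llm_hidden_dim: int):
        super().__init__()
        self.backbone = backbone
        self.adapter = nn.Sequential(
            nn.Linear(module_dim, llm_hidden_dim),
            nn.GELU(),
            nn.Linear(llm_hidden_dim, llm_hidden_dim),
        )

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.backbone(inputs))    # (..., llm_hidden_dim)

# Stand-in backbones; the real system would plug in a GVP-GNN, a latent
# diffusion model (LDMol), and a reaction Transformer here.
gvp_stub = nn.Linear(64, 256)
reaction_stub = nn.Linear(128, 256)

modules = {
    "gvp_encoder": MolecularModule(gvp_stub, module_dim=256, llm_hidden_dim=4096),
    "reaction_transformer": MolecularModule(reaction_stub, module_dim=256, llm_hidden_dim=4096),
}

mol_tokens = modules["gvp_encoder"](torch.randn(8, 64))   # 8 atoms -> 8 "mol tokens"
print(mol_tokens.shape)                                    # torch.Size([8, 4096])
```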

Section 04

Training Mechanism: Two-Stage Deep Alignment and Fusion

SciCore-Mol adopts a two-stage training approach to ensure that the specialist modules and the LLM fuse properly:

  • Independent Pre-training: the GVP encoder aligns molecular embeddings to the LLM's hidden space via an MLP adapter; the reaction Transformer is trained on reaction data;
  • Joint Fine-tuning: the LLM invokes modules through the <mol> token, and module outputs are fused with LLM representations at the hidden layer so that general reasoning capabilities are retained (a sketch of this fusion step follows the list). Modules are plug-and-play on demand and can be extended with new functions.
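A hypothetical sketch of the hidden-layer fusion step, under the assumption that <mol> is a reserved placeholder token: wherever the prompt contains it, the token's embedding is replaced by the adapter-projected molecular features before the transformer layers run. The function name, token id, and replace-in-place fusion rule are illustrative assumptions, not the paper's exact mechanism.

```python
import torch

def fuse_mol_features(token_embeds: torch.Tensor,
                      input_ids: torch.Tensor,
                      mol_features: torch.Tensor,
                      mol_token_id: int) -> torch.Tensor:
    """token_embeds: (seq_len, hidden); mol_features: (num_mol_slots, hidden)."""
    fused = token_embeds.clone()
    positions = (input_ids == mol_token_id).nonzero(as_tuple=True)[0]
    assert positions.numel() == mol_features.size(0), "one feature per <mol> slot"
    fused[positions] = mol_features        # swap in the molecular representations
    return fused

# Toy usage: a 6-token prompt with two <mol> placeholders (token id is made up).
hidden = 4096
token_embeds = torch.randn(6, hidden)
input_ids = torch.tensor([1, 151645, 7, 8, 151645, 2])
mol_features = torch.randn(2, hidden)
print(fuse_mol_features(token_embeds, input_ids, mol_features, 151645).shape)
# torch.Size([6, 4096])
```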

Section 05

Experimental Evidence: Significant Improvement in Chemical Task Performance

SciCore-Mol's effectiveness has been verified on multiple benchmarks:

  • ChemBench: outperforms baselines on tasks such as molecular property prediction and reaction product prediction;
  • SMolInstruct: demonstrates the required molecular instruction-following ability;
  • MMLU subset: general knowledge performance is preserved;
  • ADMET: accurately predicts drug metabolism and toxicity.

Together, these results demonstrate a balance between specialist and general capabilities.

Section 06

Technical Implementation: Open Source and Engineering Details

Built on open-source LLMs such as Qwen3-8B, SciCore-Mol supports DeepSpeed ZeRO-3 distributed training. It ships complete scripts for automated training, multi-benchmark evaluation, and module splitting. The environment requires Python 3.10 and CUDA 12.1, with 8x A800/A100 GPUs recommended. Dependencies are managed with uv, and GVP-GNN and FlashAttention are supported. A sketch of a ZeRO-3 launch configuration follows.
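For orientation, here is a minimal sketch of what a DeepSpeed ZeRO-3 configuration for this kind of multi-GPU fine-tuning typically looks like; the file name, script name, and specific values are assumptions, not the repository's actual settings.

```python
# Hypothetical ZeRO-3 config sketch; values and file/script names are illustrative.
# A run might be launched with something like:
#   deepspeed --num_gpus 8 train.py --deepspeed ds_zero3.json
import json

ds_zero3 = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,
    "bf16": {"enabled": True},                 # bfloat16 mixed precision
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": 3,                            # shard params, gradients, optimizer states
        "overlap_comm": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
}

with open("ds_zero3.json", "w") as f:
    json.dump(ds_zero3, f, indent=2)
```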


Section 07

Application Prospects: Expansion Directions for Scientific AI

SciCore-Mol can be extended to protein structure prediction (by integrating AlphaFold-like modules), crystallography analysis, materials simulation, and other fields. In the pharmaceutical industry, researchers can use natural language interaction to have the model generate molecules that meet target properties, advancing the drug discovery paradigm.


Section 08

Summary: An Innovative "General + Professional" Paradigm

Through its plug-and-play architecture, SciCore-Mol resolves the representational mismatch LLMs face when processing molecular data, retaining the advantages of general reasoning while adding professional molecular capabilities. This "general foundation + professional modules" design offers an important reference point for the development of scientific AI.