Section 01
[Introduction] PiscesL1: Key Highlights of the Multimodal MoE Large Model That Runs on a Single RTX 4090 GPU
PiscesL1 is the first release in the PiscesLx series developed by the Dunimd team. Built on the Yv architecture, it supports understanding across six modalities: text, images, audio, video, documents, and agents. By combining a Mixture-of-Experts (MoE) architecture with hardware-level optimizations, it can run locally on a single RTX 4090 GPU. The model is also open-source, giving researchers and developers a low-cost way to explore multimodal AI and agent technologies.
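To make the single-GPU claim concrete, the sketch below illustrates the core idea behind MoE inference: a gating network routes each token to only its top-k experts, so just a fraction of the total parameters is active per forward pass, which is what keeps memory traffic and compute within reach of a consumer GPU. This is a minimal, self-contained NumPy illustration of generic top-k MoE routing, not PiscesL1's actual implementation; all sizes and names here are made up for the example.

```python
# Illustrative top-k Mixture-of-Experts routing (NOT PiscesL1's real code).
# Only TOP_K of NUM_EXPERTS experts run per token, so active parameters
# per step are a fraction of the total -- the property that makes large
# MoE models feasible on a single consumer GPU.
import numpy as np

rng = np.random.default_rng(0)

D, H = 16, 32              # toy hidden size and expert FFN size
NUM_EXPERTS, TOP_K = 8, 2  # 8 experts total, 2 active per token

# Each expert is a tiny 2-layer MLP: D -> H -> D.
experts = [(rng.standard_normal((D, H)) * 0.1,
            rng.standard_normal((H, D)) * 0.1) for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((D, NUM_EXPERTS)) * 0.1  # gating network weights

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ gate_w                             # (tokens, NUM_EXPERTS)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Renormalize gate scores over the chosen experts only.
        weights = softmax(logits[t, topk[t]])
        for w, e in zip(weights, topk[t]):
            w1, w2 = experts[e]
            out[t] += w * (np.maximum(x[t] @ w1, 0) @ w2)  # ReLU MLP expert
    return out

tokens = rng.standard_normal((4, D))
y = moe_layer(tokens)
print(y.shape)  # each of the 4 tokens used only 2 of the 8 experts
```

In a production MoE model the same routing idea applies per layer, usually with a load-balancing loss at training time so tokens spread evenly across experts; the sketch omits that since it only demonstrates the sparse-activation mechanism.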