Alibaba DAMO Academy - Multimodal - OFA

Published 2023-04-17 09:31:07 · Author: 宋岳庭

OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
https://arxiv.org/pdf/2202.03052.pdf
Alibaba DAMO Academy
https://github.com/OFA-Sys/OFA

Wang P., Yang A., Men R., et al. OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework[J]. 2022.
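
The core idea of the paper is that heterogeneous vision-and-language tasks can all be cast as instruction-conditioned sequence-to-sequence generation: each task is distinguished only by a handcrafted instruction, and bounding-box coordinates are quantized into discrete location tokens so that even region outputs become plain text. The sketch below illustrates this task unification; the instruction wordings follow the paper, but the helper names (`quantize_box`, `make_example`) and the bin count are illustrative assumptions, not the OFA-Sys repository's actual API.

```python
# Illustrative sketch of OFA-style task unification (not the official OFA-Sys API).
# Every task becomes a (source, target) pair: the source combines an image with a
# natural-language instruction, and the target is always a token sequence, with
# bounding boxes expressed as discrete <bin_i> location tokens.

NUM_BINS = 1000  # assumption: coordinates quantized into 1000 bins


def quantize_box(box, image_w, image_h, num_bins=NUM_BINS):
    """Map (x0, y0, x1, y1) pixel coordinates to discrete <bin_i> location tokens."""
    x0, y0, x1, y1 = box
    scale = lambda v, size: min(num_bins - 1, int(v / size * num_bins))
    return " ".join(
        f"<bin_{scale(v, s)}>"
        for v, s in [(x0, image_w), (y0, image_h), (x1, image_w), (y1, image_h)]
    )


def make_example(task, image, **kw):
    """Build the unified (source, target) text pair for a handful of tasks."""
    if task == "caption":
        return (image, "what does the image describe?"), kw["caption"]
    if task == "vqa":
        return (image, kw["question"]), kw["answer"]
    if task == "grounding":  # visual grounding: phrase -> region tokens
        src = (image, f'which region does the text "{kw["phrase"]}" describe?')
        return src, quantize_box(kw["box"], kw["image_w"], kw["image_h"])
    raise ValueError(f"unknown task: {task}")


if __name__ == "__main__":
    src, tgt = make_example(
        "grounding",
        image="dog.jpg",
        phrase="the brown dog",
        box=(48, 32, 320, 480),
        image_w=640,
        image_h=480,
    )
    print(src)  # ('dog.jpg', 'which region does the text "the brown dog" describe?')
    print(tgt)  # '<bin_75> <bin_66> <bin_500> <bin_999>'
```

Because every task shares this text-in/text-out interface, a single encoder-decoder Transformer can be pretrained and finetuned on all of them without task-specific heads.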

OFA belongs to the M6 line of work:
[18] Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, et al. M6: A Chinese multimodal pretrainer. arXiv preprint arXiv:2103.00823, 2021.
[19] Zhu Zhang, Jianxin Ma, Chang Zhou, Rui Men, Zhikang Li, Ming Ding, Jie Tang, Jingren Zhou, and Hongxia Yang. M6-UFC: Unifying multi-modal controls for conditional image synthesis. arXiv preprint arXiv:2105.14211, 2021.
[20] An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. Exploring sparse expert models and beyond. arXiv preprint arXiv:2105.15082, 2021.
[21] Junyang Lin, An Yang, Jinze Bai, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Yong Li, Wei Lin, et al. M6-10T: A sharing-delinking paradigm for efficient multi-trillion parameter pretraining. arXiv preprint arXiv:2110.03888, 2021.

https://m6.aliyun.com/#/

A multimodal AI model spanning natural language and images

Example application: recommendation-reason generation

M6 is the largest cross-modal pretrained model in the Chinese-language community, with more than ten trillion parameters and strong multimodal representation capability. By processing information from different modalities in a unified way and distilling it into knowledge representations, M6 provides intelligent services such as language understanding, image processing, and knowledge representation for a wide range of industry scenarios.