
Name: 李明勇 (Mingyong Li)
Degree: Ph.D.
Title: Professor
Supervisor qualification: Master's student supervisor
Email: limingyong AT xhfyl.net
Research interests:
1. Deep-learning-based processing of multimodal, high-dimensional big data;
2. Machine learning and its applications;
3. Smart education and its applications.
Admissions (applicants are welcome; students interested in pursuing research are encouraged to join!)
Academic master's: Software Engineering, Computer Science and Technology
Professional master's: Electronic Information (Computer Technology, Artificial Intelligence)
Biography:
He received his bachelor's degree from Central China Normal University and his Ph.D. from Donghua University. He is an IEEE member, an ACM member, a member of the China Computer Federation (CCF), a member of the Chongqing Artificial Intelligence Society and of its Smart Education Technical Committee, secretary-general of the E-Commerce Committee of the Chongqing Higher Education Association, and a review expert for the Chongqing Natural Science Foundation. Students under his supervision have won 10 national competition awards, including national first prizes in the 三创赛 ("Innovation, Creativity and Entrepreneurship" Challenge) and the China Robotics and Artificial Intelligence Competition. His research focuses on deep-learning-based cross-modal multimedia understanding and on smart education. He has led 14 national and provincial/ministerial research projects, including 3 key projects; published 37 SCI-indexed or CCF-listed papers as first or corresponding author, including 12 in CAS Zone 1/2 journals and 7 in CCF-B venues; holds 5 granted invention patents and more than 30 software copyrights; and has served as chief editor of 2 textbooks. He received the Third Prize of the 7th Chongqing Educational Science Outstanding Achievement Award (ranked first).
PC member for CCF conferences (ICONIP 2019, ICIC 2023);
Reviewer for multiple SCI journals: T-ITS, IPM, ESWA, EAAI, KBS, TCSVT, Neural Computing & Applications, Applied Intelligence, etc.
Reviewer for multiple CCF conferences:
CCF-A: ACM MM;
CCF-B: ICMR, ICME, ICASSP, etc.;
CCF-C: ICTAI, ICONIP, ICIC, ICANN, IJCNN, MMM, MM Asia, APWeb, KSEM, etc.
Among his graduate students, 3 have received excellent master's thesis awards and 3 have been admitted to Ph.D. programs at well-known universities (National University of Defense Technology, Xi'an Jiaotong University, Chongqing University, etc.).
Main courses taught:
Web Front-End Development, HTML5 Programming, Mobile Development, Fundamentals of Computer Applications, Artificial Intelligence and Applications, Smart Classroom Teaching, Educational Big Data and Learning Analytics, etc.
Research projects:
[1] 2022, Chongqing Natural Science Foundation General Project (CSTB2022NSCQ-MSX1417), 2022-2025, principal investigator, completed;
[2] 2022, Chongqing Municipal Education Commission Science and Technology Key Project (KJZD-K202200513), 2022-2026, principal investigator, ongoing;
[3] 2023, Chongqing Social Science Planning Project (2023BS085), 2023-2026, principal investigator, ongoing;
[4] 2019, Chongqing Municipal Education Commission Science and Technology Project (KJQN201900520), 2019-2022, principal investigator, completed;
[5] 2020, Central Universities Doctoral Innovation Fund (CUSF-DH-D-2020092), 2020-2022, principal investigator, completed;
[6] 2019, Chongqing Municipal Education Commission Humanities and Social Sciences Research Project (19SKGH035), 2019-2021, principal investigator, completed;
[7] 2022, Chongqing Municipal Education Commission Humanities and Social Sciences Research Project (22SKGH100), 2022-2024, principal investigator, completed;
[8] 2021, Chongqing Normal University Doctoral Start-up Fund, 2021-2023, principal investigator, completed;
[9] 2021, Chongqing Educational Science Planning Project (2021-GX-320), 2021-2024, principal investigator, completed;
[10] Participant, National Natural Science Foundation of China (61370205), completed;
[11] 2018, Ministry of Education Humanities and Social Sciences Project (18YJA880061), key participant (ranked 3/7), completed;
[12] 2016, Chongqing Municipal Education Commission Science and Technology Project (KJ1600332), 2016-2019, principal investigator, completed;
[13] 2017, Chongqing Educational Science Planning Key Project (2017-GX-116), 2017-2020, principal investigator, completed;
[14] 2019, Chongqing Graduate Education Reform Project (yjg193093), 2019-2022, principal investigator, completed.
Research papers (* indicates corresponding author)
Journal papers (past five years)
[1] Mingyong Li (1/4), et al. Semantic-guided autoencoder adversarial hashing for large-scale cross-modal retrieval. Complex & Intelligent Systems, 8: 1603–1617, 2022. (SCI, CAS Zone 2)
[2] Mingyong Li, Longfei Ma, Yewen Li, Mingyuan Ge, et al. CCAH: A CLIP-Based Cycle Alignment Hashing Method for Unsupervised Vision-Text Retrieval. International Journal of Intelligent Systems, 2023. (SCI, CAS Zone 2 Top)
[3] Mingyong Li, Yewen Li, Mingyuan Ge, Longfei Ma. CLIP-based fusion-modal reconstructing hashing for large-scale unsupervised cross-modal retrieval. International Journal of Multimedia Information Retrieval, 2023. (SCI; recommended journal of the CCF-B conference ICMR)
[4] Mingyong Li, Mingyuan Ge. Enhanced-Similarity Attention Fusion for Unsupervised Cross-Modal Hashing Retrieval. Data Science and Engineering, 2025. (CAS Zone 2, CCF-C)
[5] Mingyong Li, Jiabao Fan, Ziyong Lin. Non-Co-Occurrence Enhanced Multi-Label Cross-Modal Hashing Retrieval Based on Graph Convolutional Network. IEEE Access, 11: 16310–16322, 2023. (SCI, CAS Zone 3)
[6] Mingyong Li (1/6), et al. An Analysis Model of Learners' Online Learning Status Based on Deep Neural Network and Multi-Dimensional Information Fusion. CMES-Computer Modeling in Engineering & Sciences, 2022. (SCI)
[7] Mingyong Li*, et al. FL-MGVN: Federated learning for anomaly detection using mixed gaussian variational self-encoding network. Information Processing & Management, 2022. (SCI, CAS Zone 1 Top, CCF-B)
[8] Mingyong Li (co-first author), et al. Federated Multidomain Learning With Graph Ensemble Autoencoder GMM for Emotion Recognition. IEEE Transactions on Intelligent Transportation Systems, 2020. (SCI, CAS Zone 1 Top, CCF-B)
[9] Mingyong Li (1/5), et al. Research on fault diagnosis of time-domain vibration signal based on convolutional neural networks. Systems Science & Control Engineering, 7: 73–81, 2019. (EI journal)
[10] Qiqi Li, Longfei Ma, Zheng Jiang, Mingyong Li*, et al. TECMH: Transformer-based Cross-modal Hashing for Fine-grained Image-text Retrieval. CMC-Computers, Materials & Continua, 2023. (SCI, CAS Zone 3)
[11] Mingyong Li, Zheng Jiang, Zongwei Zhao, Longfei Ma. A PERT-BiLSTM-Att model for network public opinion text sentiment analysis. Intelligent Automation & Soft Computing (IASC), 2023. (SCI)
[12] Honggang Zhao, Mingyue Liu, Mingyong Li*. Feature fusion and metric learning network for Zero-Shot Sketch-based Image Retrieval. Entropy, 2023. (SCI, CAS Zone 3)
[13] Jie Zhang, Ziyong Lin, Xiaolong Jiang, Mingyong Li*. Hierarchical modal interaction balance cross-modal hashing for unsupervised image-text retrieval. Multimedia Tools and Applications, 2024. (SCI, CAS Zone 3)
[14] Honggang Zhao, Guozhu Jin, Xiaolong Jiang, Mingyong Li*. SDE-RAE: CLIP-based realistic image reconstruction and editing network using stochastic differential diffusion. Image and Vision Computing, 139: 104836, 2023. ISSN 0262-8856. (SCI, CAS Zone 2)
[15] Mingyue Liu, Honggang Zhao, Longfei Ma, Mingyong Li*. Modal interaction-enhanced prompt learning by transformer decoder for vision-language models. International Journal of Multimedia Information Retrieval, 12: 19, 2023. (SCI; recommended journal of the CCF-B conference ICMR)
[16] Yewen Li, Mingyuan Ge, Mingyong Li*. CLIP-Based Adaptive Graph Attention Network for Large-Scale Unsupervised Multi-Modal Hashing Retrieval. Sensors, 23(7): 3439, 2023. (SCI, CAS Zone 2)
[17] Z. Lin, X. Jiang, J. Zhang, Mingyong Li*. Dual-matrix guided reconstruction hashing for unsupervised cross-modal retrieval. International Journal of Multimedia Information Retrieval, 14: 4, 2025. (SCI; recommended journal of the CCF-B conference ICMR)
CCF conference papers (past five years)
[1] Mingyong Li, Hongya Wang. Unsupervised Deep Cross-Modal Hashing by Knowledge Distillation for Large-scale Cross-modal Retrieval. In Proceedings of the 2021 International Conference on Multimedia Retrieval (ICMR '21), 2021, pp. 183–191. (CCF-B, flagship conference on multimedia information retrieval)
[2] Mingyong Li, Zongwei Zhao, Xiaolong Jiang, Zheng Jiang. CLIP-ProbCR: CLIP-based Probability Embedding Combination Retrieval. In Proceedings of the 2024 International Conference on Multimedia Retrieval (ICMR '24), ACM, New York, NY, USA, 2024, pp. 1104–1109. (CCF-B, flagship conference on multimedia information retrieval)
[3] Mingyuan Ge, Yewen Li, Honghao Wu, Mingyong Li*. JM-CLIP: A Joint Modal Similarity Contrastive Learning Model for Video-Text Retrieval. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 2024, pp. 3010–3014. (CCF-B, top conference in audio and signal processing)
[4] Honggang Zhao, Chunling Xiao, Jiayi Yang, Guozhu Jin, Mingyong Li*. MccSTN: Multi-Scale Contrast and Fine-Grained Feature Fusion Networks for Subject-driven Style Transfer. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, 2024, pp. 11090–11100. ELRA and ICCL. (CCF-B, top conference in natural language processing)
[5] Mingyong Li (1/5), et al. Triplet Deep Hashing with Joint Supervised Loss for Fast Image Retrieval. 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), November 4-6, 2019. (CCF-C)
[6] Longfei Ma, Honggang Zhao, Zheng Jiang, Mingyong Li*. Multi-view-enhanced modal fusion hashing for unsupervised cross-modal retrieval. In Proceedings of the 5th ACM International Conference on Multimedia in Asia (MMAsia '23), 2024, Article 51, pp. 1–7. (CCF-C)
[7] Jinyu Hu, Mingyong Li*, Jiayan Zhang. Graph Convolutional Network Semantic Enhancement Hashing for Self-supervised Cross-Modal Retrieval. ICANN 2023, vol. 14257, 2023. (CCF-C)
[8] Mingyuan Ge, Jianan Shui, Junyu Chen, Mingyong Li*. Pseudo-label Based Unsupervised Momentum Representation Learning for Multi-domain Image Retrieval. MultiMedia Modeling (MMM 2024), vol. 14556, 2024. (CCF-C)
[9] S. Ding, J. Shui, X. Li, Mingyong Li*. Dual-Attention Fusion Network with Edge and Content Guidance for Remote Sensing Images Segmentation. ICPR 2024, vol. 15330, 2025. (CCF-C)
[10] J. Zhang, Mingyong Li*. Joint Modal Heterogeneous Balance Hashing for Unsupervised Cross-Modal Retrieval. ICPR 2024, vol. 15326, 2025. (CCF-C)
[11] Z. Lin, X. Jiang, J. Zhang, Mingyong Li*. Modal Fusion-Enhanced Two-Stream Hashing Network for Cross Modal Retrieval. ICANN 2024, vol. 15021, 2024. (CCF-C)
[12] X. Shu, Mingyong Li*. Semantic Preservation and Hash Fusion Network for Unsupervised Cross-Modal Retrieval. Web and Big Data (APWeb-WAIM 2024), vol. 14965, 2024. (CCF-C)
[13] J. Shui, S. Ding, Mingyong Li*, Y. Ma. Entity Semantic Feature Fusion Network for Remote Sensing Image-Text Retrieval. Web and Big Data (APWeb-WAIM 2024), vol. 14965, 2024. (CCF-C)
[14] Wu, Mingyong Li*. Global Similarity Relocation Hashing for Unsupervised Cross-modal Retrieval. 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 2024, pp. 1–8. (CCF-C)
[15] Y. Zhu, Mingyong Li*. Multi-modal Information Multi-angle Mining for Multimedia Recommendation. MultiMedia Modeling (MMM 2025), vol. 15522, 2025. (CCF-C)
Invention patents:
[1] 李明勇; 马龙飞. A deep unsupervised cross-modal hashing retrieval method. Granted 2023-09-19, China, ZL 2022113822348 (invention patent).
[2] 李明勇; 戈明远. A GCN-based deep unsupervised cross-modal retrieval method. Granted 2023-12-12, China, ZL 2022113899797 (invention patent).
[3] 李明勇; 李业文. A deep unsupervised cross-modal retrieval method based on modal-fusion reconstruction hashing. Granted 2024-01-26, China, ZL 202211340310.9 (invention patent).
[4] 李明勇; 赵宗伟; 赵洪刚; 蒋晓龙. A CLIP-based probability embedding combination retrieval method. Granted 2024-04-30, China, ZL 202310579804.0 (invention patent).
Awards:
[1] 李明勇 (1/2). Research on online learning behavior analysis based on deep neural networks and multi-dimensional information fusion, 7th Chongqing Educational Science Research Outstanding Achievement Award, 2024.
Student training
The laboratory is well equipped, with multiple deep-learning workstations (A100, A6000, RTX 3090, RTX 3080 Ti, RTX 4060 Ti, RTX 3060, etc.). It has a supportive atmosphere and a strong tradition of mentorship: several graduate students have gone on to Ph.D. study and have earned honors including excellent master's thesis awards, national scholarships, and first-class scholarships.
Ph.D. admissions:
李业文, National University of Defense Technology;
赵洪刚, Xi'an Jiaotong University;
戈明远, Chongqing University;
Excellent master's theses:
[1] 韦情敏, university-level excellent master's thesis, 2020.
[2] 李齐齐, university-level and municipal-level excellent master's thesis, 2023.
[3] 马龙飞, university-level excellent master's thesis, 2024.
National scholarships:
[1] 安紫烨, 2020 National Scholarship and two first-class scholarships.
[2] 李齐齐, 2022 National Scholarship and a first-class scholarship.
[3] 戈明远, 李业文, 马龙飞, and 刘明月, 2023 National Scholarships.
[4] 黎想 and 张捷, 2024 National Scholarships.
First-class scholarships:
2024:
Seven graduate students of the 2023 cohort received first-class scholarships (高溢华, 丁帅棚, 舒新胜, 水迦南, 祝伊杰, 朱亲泽, 杨佳怡);
Two graduate students of the 2022 cohort received first-class scholarships (林子勇, 季煜程; two others received National Scholarships);
2023:
Six graduate students of the 2021 cohort received first-class scholarships (赵洪刚, 蒋政, 赵宗伟, 范嘉宝, 胡晋宇, 张家焱);
Five graduate students of the 2022 cohort received first-class scholarships (吴宏浩, 张捷, 林子勇, 季煜程, 黎想);
2022:
Six graduate students of the 2021 cohort received first-class scholarships (戈明远, 李业文, 赵洪刚, 范嘉宝, 蒋政, 刘明月);
Three graduate students of the 2022 cohort received first-class scholarships;
Four graduate students of the 2020 cohort received first-class scholarships (彭霜, 唐丽蓉, 谭桠, 李齐齐);
2021: Four graduate students received first-class scholarships;
2020: Two graduate students received first-class scholarships.