Environment and Resource

ISSN Print: 2707-2398
ISSN Online: 2707-2401
Building a Magnesium Alloy Knowledge Base Question-Answering System Based on LangChain + LLM

吴强, 储逸尘, 李岩开, 余大亮

Environment and Resource / 2026, 8(2): 195-203 / 2026-04-28
  • Information:
    School of Metallurgy and Power Engineering, Chongqing University of Science and Technology, Chongqing, China
  • Keywords:
    Large language models; Model fine-tuning; Machine learning
  • Abstract: Driven by national carbon-neutrality targets and the rapid development of high-end equipment manufacturing, magnesium alloys have attracted increasing attention as a promising class of lightweight structural materials due to their low density and high specific performance. However, magnesium alloy design must follow a complex process in which each step is closely interlinked, and the knowledge involved in materials R&D is both complex and highly specialized. Traditional design methods that rely on manual experience and fragmented information retrieval therefore face significant bottlenecks in R&D efficiency and knowledge reuse. To improve the scientific reliability and accuracy of large language models (LLMs) in magnesium-alloy-related question-answering and decision-support tasks, this study adopts DeepSeek-R1-Distill-Qwen-14B as the base model and systematically investigates the effectiveness of two parameter-efficient fine-tuning strategies, LoRA and P-Tuning v2, in a domain-specific question-answering setting, as well as the additional gains from combining the two. Three fine-tuning schemes (LoRA, P-Tuning v2, and their joint application) are evaluated through inference experiments repeated twenty times on the same test set, with performance assessed using four metrics: Accuracy, F1 score, ROUGE, and BLEU. The results indicate that models fine-tuned with either LoRA or P-Tuning v2 consistently outperform the unfine-tuned baseline across all evaluated metrics, while the jointly fine-tuned model achieves the best overall performance. Further analysis shows that this improvement stems primarily from the complementary effects of the two methods: LoRA applies compact, low-rank adjustments to the model's parameters, while P-Tuning v2 injects prompts across multiple layers to guide the model toward task-relevant features. Without altering the model's core parameters, the two methods optimize it from different angles simultaneously, thereby achieving superior results. Based on these findings, the jointly fine-tuned model is integrated into the LangChain framework, providing a practical model foundation and methodological reference for the development of magnesium-alloy knowledge-base question-answering systems.
  • DOI: 10.35534/er.0802026 (registering DOI)
  • Cite: Wu Qiang, Chu Yichen, Li Yankai, et al. Building a Magnesium Alloy Knowledge Base Question-Answering System Based on LangChain + LLM [J]. Environment and Resource, 2026, 8(2): 195-203.
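As background for the first of the two fine-tuning strategies compared in the abstract: LoRA keeps the pretrained weight matrix frozen and learns only a low-rank additive update. A minimal NumPy sketch of that idea (the dimensions, rank, scaling factor, and mock update step below are illustrative assumptions, not settings from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2           # rank r << min(d_out, d_in) is the LoRA bottleneck
alpha = 4                          # LoRA scaling hyperparameter

# Frozen pretrained weight: never updated during fine-tuning.
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank factors; B starts at zero so training begins exactly at W.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    """y = W x + (alpha / r) * B (A x); only A and B would receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)

# Before any updates, the adapted layer matches the base layer.
assert np.allclose(lora_forward(x), W @ x)

# A mock gradient step on B shifts the output while W stays untouched.
B += 0.1
assert not np.allclose(lora_forward(x), W @ x)
```

Because the update lives entirely in the factors A and B (2 x r x d parameters instead of d x d), joint use with a prompt-based method such as P-Tuning v2 is possible in principle: the two touch disjoint parts of the model, which is the complementarity the abstract attributes the joint scheme's gains to.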
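The evaluation uses Accuracy, F1 score, ROUGE, and BLEU. As a hedged illustration of how the text-overlap metrics among these are computed, here is a simplified unigram-level sketch (real evaluations typically rely on library implementations such as `rouge-score` or `sacrebleu`; the example strings are invented):

```python
import math
from collections import Counter

def unigram_f1(pred: str, ref: str) -> float:
    """Token-level F1: harmonic mean of unigram precision and recall."""
    p, r = pred.split(), ref.split()
    overlap = sum((Counter(p) & Counter(r)).values())  # clipped common tokens
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

def rouge_1_recall(pred: str, ref: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams recovered by the prediction."""
    p, r = pred.split(), ref.split()
    overlap = sum((Counter(p) & Counter(r)).values())
    return overlap / len(r) if r else 0.0

def bleu_1(pred: str, ref: str) -> float:
    """BLEU-1: clipped unigram precision times the brevity penalty."""
    p, r = pred.split(), ref.split()
    overlap = sum((Counter(p) & Counter(r)).values())
    precision = overlap / len(p) if p else 0.0
    bp = 1.0 if len(p) >= len(r) else math.exp(1 - len(r) / len(p))
    return precision * bp

ref = "magnesium alloys have low density and high specific strength"
pred = "magnesium alloys have low density"

print(round(unigram_f1(pred, ref), 3))     # 0.714 (precision 1.0, recall 5/9)
print(round(rouge_1_recall(pred, ref), 3)) # 0.556
print(round(bleu_1(pred, ref), 3))         # 0.449 (perfect precision, short-answer penalty)
```

The example shows why the paper reports several metrics together: a short but correct answer scores perfect precision yet is penalized on recall-oriented ROUGE and by BLEU's brevity penalty.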