OALib Journal
ISSN: 2333-9721


Research on Construction and Optimization of DeepSeek E-Commerce Model Based on RAG Local E-Commerce Knowledge Base

DOI: 10.12677/ecl.2025.1451414, pp. 1346-1359

Keywords: E-Commerce Copywriting, Smart E-Commerce, RAG Retrieval Technology, Large Language Model


Abstract:

This paper addresses the needs of e-commerce businesses in customer dialogue and copywriting by proposing an intelligent e-commerce assistant that integrates a local knowledge base built on RAG-enhanced retrieval with a lightweight, locally deployed large language model. The Nomic-embed model and accompanying data-processing methods are used to construct vector data for the local e-commerce knowledge base. By configuring sampling and retrieval parameters and pairing the knowledge base with the DeepSeek-R1 large language model, the system performs reasoning for dialogue and copywriting content. Through experiments with the Dify framework and Ollama parameters, a deployment-tuning method based on large-language-model parameters is proposed. The resulting local e-commerce assistant mitigates the slow response times, insecure data transmission, and non-specific or imprecise reasoning output common to online large-language-model tools, offering a new path to improving operational efficiency in the e-commerce industry.
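The retrieval step described in the abstract (embedding a query, scoring it against the knowledge-base vectors, and keeping only passages above a threshold) can be sketched as below. This is a minimal illustration, not the paper's implementation: the toy 3-dimensional vectors stand in for real nomic-embed output, and the function names, `top_k`, and `min_score` values are assumptions chosen for clarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, index, top_k=2, min_score=0.3):
    """Return the top_k passage keys whose embeddings score above min_score."""
    scored = sorted(
        ((cosine(query_vec, vec), doc) for doc, vec in index.items()),
        reverse=True,
    )
    return [doc for score, doc in scored[:top_k] if score >= min_score]

# Toy "embeddings" standing in for nomic-embed vectors of knowledge-base passages.
index = {
    "return-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.9, 0.1],
    "product-specs": [0.0, 0.2, 0.9],
}

# A query vector close to the return-policy passage retrieves that passage,
# which is then spliced into the prompt sent to the language model.
hits = retrieve([0.8, 0.2, 0.1], index, top_k=1)
context = "\n".join(hits)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

In the system described here, the retrieved passages would come from the local e-commerce knowledge base and the composed prompt would be forwarded to DeepSeek-R1 for generation.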
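The abstract also mentions tuning sampling parameters through Ollama. A minimal sketch of assembling a request payload for Ollama's `/api/chat` endpoint is shown below; the model tag, system prompt, and parameter values (`temperature`, `top_p`, `num_ctx`) are illustrative assumptions, not the tuned values from the paper.

```python
def build_ollama_request(model, question, context_docs,
                         temperature=0.6, top_p=0.9, num_ctx=4096):
    """Assemble an Ollama /api/chat payload: retrieved knowledge-base
    passages are placed in the system message, and sampling parameters
    go in the "options" field."""
    context = "\n\n".join(context_docs)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": ("You are an e-commerce assistant. Answer only "
                         "from the provided knowledge-base passages:\n"
                         + context)},
            {"role": "user", "content": question},
        ],
        "options": {
            "temperature": temperature,  # lower = more deterministic copy
            "top_p": top_p,              # nucleus-sampling cutoff
            "num_ctx": num_ctx,          # context window in tokens
        },
        "stream": False,
    }

req = build_ollama_request(
    "deepseek-r1:7b",
    "Write a short product blurb for wireless earbuds.",
    ["Battery life: 30 h with charging case", "Bluetooth 5.3, IPX4 rated"],
)
```

In practice this payload would be POSTed to a locally running Ollama server (by default `http://localhost:11434/api/chat`); varying the `options` values is one way to carry out the kind of parameter experiments the paper reports with Ollama and Dify.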


