Regulatory Pathways for Commercial Data Acquisition by Generative AI: The Case of ChatGPT
Abstract:
Data is widely recognized as the cornerstone of technological progress, yet generative AI systems typified by ChatGPT face significant compliance risks when acquiring commercial data. Because generative AI has no legal status of its own, developers, designers and manufacturers, providers, and users must each bear the corresponding compliance responsibilities. To allocate this responsibility reasonably, a chain-of-responsibility mechanism is proposed. On that basis, and in light of the Interim Measures for the Management of Generative Artificial Intelligence Services, more detailed restrictions and requirements should be laid down for data acquisition and for the accompanying penalty mechanisms. Fully introducing a "regulatory sandbox" regime and establishing dedicated agencies or committees are also of great significance. Taken together, these measures can standardize industry order and provide a solid guarantee for the lawful use of data, while ensuring that generative AI operates safely and compliantly as innovation advances. Confronting the challenges posed by generative AI therefore requires the continuous exploration and improvement of the relevant systems and mechanisms; only in this way can the technology develop healthily and be applied efficiently, so that the potential value of data is fully released, technological progress continues, and a win-win, fair, orderly, and sustainable technological ecosystem takes shape.
References:
[1] 刘霜, 张潇月. 生成式人工智能数据风险的法律保护与规制研究——以ChatGPT潜在数据风险为例[J]. 贵州大学学报(社会科学版), 2023, 41(5): 87-97.
[2] 毕文轩. 生成式人工智能的风险规制困境及其化解: 以ChatGPT的规制为视角[J]. 比较法研究, 2023(3): 155-172.
[3] 苏志甫. 数据要素时代商业数据保护的路径选择及规则构建[J]. 信息通信技术与政策, 2022(6): 14-26.
[4] 赵丹, 沈澄. 数据抓取不正当竞争纠纷的司法审查要素考察与反思[J]. 科技与法律(中英文), 2023(2): 52-59.
[5] 生成式人工智能服务管理暂行办法[J]. 中华人民共和国公安部公报, 2023(5): 2-5.
[6] 王大志, 张挺. 风险、困境与对策: 生成式人工智能带来的个人信息安全挑战与法律规制[J]. 昆明理工大学学报(社会科学版), 2023, 23(5): 8-17.
[7] 徐继敏. 生成式人工智能治理原则与法律策略[J]. 理论与改革, 2023(5): 72-83.
[8] 梁正, 何嘉钰. 确保网络数据安全 应对新一代人工智能治理挑战[J]. 中国信息安全, 2023(6): 22-24.
[9] 袁曾. 生成式人工智能的责任能力研究[J]. 东方法学, 2023(3): 18-33.