-
Authors:
郑雯嘉
-
Information:
Wuhan Institute of Technology, Wuhan
-
Keywords:
Algorithmic recommendation; Service provider; Duty of care
-
Abstract:
While algorithmic recommendation technology improves the efficiency of information distribution, it also gives rise to complex disputes over the attribution of copyright infringement liability. This paper focuses on the duty-of-care rules governing providers of algorithmic recommendation services and, by analyzing the conflicts between technical logic and legal rules, proposes a tiered path for determining liability. The current disputes center on whether algorithmic recommendation constitutes “active recommendation” in the legal sense, and whether technological empowerment necessarily heightens a platform’s duty of care. Judicial practice shows that the traditional “notice-and-takedown” rule and the “red flag” standard face challenges: the personalized, dynamic character of algorithmic recommendation makes the spread of infringing content both more concealed and more extensive, while platforms intervene deeply in the content distribution chain through traffic weighting and targeted pushing, and profit from doing so, in substance exceeding the boundaries within which the principle of technological neutrality applies. This paper advances the rule-of-law principle that liability should match capability and constructs a two-tier framework of basic and enhanced obligations. Basic obligations require platforms to promptly remove “obviously infringing” content, but the “red flag” standard must incorporate the dimension of “algorithmic salience”: when infringing content is distributed at scale and with precision through algorithms, and the platform benefits directly from that distribution, the platform may be presumed to have “should have known” of the infringement.
Enhanced obligations, by contrast, target high-risk scenarios such as the protection window of popular works and repeat infringement, requiring platforms to assume proactive prevention and control responsibilities, including manual review and algorithm adjustment. This tiering of liability must adapt dynamically to platform scale, work type, and recommendation intensity: large platforms, film and television content, and strongly interventionist recommendation models must fulfill higher obligations.
-
DOI:
https://doi.org/10.35534/pss.0709123
-
Cite:
郑雯嘉. Rules on the Duty of Care of Algorithmic Recommendation Service Providers [J]. Progress in Social Sciences, 2025, 7(9): 725-731.