Responsible Disclosure in the Age of Generative AI: A Normative Model for Dual-Use Risk
Fahd Malik1, Muhammad Raza ul Haq2

1Fahd Malik, Department of Digital Business, Transformation & Innovation, IE Business School, Madrid, Spain.

2Muhammad Raza ul Haq, Department of Information Technology, Zain, Riyadh, Saudi Arabia.  

Manuscript received on 03 October 2025 | Revised Manuscript received on 11 October 2025 | Manuscript Accepted on 15 October 2025 | Manuscript published on 30 October 2025 | PP: 13-20 | Volume-14 Issue-11, October 2025 | Retrieval Number: 100.1/ijitee.L115514121125 | DOI: 10.35940/ijitee.L1155.14111025

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The rapid growth of generative artificial intelligence (AI) systems such as large language models (LLMs) has created a profound disclosure dilemma: when should potentially dangerous models or findings be shared openly, withheld, or released in a controlled manner? Traditional norms of open science and open-source software emphasize transparency, reproducibility, and collective progress, yet the dual-use nature of frontier LLMs raises unprecedented challenges. Unrestricted disclosure can enable malicious use cases such as cyberattacks, automated disinformation campaigns, large-scale fraud, or even synthetic-biology misuse. Conversely, excessive secrecy risks undermining trust, slowing scientific progress, and concentrating power in a small number of actors. This paper develops a normative model for responsible disclosure that integrates utilitarian, deontological, and virtue-ethical reasoning to justify a proportional approach rather than binary openness or secrecy. We introduce a Disclosure Decision Matrix that evaluates four key dimensions: risk severity, exploitability, mitigation availability, and the public benefit of transparency. Based on this evaluation, the matrix recommends one of three courses of action: full release, staged or controlled release, or temporary restriction until safeguards mature. The contribution is twofold. First, the paper provides a principled ethical framework that links philosophical justification directly to operational disclosure practices, bridging the gap between theory and governance. Second, it translates this framework into actionable criteria that policymakers, research institutions, and developers can apply consistently across evolving AI systems. By combining ethical reasoning with practical decision tools, the findings underscore that responsible disclosure in AI is neither absolute secrecy nor unqualified openness but a dynamic, proportional strategy responsive to both technological advances and societal risks.
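To make the Disclosure Decision Matrix concrete, the sketch below shows one hypothetical way its four dimensions could be scored and mapped to the three recommended courses of action. The 0-10 scales, equal weighting, and thresholds are illustrative assumptions introduced here for exposition; they are not values specified by the paper.

```python
# Hypothetical sketch of the Disclosure Decision Matrix described in the
# abstract. The 0-10 scales, weights, and thresholds are illustrative
# assumptions, not values taken from the paper.
from dataclasses import dataclass

@dataclass
class Finding:
    risk_severity: float            # 0 (negligible) .. 10 (catastrophic)
    exploitability: float           # 0 (impractical) .. 10 (trivial to weaponize)
    mitigation_availability: float  # 0 (no safeguards) .. 10 (mature safeguards)
    public_benefit: float           # 0 (little value) .. 10 (high transparency value)

def recommend_disclosure(f: Finding) -> str:
    """Map the four dimensions to one of the three courses of action."""
    # Net hazard rises with severity and exploitability, and falls as
    # mitigations mature and the case for transparency strengthens.
    hazard = (f.risk_severity + f.exploitability) / 2
    offset = (f.mitigation_availability + f.public_benefit) / 2
    net = hazard - offset
    if net <= 0:
        return "full release"
    if net <= 3:
        return "staged or controlled release"
    return "temporary restriction until safeguards mature"

# Example: a severe, easily exploited finding with weak mitigations.
print(recommend_disclosure(Finding(9, 8, 2, 5)))
# -> "temporary restriction until safeguards mature"
```

In this reading, the matrix operates as a proportional rule: the same finding can migrate from restriction to staged release to full release as mitigations mature or the transparency case strengthens, which matches the paper's rejection of a binary openness-or-secrecy choice.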

Keywords: Generative AI; Large Language Models; Responsible Disclosure; Dual-Use Risk; Disclosure Decision Matrix
Scope of the Article: Artificial Intelligence and Methods