Transnational AI relations involve intricate interactions between competition and cooperation that cannot be adequately explained through a simplistic binary framework of “competition versus cooperation.” Introducing two key attributes of risk, “risk level” and “risk identifiability,” enables a more nuanced typology and analysis of transnational AI relations. High-risk scenarios typically involve military deployments, critical infrastructure management, and sensitive data processing, where failures can produce severe and immediate consequences; low-risk scenarios generally involve commercial applications and social services, where risks remain relatively manageable. Highly identifiable risks usually stem from technological issues, whereas risks with low identifiability emerge from complex interactions and long-term indirect consequences. Based on comparative analysis of AI risk attributes across technological categories and application contexts, transnational AI relations can be categorized into four distinct types: regulatory prevention, collaborative development, competition-confrontation, and cautious isolation. Moreover, differences in how actors perceive risk levels can produce asymmetric cooperative preferences and security practices, increasing the complexity of governance and the likelihood of friction. Illustrative cases, including military modernization competition, risk management of strategic weaponry, cooperation in green industries, competition over large language models, and governance of the energy consumption of computing power, demonstrate the explanatory power of this risk-based theoretical framework.