Stefan Pasch (Goethe University Frankfurt) has posted National Culture and AI Governance: The Cultural Origins of Differences in AI Regulation on SSRN. Here is the abstract:
Despite widespread agreement on the importance of responsible artificial intelligence (AI), countries differ markedly in how strongly AI is governed, ranging from no government action, to non-binding guidance, to fully enforceable legal regulation. Yet the sources of these differences remain poorly understood. This study examines whether and how national cultural value systems help explain cross-national variation in the institutional strength of AI governance frameworks. Using cross-national data from the Global Index on Responsible AI (GIRAI), we measure AI governance across nine core governance areas, including transparency, accountability, human oversight, and safety. We capture governance strength as the degree of legal and institutional consolidation of these measures, distinguishing between non-binding instruments (such as strategies and ethical guidelines) and enforceable regulatory obligations. We then relate this measure to national cultural value systems using Hofstede’s cultural dimensions.

Based on cross-sectional regression analyses covering 50 countries, we find that individualism and uncertainty avoidance are positively associated with more institutionalized and enforceable AI governance, while indulgence is negatively associated. Importantly, when distinguishing between different levels of AI governance strength, we find that national culture is associated only with enforceable legal regulation, whereas softer forms of AI governance, such as non-binding guidelines, show no systematic relationship with cultural values. The findings highlight the cultural foundations of enforceable AI governance and help explain why countries differ in their willingness to translate responsible AI principles into binding institutional commitments.
