Summary
In recent years, the global balance of power has ceased to be measured solely by military or economic strength. Instead, it is increasingly shaped, more subtly and profoundly, by artificial intelligence and social media. We are witnessing a qualitative shift in the structure of geopolitical power: a shift managed not from military operations rooms, but from data centres, recommendation algorithms, and AI reasoning models.
This transformation reflects the growing recognition that control over digital infrastructures and algorithmic systems has strategic implications for states, societies, and cultural identities (Zuboff, 2019; Kwet, 2020). As algorithmic systems increasingly shape information flows and collective perception, technological power becomes inseparable from political and cultural influence.
Social Media and Algorithmic Influence on Collective Consciousness
Social media platforms that initially appeared as tools for communication and expression have evolved into highly influential—and potentially dangerous—spaces if approached without critical awareness. They are no longer neutral platforms; rather, they operate as instruments of soft power that shape awareness, reorder priorities, and influence values and behaviours.
Today, these platforms affect:
- public mood and social discourse,
- value systems and behavioural patterns,
- political perception and decision-making.
More critically, these platforms are deeply intertwined with artificial intelligence systems that learn from user interactions and then reshape users’ experiences according to algorithmic logic. Scholars have described this phenomenon as algorithmic mediation, where automated systems increasingly determine what individuals see, think about, and discuss (Beer, 2017; Gillespie, 2018).
The Cultural Implications of Relying on a Single AI Ecosystem
On deeper reflection, relying on a single artificial intelligence system becomes problematic when the questions it answers extend beyond educational or professional domains into intellectual, ethical, and spiritual dimensions.
Human societies do not share identical histories, cultures, or philosophical traditions. When minds are consistently fed answers framed within a purely Anglo-Saxon intellectual context, societies are not merely consuming knowledge—they are importing a worldview, a system of values, and a model of human life that may not reflect their own cultural identities.
This concern aligns with the growing discussion around digital colonialism, where technological platforms developed in dominant economies influence knowledge production and cultural narratives globally (Kwet, 2020).
The Responsibilities of States, Institutions, and Society
The responsibility for addressing these challenges does not lie with individuals alone. It extends across multiple layers of society.
Key stakeholders and their roles
| Stakeholder | Responsibility |
| --- | --- |
| States | Supporting scientific research and building national innovation ecosystems |
| Technology companies | Investing in the development of local AI technologies and platforms |
| Families | Cultivating critical and ethical awareness among younger generations |
| Schools and universities | Integrating AI literacy and critical digital thinking into education |
The goal should not merely be to become smart users of artificial intelligence, but to become creators, theorists, and competitors in the global AI ecosystem.
Governments, in particular, must recognise that AI is no longer a technological luxury; it has become part of national security in its broadest sense. AI capabilities influence economic resilience, cultural autonomy, and sovereign decision-making (Bostrom, 2014; OECD, 2019).
Technological Sovereignty and the Chinese Example
In this context, the Chinese experience provides a notable case study. China has not limited itself to using foreign technological tools; instead, it has developed its own artificial intelligence capabilities—such as the reasoning models of DeepSeek—and its own social media platforms—such as ByteDance's Douyin, known internationally as TikTok—in alignment with its cultural, political, and developmental priorities.
Whether one agrees or disagrees with the Chinese model, it demonstrates a clear awareness of the risks of digital dependency and a deliberate effort to build technological sovereignty.
This approach illustrates how national AI ecosystems can serve as instruments of strategic autonomy and long-term geopolitical influence (Lee, 2018).
The Future Battlefield: Minds and Algorithms
The author concludes that future conflicts will not be decided solely on land or in the air. Instead, they will be determined in the realm of human cognition and the algorithms that shape it.
Those who fail to recognise this transformation risk becoming consumers of others’ visions rather than architects of their own futures. Artificial intelligence is therefore not an inevitable destiny but a strategic choice.
The real challenge lies not in learning prompt engineering alone but in designing the algorithms themselves:
- Who defines their logic?
- Who determines their boundaries?
- Who shapes their biases?
- Who decides the limits of the questions before answers are generated?
These questions highlight the deeper issue of algorithmic governance, which determines how technological systems influence societies and political decision-making (Gillespie, 2018).
Closing
This report highlights a fundamental shift in the nature of sovereignty and global competition. In the age of algorithms, technological capability is no longer merely a driver of economic development; it is a central pillar of cultural autonomy, political independence, and national security.
Societies that rely exclusively on external algorithmic systems risk importing not only technological tools but also foreign epistemologies and value frameworks. Building indigenous AI capabilities, fostering critical digital literacy, and designing culturally aware algorithmic systems are therefore strategic imperatives.
Ultimately, the future will belong not simply to those who use artificial intelligence, but to those who design its logic and shape its boundaries.
References
- Beer, D. (2017) ‘The social power of algorithms’, Information, Communication & Society, 20(1), pp. 1–13. https://doi.org/10.1080/1369118X.2016.1216147
- Bostrom, N. (2014) Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press.
- Gillespie, T. (2018) Custodians of the internet: platforms, content moderation, and the hidden decisions that shape social media. New Haven: Yale University Press.
- Kwet, M. (2020) ‘Digital colonialism: US empire and the new imperialism in the global south’, Race & Class, 60(4), pp. 3–26. https://doi.org/10.1177/0306396818823172
- Lee, K.-F. (2018) AI superpowers: China, Silicon Valley, and the new world order. Boston: Houghton Mifflin Harcourt.
- OECD (2019) OECD principles on artificial intelligence. Available at: https://oecd.ai/en/ai-principles (Accessed: 15 March 2026).
- Zuboff, S. (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power. New York: PublicAffairs.