December 2, 2025

How AI is reshaping social structures

From Human Networks to Human–Machine Networks: How AI is Reshaping Social Structures

As a sociologist based in a computer science department, I study how technologies intersect with the social world: how they redistribute power, reshape relationships, and reconfigure institutions. My approach is less about the technical details of a technology and more about understanding the social patterns and networks that emerge when something like artificial intelligence (AI) becomes embedded in everyday life. In my research I seek to understand these effects through computational social science methods, such as social network analysis.

We often think of AI in terms of innovation or automation. But AI is not only a tool - it is a structural force that is remodelling how people connect, how organisations function, and how inequalities evolve.

How is AI changing social relations?

One way to think about AI as a structural force is through the lens of social networks. Classic sociology has shown us that opportunities often travel through weak ties rather than close-knit groups (Granovetter, 1973). In the digital age, AI-driven recommendation systems, from job platforms to social media feeds, increasingly determine whose voices are heard and whose remain invisible.

This matters because algorithms do not simply mirror our networks but reorganise them. Who we collaborate with at work, what kind of information we see online, and which job applications reach the next stage are now mediated by machine learning systems. These decisions can shape careers, friendships, and even democratic participation.
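One way to make this concrete is a toy simulation in the spirit of computational social science. The sketch below is entirely illustrative: the network size, interaction counts, and the 3x engagement boost are assumptions, not empirical values. It shows how a feed that ranks content by past engagement shrinks the share of attention going to weak ties, the very ties Granovetter identified as carriers of new opportunities.

```python
import random

# Toy network in the spirit of Granovetter (1973): each person has a count of
# interactions with strong ties (close contacts) and weak ties (acquaintances).
# All numbers here are illustrative assumptions, not empirical values.
random.seed(0)
ties = [{"strong": random.randint(20, 40), "weak": random.randint(1, 5)}
        for _ in range(50)]

def weak_tie_share(strong_weight, weak_weight):
    """Fraction of feed attention that weak ties receive when the feed
    weights ties by past interaction, scaled by the given weights."""
    strong = sum(t["strong"] * strong_weight for t in ties)
    weak = sum(t["weak"] * weak_weight for t in ties)
    return weak / (strong + weak)

neutral = weak_tie_share(1, 1)     # e.g. a chronological feed
engagement = weak_tie_share(3, 1)  # assumed 3x boost for already-familiar ties

print(f"weak-tie share, neutral feed:           {neutral:.2f}")
print(f"weak-tie share, engagement-ranked feed: {engagement:.2f}")
```

The mechanism, not the numbers, is the point: a ranking rule that rewards past interaction systematically narrows exposure to the acquaintances through which novel information tends to travel.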

The rise of human–machine networks

In the past, technologies like telephones or the internet altered how people connected, but the technologies themselves did not act within those relationships. AI changes this by generating text, making decisions, and even initiating interactions on behalf of its users.

We are already seeing this shift in practice. Generative AI systems are used as collaborators in the creative industries, producing first drafts of articles, music, or design concepts that humans refine. In workplaces, autonomous AI agents schedule meetings, respond to emails, or triage information, operating less like passive tools and more like network actors. In education, students experiment with AI tutors that provide instant feedback, creating learning dynamics that involve both peers and machines.

This represents the emergence of human–machine networks. However, these networks differ from human ones in important ways. For instance, AI does not age, does not carry emotion, and is not embedded in generational memory. It “remembers” in different ways, storing information that can be retrieved, recombined, or scaled at speed. As a result, human–machine networks are likely to generate new patterns of collaboration and hierarchy, as well as new forms of dependence and vulnerability.

These networks force us to revisit sociological assumptions. If social ties have traditionally been seen as human relationships, what happens when some ties are with machines? What does trust mean when extended to an algorithm? Who is accountable when a network outcome is co-produced by human and machine agents? And will those with privileged access to advanced AI systems occupy more powerful positions in these hybrid networks?

These are not abstract questions, but go to the heart of how we will live and work together in a future where humans and AI co-exist and co-produce.

Inequality, the Matthew Effect, and the future of work and society

The emergence of human–machine networks does not just alter social ties in the abstract; it also changes the way work itself is organised. Workflows that were once structured around human collaboration are increasingly co-produced with AI. From automated scheduling to generative design tools, machines no longer sit outside the workflow but inside it, acting as collaborators, supervisors, and sometimes decision-makers.

This shift transforms the distribution of advantage within networks. Network sociology’s “Matthew Effect” reminds us that small advantages accumulate into long-term disparities (Merton, 1968). In human–machine workflows, those who can adapt quickly - learning how to prompt, supervise, and integrate AI systems - are likely to benefit disproportionately. Meanwhile, workers without access to advanced AI tools, or without the skills to co-produce effectively with them, risk marginalisation.
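The cumulative logic of the Matthew Effect can be sketched in a few lines of code. In the toy model below, two workers start with equal opportunities; the only difference is an assumed 5% head start in effectiveness with AI tools, and new opportunities are allocated in proportion to current standing. The numbers are illustrative, not estimates.

```python
# Toy cumulative-advantage model (Merton's "Matthew Effect").
# Two workers begin with equal opportunities; "ai_fluent" converts each
# opportunity slightly more effectively (an assumed 5% head start).
effectiveness = {"ai_fluent": 1.05, "no_access": 1.00}
opportunities = {worker: 1.0 for worker in effectiveness}

for _ in range(20):
    total = sum(opportunities.values())
    for worker, current in list(opportunities.items()):
        # New opportunities arrive in proportion to current standing,
        # amplified by how effectively the worker uses AI tools.
        opportunities[worker] = current + (current / total) * effectiveness[worker]

gap = opportunities["ai_fluent"] / opportunities["no_access"]
print(f"advantage ratio after 20 rounds: {gap:.2f}")
```

Even a small initial difference, compounded through proportional allocation, produces a steadily widening gap: the model illustrates the mechanism by which early AI fluency could translate into durable positional advantage.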

This reorganisation also creates new forms of work. AI introduces roles such as “AI managers” or “prompt engineers,” but also new categories of invisible labour such as the curation, correction, and oversight needed to keep systems functional. Entire professions may evolve around supervising or coordinating machine agents, while others may fragment as tasks are redefined by algorithmic processes.

In this sense, new forms of inequality are configured through human–machine networks. Advantage will flow to those embedded in AI-rich networks, while disadvantage will cluster around those excluded from them. Workplaces are no longer just human communities, but hybrid ecosystems where authority, trust, and solidarity are redistributed between people and machines.

The future of work, then, is not only about how many jobs are lost or gained. It is about how new workflows and forms of work emerge from human–machine interaction, and how these changes reshape the social structures of opportunity and inequality.

Sociology tells us that technology is never neutral. AI, like past innovations, will reflect and amplify the structures into which it is introduced. The challenge now is to ensure that those structures are designed to promote fairness, trust, and collective wellbeing.

References

Granovetter, Mark. 1973. “The Strength of Weak Ties.” American Journal of Sociology 78(6):1360–1380.

Merton, Robert K. 1968. “The Matthew Effect in Science.” Science 159(3810):56–63.

Lei, Ya-Wen. 2021. The Gilded Cage: Technology, Development, and State Capitalism in China. Princeton University Press.

Yang, Longqi, David Holtz, Sonia Jaffe, Siddharth Suri, Shilpi Sinha, Jeffrey Weston, Connor Joyce, Neha Shah, Kevin Sherman, Brent Hecht, and Jaime Teevan. 2022. “The Effects of Remote Work on Collaboration among Information Workers.” Nature Human Behaviour 6:43–54.

OECD. 2019. Artificial Intelligence in Society. OECD Publishing.

OECD. 2025. Introducing the OECD AI Capability Indicators (beta). OECD Publishing.

Gstrein, O.J. 2024. “General-Purpose AI Regulation and the European Union AI Act.” Internet Policy Review.

Hollanek, T., Y. Pi, D. Peters, S. Yakar, and E. Drage. 2025. “The EU AI Act in Development Practice: A Pro-justice Approach.” arXiv preprint.

Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Srnicek, Nick. 2017. Platform Capitalism. Polity Press.


Author

Canhui Liu
