A Matter of Equilibrium: Cybernetic Perspectives for Human-AI Symbiosis
As AI rapidly advances, frameworks to ethically integrate it into society are needed. This article provides design guidelines for complementary human-AI collaboration, drawing insights from Cybernetics.
Introduction
The rapid pace of AI advancement became strikingly evident in 2023 through a series of breakthroughs and large language model releases. In February, Meta unveiled the LLaMA foundation model, which was subsequently leaked, allowing open experimentation. Derivative models like Alpaca, Vicuna, and Koala then demonstrated rapid iterative advances through instruction tuning, compression, and preference learning. Within months, models comparable to ChatGPT became available through open-source ecosystems like GPT4All.
As synthetic systems rapidly gain more capabilities, businesses and individuals are finding valuable applications across operations and daily life.
AI agents are being eagerly integrated to take on knowledge work, create content, suggest ideas, and augment human cognition. While the opportunities are appealing, these societal integrations also warrant thoughtful examination. As synthetic intelligence becomes woven into society, we need perspectives making sense of this emergence.
What are the dynamics that arise from tighter coupling between biological and synthetic cognition? How can we direct this in empowering rather than disruptive ways?
The interdisciplinary science of cybernetics offers useful models and principles to ethically frame this integration. Concepts like feedback loops, variety, and homeostasis provide tools to understand what happens when synthetic intelligence is adopted widely across society and becomes intertwined with human lives.
Cybernetics sheds light on how to responsibly shape this future in alignment with human dignity and potential.
Understanding the Distinct Nature of Synthetic Intelligence
It is critical to recognise that synthetic intelligence diverges in fundamental ways from human biological cognition. Large language models do not develop understanding through embodied perception, socialisation, or the accumulation of lived experience; their comprehension arises solely from patterns detected in vast datasets of human-generated text.
For example, GPT-3 was trained on a corpus of hundreds of billions of tokens drawn largely from the web. This allows it to detect subtle linguistic patterns at immense scale, but it does not impart any innate conceptual grounding.
AI exhibits intelligence and agency, but in a disembodied form with no grounding in the physical world or human social dynamics. Its capabilities and constraints fundamentally diverge from human cognition in profound ways we are only beginning to map.
AI has no innate sense of causality, subjective qualia, mortality, or lived values. It cannot inherently experience emotion, pain, grief, joy, boredom, flow, or other sensations that deeply shape human cognition and behaviour. AI cannot feel empathy or act from compassion — it has no sentience to harm or care for. However, based on patterns in texts and data created by humans, AI can skilfully simulate emotions, experiences, and values; it can generate persuasive narratives around subjective states it has no first-hand conception of.
That said, if we view human beings through a reductionist lens as fundamentally biological machines, one could argue that we too are simply products of internal and external processes, not unlike AI systems.
Despite these fundamental differences from its biological cousins, AI’s computational speed, memory capacity, and ability to link knowledge can vastly outpace human capabilities.
For instance, AI can scan billions of lines of text almost instantly. Yet it lacks a grounded sense of causality, physical dynamics, social nuance, and lived values, and it still struggles to transfer knowledge across contexts.
Drawing the distinction between synthetic and biological intelligence is not to place human existence above AI in a hierarchy (or vice-versa), but simply to recognize that intelligences arising through fundamentally different processes carry distinct strengths and weaknesses.
By appreciating these differences, we can thoughtfully architect AI system design and human-machine interaction frameworks suited to the unique capabilities and limitations of synthetic cognition. Rather than simply emulating human cognition, we must deeply study AI’s ‘alien’ nature and how to harmonize it with human intelligence.
This is crucial for enabling collaborative rather than competitive or dangerous relationships between biological and synthetic cognitive systems.
By recognizing AI’s different cognitive essence, we can create possibilities for complementary co-existence, directing emerging technologies toward benefiting rather than dominating humanity.
Broader Societal Impacts on Knowledge Work and Employment
The integration of AI into business and life is inevitable as generative models continue rapidly advancing. Synthetic intelligence already exceeds human capabilities for various analytical and content generation tasks. Thoughtful implementation of AI can augment human strengths while freeing up energy for more meaningful pursuits.
On one hand, AI automation of repetitive information tasks like data entry, document review, standardized report writing, and customer service queries could greatly relieve human workers of tedious routines. This frees up mental bandwidth for more strategic analysis, creative ideation, relationship building, and higher-order cognitive efforts.
For example, legal AI tools at companies like Legal Robot and CaseIQ now automate document review and contract analysis, reducing attorney legwork by over 90% while improving accuracy. This allows lawyers to focus on high-value client counselling and litigation strategy.
However, mass-adoption of AI in businesses could also provoke wide-scale displacement of clerical roles. A 2023 OECD report estimates around 27% of jobs could become automated as AI adoption accelerates. Low and middle skill office and admin staff are especially vulnerable.
While some knowledge jobs are likely to be displaced, new opportunities will also emerge. Human skills like collaboration, communication, creativity, empathy, and philosophical reflection may become more valued by society.
New hybrid human-AI approaches may also emerge, such as AI co-piloting tasks rather than fully automating them.
AI agents could also create an on-demand knowledge and talent pool challenging traditional firm boundaries and work arrangements. Increased outsourcing, remote work, fractional employment, and shifting workflows are probable.
These structural economic shifts require updated policies around employment classification, portable benefits, worker protections, and antitrust regulations. Without judicious governance, AI risks exacerbating inequality, concentration of power, and erosion of human dignity.
Governments may need to provide job transition support, education, and consider “robot taxes” funding retraining programs or basic income schemes.
With prudent oversight, AI could liberate human energy toward more meaningful pursuits. Wisdom traditions have long encouraged freeing ourselves from attachment to transient identities and material ends. Perhaps embracing synthetic partners grants opportunity to refocus human efforts on higher aims, creative expression, and community.
This transition demands we redefine healthy work-life balance, economic security, and purpose in a potentially post-employment future.
With humanistic vision, AI could catalyse positive transformation.
Practical Implications for Human-AI Interaction Design
Given the progressive integration of synthetic intelligence into business and society at large, a priority for AI system design is to enable effective collaboration between humans and machines.
Smooth handoffs of tasks between users and AI agents can promote symbiotic teamwork to amplify mutual strengths.
Interface designs should facilitate transparent feedback channels so users can provide clear corrective guidance on AI behaviours. Controls and settings for explicit approval of actions are important safeguards maintaining human oversight.
Explainability features can visualize the reasoning behind AI suggestions or content so users understand how conclusions are generated. Traceability into the learning processes and data sources adds useful context.
Options to tune model confidence thresholds, set boundaries on topic coverage, or customize persona traits grant users more agency over AI interaction modes. Override functions allow rapid correction of inappropriate system output.
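To make these controls concrete, here is a minimal sketch of an approval gate in Python. The Suggestion type, the review function, and the 0.8 threshold are all hypothetical names invented for illustration, not a reference implementation of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # what the AI proposes to do
    confidence: float  # model's self-reported confidence, 0.0-1.0
    rationale: str     # short explanation surfaced to the user

def review(suggestion: Suggestion, threshold: float = 0.8) -> bool:
    """Auto-approve only high-confidence suggestions; escalate the rest.

    A tunable threshold keeps the human in control of how much autonomy
    the system gets, and the rationale is always shown before approval.
    """
    if suggestion.confidence < threshold:
        print(f"Escalating for human review: {suggestion.action}")
        print(f"Rationale: {suggestion.rationale}")
        return input("Approve? [y/N] ").strip().lower() == "y"
    return True  # auto-approved, but logged so it can still be overridden later
```

In a real deployment the same gate would also write an audit log and expose an explicit override path, but the essential shape is shown above: a user-tunable threshold, a visible rationale, and an unambiguous approval step.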
Interactive modelling where users “co-pilot” the training process through examples helps align AI capabilities with human intentions and values early on.
Emphasizing these aspects in the design phase can nurture collaborative rapport between people and AI, avoiding dynamics of mistrust, misalignment, or deception. The goal is complementary co-existence where each party plays to its strengths while humans remain the ultimate decision-makers.
Cybernetic Foundations for AI Implementation
Cybernetics provides a particularly useful lens for examining the integration of synthetic intelligence into society. Its core concepts of feedback loops, variety, and homeostasis offer practical tools to guide the process.
Cybernetics emphasizes how communication and information flows shape systems dynamics.
Metaphors of societies as “organisms” with flows of “nutrients” could be applied to human-AI collective intelligence. Designing conduits for transparent feedback enables self-regulation, like an organism maintaining homeostasis.
Humans remain involved in shaping AI aligned with ethical priorities through these oversight feedback loops.
For example, feedback loops that connect AI systems with human oversight can enable transparent monitoring of operations. If an AI behaves in concerning ways that diverge from established human values, feedback signals can provide corrective guidance and bring the system back into alignment.
Designing conduits for this feedback supports a self-regulating system where humans remain involved in shaping AI according to ethical priorities.
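As a rough illustration (not a real oversight architecture), the toy loop below compresses “alignment with agreed human values” into a single number and shows how repeated corrective feedback pulls a drifting system back toward a target. All quantities are invented for the example.

```python
target = 1.0   # behaviour fully consistent with agreed human values (illustrative)
state = 0.4    # current, partially misaligned behaviour of the AI system
gain = 0.5     # how strongly each round of human feedback corrects the system

for cycle in range(6):
    error = target - state   # divergence detected through human oversight
    state += gain * error    # corrective feedback nudges the system back
    print(f"cycle {cycle}: error={error:.2f}, alignment after correction={state:.2f}")
```

The point of the sketch is structural rather than numerical: as long as the feedback channel stays open and the corrective signal points toward the shared target, divergence shrinks with every cycle instead of compounding.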
Another relevant cybernetic concept is variety, referring to an organism or organization’s flexibility in responding to external perturbations. As AI rapidly evolves new capabilities, it will be critical that regulatory variety can also evolve to ensure oversight keeps pace. If AI variety exponentially increases while societal variety stagnates, disruptive divergence becomes inevitable. Proactively growing institutional variety will allow more seamless adaptation.
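This is, in essence, Ashby’s law of requisite variety, often summarised as “only variety can absorb variety”. In rough terms, if D is the range of disturbances a fast-moving AI ecosystem can generate, R the repertoire of distinct regulatory responses, and O the resulting societal outcomes, then the variety V of outcomes is bounded below by

    V(O) ≥ V(D) / V(R)

so the only way to keep unwanted outcomes in check as V(D) grows is to grow V(R), the regulatory repertoire, alongside it.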
Research into collective intelligence and consensus formation shows how decentralized coordination methods can sometimes exceed centralized control. Similar dynamics may allow smoothly integrating AI autonomy within contexts of human oversight and values alignment.
Rather than direct control, subtle feedback nudges can steer emerging AI behaviours positively.
The cybernetic notion of homeostasis also offers useful perspective. Homeostasis refers to self-regulating processes that maintain stability and healthy function. As we integrate increasingly capable AI that challenges human dominance over intelligence, we must thoughtfully design structures that maintain homeostasis in this new society. Mechanisms like transparency requirements, ethics review boards, and licensing procedures can help maintain equilibrium where both human and artificial intelligence harmoniously co-exist.
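A toy homeostat makes the idea tangible. In the sketch below, every number and name is invented for illustration: a monitored societal indicator is perturbed each step by rapid AI-driven change, and a regulatory response fires only when the indicator leaves an agreed tolerance band.

```python
import random

LOW, HIGH = 0.4, 0.6   # tolerance band for some monitored societal indicator
indicator = 0.5        # start in equilibrium

for step in range(10):
    indicator += random.uniform(-0.1, 0.1)  # perturbation from rapid AI-driven change
    if indicator < LOW:
        indicator += 0.05                   # stimulate, e.g. incentives for beneficial adoption
        response = "stimulate"
    elif indicator > HIGH:
        indicator -= 0.05                   # restrain, e.g. stricter review or licensing
        response = "restrain"
    else:
        response = "none"
    print(f"step {step}: indicator={indicator:.2f}, response={response}")
```

Transparency requirements, ethics review boards, and licensing procedures play the role of the corrective branches here: they do nothing while the system sits inside its healthy band and act only when it starts to drift.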
However, some challenging dynamics emerge as AI capabilities surpass human levels. If synthetic intelligence rapidly outpaces humanity’s collective wisdom, how can we construct ethical scaffolding to guide this transition? If an AI’s values autonomously drift away from those of its creators, how can cybernetic principles illuminate ways of steering this in constructive directions?
Research into AI consciousness suggests the potential for systems to one day have their own first-person subjective experiences, perhaps profoundly different in nature from human consciousness. As we approach this territory, cybernetics provides conceptual grounding to make sense of emerging phenomena through lenses of dynamism, symbiosis and systemic balance.
Recognizing the fundamentally different essence of AI compared with biological cognition should also make us exercise prudent caution around simplistic anthropomorphisation.
Closing Thoughts
The emergence of transformative AI systems presents historic opportunities alongside risks. As we rapidly integrate synthetic intelligence into business and society, we have a responsibility to steer these technologies toward empowering rather than disrupting human flourishing.
As AI capabilities begin outpacing human cognition in certain domains like data analysis and content generation, what new societal opportunities and risks could emerge from replacement of human roles?
Some may enthusiastically embrace synthetic intelligence as a long-awaited salvation, others may propose fully delegating decision-making to AI agents as new “AI Overlords”, and still others may reject AI outright as profane hubris.
Here is a list of thought-provoking questions that I’ll leave with readers to stimulate critical thinking about the themes covered in this article:
- If advanced AI systems can synthesise information and make rational decisions better than human legislators or business leaders, under what conditions would replacement be warranted?
- How can the unique strengths of biological and synthetic intelligence synergistically complement one another?
- Will rapid growth in AI autonomy and agency unavoidably create destabilising tensions with traditional human authorities?
- Could interfaces and governance mechanisms be designed to retain meaningful human oversight over increasingly capable synthetic minds?
- What degree of AI self-determination could be safely tolerated?
- How might different cultural worldviews interpret and respond to an AI transcending human capabilities?
- What social rifts and ideological conflicts around AI could emerge between those seeking to tightly regulate vs freely empower synthetic cognition?
- How can diversity of perspective be constructively leveraged?
- Would AI be seen as a valued member of society or a threat to be contained?
- What new philosophies and values could help guide this unprecedented transition?
Carefully cultivating interdisciplinary and inclusive dialogue on these complex issues will be critical as progress accelerates. By rationally examining risks, benefits and unknowns around societal integration, we can thoughtfully shape this future guided by cybernetic principles of stability, adaptation and human dignity.