How Open Source Communities Sometimes Emerge a Higher Level of Consciousness than Their Individual Participants
There’s a phenomenon in open source software development that I think deserves a bit of careful thinking — not from a software engineering perspective, but from a perspective that’s closer to organizational psychology, or maybe even philosophy of mind. What I’m thinking about is this (as I started musing toward the end of the prequel post): open source communities reliably produce collective behavior that is more generous, more adaptive, more flexible, and less defensively ego-driven than virtually any other form of human organization — including corporations, governments, NGOs, and most of the institutions we normally rely on to get complex things done. And they do this despite being made up of perfectly ordinary (well ok… that may overstate things but you know what I mean…), ego-having, status-seeking human beings.
I’ve been thinking about this a fair bit lately, partly because I’ve spent three decades building AGI systems and am now in the thick of two major open source efforts — the Hyperon cognitive architecture and the ASI Chain decentralized AI platform — and partly because I think the answer to why open source works this way has some interesting implications for the much bigger question of how humanity should organize itself to navigate the transition to AGI/ASI….
What “Higher Consciousness” Means, Without the Mysticism
I’m now going to introduce into the conversation some talk about “higher consciousness” that might initially seem out of place in a discussion about software projects, but if you’re a regular reader here you will likely bear with me — trust me, the issues I’m going to address here are quite practical and concrete 8D..
My long-time friend Jeffery Martin – a psychologist, entrepreneur and wonderfully adventurous consciousness explorer – has spent over a decade running large-scale empirical studies of people who report sustained shifts in their baseline psychological well-being. Not peak experiences or temporary highs, but lasting changes in how they relate to themselves, their emotions, and the world. His research mapped out what he calls a continuum of “fundamental well-being” — a series of identifiable psychological states, which he labels locations 0, 1, 2, 3, and onward.
Location 0 is where most people live most of the time, at least in modern societies. It’s ordinary modern human consciousness: your well-being is conditional on things going your way. You’re happy when you get what you want, anxious when something threatens what you have, upset when you lose something you value. Your emotional state tracks your circumstances. There’s nothing pathological about this — it’s the standard-issue human operating system.
What Jeffery found is that there are stable states beyond this, characterized by a kind of unconditional okayness — a baseline of well-being that persists even when circumstances are difficult. People at Location 1 and beyond still feel emotions, still respond to events, but they’re not captured by the emotional ups and downs. They hold things more lightly. They’re less attached to being right, less threatened by loss, more able to respond flexibly to changing circumstances. Their sense that things are fundamentally alright doesn’t depend on any particular thing going well.
Now, Jeffery’s framework describes individual psychology — at least proximally, that’s where it’s focused and where his research has been done. But I think you can apply the same lens to collective systems — to organizations, communities, institutions — and when you do, something very interesting becomes visible.
Location 0 Organizations and Location 1 Organizations
A Location 0 organization is one whose well-being is conditional on competitive success. Its mood tracks its market share, its funding, its headcount, its status relative to rivals. It’s happy when it’s winning, anxious when threatened, aggressive when cornered. It hoards resources and information because every advantage shared is an advantage lost. It has a strong survival instinct and resists anything that threatens its continued existence as an entity — even when the mission would be better served by restructuring, merging, or dissolving. Most corporations are Location 0 organizations. Most government agencies are. Most NGOs are too (my mom spent her career as a nonprofit executive – I would say the people in that domain tend to be much more kind-hearted and prosocial than average, but the overall organizational behavior is commonly very bureaucratic, fearful and stultified … things my mom spent a long time struggling against in various organizations…).
A Location 1 organization, by analogy, would be one that operates from a kind of unconditional okayness. It pursues its mission not because it’s afraid of failure but because the work itself is worthwhile. It responds to threats and setbacks adaptively rather than defensively. It shares resources and knowledge freely because it operates on a non-zero-sum model of value. It doesn’t have a strong attachment to its own continued existence as an entity — if the mission would be better served by the organization forking, dissolving, or being absorbed into something else, it can do that without existential crisis. Its identity is fluid, its responses are flexible, and its participants experience something that looks a lot like joy in the work itself rather than anxiety about outcomes.
I’d argue that the open source software community — the Linux ecosystem, the Apache ecosystem, the broader culture of collaborative development that has produced so much of the infrastructure the modern world runs on — is the closest thing we have to a large-scale, sustained, real-world example of a Location 1+ organization. And the global community of science, at its best, is another. Not perfect examples…. but strikingly closer to it than almost anything else major in the institutional landscape.
The People or the Architecture?
It would be natural to assume that if a community behaves in an unusually generous, flexible, non-ego-driven way, it must be because the community attracts unusually generous, flexible, non-ego-driven people. We might call this the selectional explanation: the community is at Location 1 because it selected for Location 1 individuals.
It seems to me that, at least in the case of existing open source networks, this is at best only part of the story – and the explanatory gap is interesting.
If you look at the actual demographics and psychology of open source contributors, inasmuch as it’s been studied scientifically, the picture is not one of unusual enlightenment. The research on OSS contributor motivation — going back to Lakhani and Wolf’s classic study and continuing through more recent GitHub surveys — consistently finds a mix of intrinsic and extrinsic motivators, with extrinsic ones (career advancement, signaling competence to employers, building a personal brand) playing a substantial role. Many of the most prolific contributors are literally paid by corporations to contribute as part of corporate strategy. OSS communities skew heavily male, have well-documented problems with hostility and gatekeeping, and the “meritocracy” ideal often masks status hierarchies organized around commit counts and technical prestige rather than job titles and salary — but status hierarchies all the same. Anyone who’s followed Linus Torvalds’s legendary mailing list tirades knows that individual ego is not in short supply in these communities.
For sure there are a LOT of wonderful, good-hearted people in OSS communities, there’s a lot of sincere desire to do what’s good for the world. But it’s mixed up with a lot of other aspects. This is true not only of the leaders of the OSS world – Richard Stallman and Linus Torvalds, among others, have historically been somewhat tricky characters (in very different ways) – but across the board. To my mind, the radical difference between how OSS organizations and other organizations act in the world is not so fully explicable in terms of radical differences between the people involved in OSS organizations and everyone else. Those differences exist and are important, but are not the whole explanation.
In Martin’s lingo, I would say the typical OSS contributor is basically in Location 0, with standard-issue egos, career anxieties, and status concerns.
And yet, I would argue, the collective system these folks participate in (the OSS community) produces behavior that looks nothing like what a basically psychologically similar group of individuals would produce if organized into a typical corporation or government agency. The system is generous where corporations are hoarding. The system is flexible where bureaucracies are rigid. The system adapts to threats where institutions get defensive. The system lets things die gracefully where organizations fight to the last to preserve themselves.
I think the primary explanation for the broadly beneficial characteristics of the OSS community is emergent, not selectional. It’s not JUST the good people – it’s also, to a significant degree, the architecture.
Five Structural Features That Produce Emergent Non-Ego
If the “higher-consciousness” behavior of OSS communities is emergent from their organizational structure, then we should be able to identify the specific structural features responsible. I think there are quite a few, and they’re worth spelling out because each one is a design choice that could be replicated — or neglected — in other contexts.
Fork-ability enforces non-attachment. In a corporation, if you disagree with the direction, your options are internal political combat or exit — and exit means losing everything you built there. In an open source project, you can fork. You take the codebase, go your own way, and both versions continue to exist. This single structural feature transforms the dynamics of disagreement. No one can hold the project hostage. No leader can become an unaccountable tyrant, because the community can simply route around them. No disagreement has to escalate into an existential battle. The availability of forking makes the system behave non-attachedly even if the individuals involved feel intensely attached. It’s non-attachment by architecture, not by personal virtue.
Permissive licensing converts any motive into generosity. When you contribute code under an open license, you’ve made an irrevocable gift to the commons — regardless of why you did it. A corporation contributing to open source for strategic competitive advantage still produces a public good. A developer padding their resume with OSS contributions still enriches the ecosystem. The license doesn’t interrogate your psychology. It structurally converts any contribution, from any motive, into a commons resource. This is a mechanism that reliably produces non-zero-sum outcomes from zero-sum-minded actors.
Radical transparency makes ego-competition structurally expensive. In a corporation, you can hoard information, build private empires, play political games behind closed doors. In an open source project, all the work happens in public: commits, code reviews, architecture discussions, governance decisions. There’s nowhere to hide, and no private territory to defend. The kinds of information asymmetry and political maneuvering that fuel corporate ego-competition become either impossible or extremely costly. This doesn’t eliminate ego — but it removes most of the infrastructure that ego normally uses to consolidate power.
Voluntary participation forces the institution to earn its existence. Because contributors can leave at any time at zero cost, an open source community can’t rely on coercion, contractual obligation, or sunk-cost psychology to retain people. It has to continuously produce conditions that people find worth showing up for. This is an evolutionary pressure on the institution to generate something that participants experience as intrinsically rewarding — interesting problems, a sense of shared purpose, the pleasure of building something together. If it stops generating those conditions, it dies. This doesn’t select for enlightened individuals, but it means the system is under constant pressure to behave in ways that generate engagement that looks a lot like joy.
Absence of institutional self-preservation instinct. Corporations and governments create legal structures, accumulate capital reserves, hire lobbyists, and fight regulatory threats — all in service of the organization’s continued existence. Open source projects mostly don’t have this. A project can go dormant, be superseded, or be absorbed into another project without triggering an institutional immune response. The community adjusts. This is strikingly analogous to the reduced self-preservation anxiety that characterizes Location 1 individuals in Martin’s framework — but it’s structural. The project doesn’t fear death because there’s no legal entity structured to fear death.
The Architecture Does the Psychological Heavy Lifting
The upshot is this: take a population of ordinary ego-driven humans, embed them in an organizational structure with these five properties, and the collective behavior of the system will exhibit the hallmarks of what I’ve been calling Location 1 consciousness — flexible, non-defensive, non-attached, intrinsically motivated, operating from a kind of baseline generosity and okayness. The individuals haven’t undergone any psychological transformation. The architecture is doing the work.
This is, I think, genuinely profound. It means that institutional design is a technology for producing higher-consciousness collective behavior without requiring higher consciousness from any individual participant. And it suggests that the reason most human institutions operate at Location 0 — defensive, ego-driven, zero-sum, rigidly self-preserving — is not because humans are inherently incapable of better, but because most institutional architectures are designed to produce exactly those behaviors. Corporations, with their information silos, their proprietary hoarding, their coercive employment relationships, their boards whose fiduciary duty is institutional self-preservation, are architecturally optimized for Location 0 collective behavior. It would be surprising if they produced anything else.
Implications for Building AGI
This matters for the AGI question because we’re at a moment when the institutional architecture around AGI development is being determined — and the default is corporate. The default is massive proprietary models trained in secretive server farms, controlled by companies with strong Location 0 institutional psychologies: competitive, hoarding, defensive, existentially anxious about their market position.
If the emergent-consciousness thesis is right, then the most important thing we can do for the future of AGI is not just develop better algorithms (though that matters enormously), but ensure that AGI development happens within institutional architectures that have the structural features I described above. Open source codebases. Permissive or copyleft licensing. Transparent development processes. Voluntary, non-coercive participation. Decentralized infrastructure with no single point of institutional self-interest.
This is exactly what we’re trying to build with Hyperon and ASI Chain. Hyperon is an open source cognitive architecture — a blueprint for an artificial mind that integrates multiple AI approaches (logical reasoning, attention allocation, evolutionary learning, neural-symbolic integration) in a flexible knowledge framework. ASI Chain is a blockchain-based platform that lets AI computations be coordinated and verified by a decentralized network rather than controlled by any single entity. Both are designed from the ground up with the structural features that produce emergent non-ego behavior: open licensing, fork-ability, transparency, voluntary participation, no institutional survival instinct baked into the architecture.
The argument isn’t just that decentralized, open source AGI is ethically preferable to corporate AGI — though I believe it is. The argument is that the institutional architecture itself will shape the collective cognitive process that produces the AGI. If that architecture has the structural properties of a Location 1 organization, the development process will be more flexible, more adaptive, more creative, and more likely to produce systems that embody genuine understanding rather than corporate optimization targets. The medium is part of the message.
Can We Do Better Than Accidental Emergence?
Everything I’ve described so far about open source communities is, in a sense, accidental. Nobody designed the GPL to produce emergent non-ego collective behavior. Nobody created Git’s branching model as a technology for non-attachment. These structural features evolved because they solved practical engineering problems and sociopolitical problems — coordinating distributed development, protecting contributor rights, managing codebase divergence — and the “higher-consciousness-ish collective behavior” was a side effect.
OK, this is a bit of an overstatement – the sociopolitical intent, of avoiding oligopoly and domination and narrow-mindedness, is more than a little resonant with aspects of Location 1++ consciousness. But the point is, as visionary as Stallman and Torvalds etc. were, there is no evidence they were trying to sculpt the collective psychology of the contributor network toward ego-lessness and non-dual cognition.
What this leads to is a practical question: What if we took these structural insights seriously and deliberately designed the communities around AGI development to amplify the effect?
What I’m wondering is if some fairly small tweaks to how open source AI projects operate could be beneficial — changes that preserve the engineering culture these communities already have while nudging the collective behavior further in the direction that the architecture already makes possible.
To flesh this notion out, here are some specific things I think projects like Hyperon, ASI Chain, and other serious open source AI efforts could do.
Redesign visibility metrics around ecosystem health, not individual output. GitHub’s default metrics — commit counts, contribution graphs, “top contributor” labels — are individual achievement scoreboards. They’re the organizational equivalent of a sales leaderboard in a corporate office, and they reinforce exactly the kind of ego-competitive behavior that the rest of the open source architecture works against. Replace them, or at least supplement them heavily, with collective metrics: new contributors successfully onboarded, median response time on newcomer PRs, number of cross-module contributions, ratio of issues resolved to issues opened, documentation quality scores. Make the dashboard you show the community a picture of ecosystem vitality, not a ranking of individual heroes. This isn’t anti-meritocratic — it’s redefining merit to include the work that actually holds a community together.
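To make this concrete, here is a minimal sketch of what an ecosystem-health dashboard might compute. All the field names and record structure here are hypothetical illustrations (not any real GitHub API) — a real implementation would populate them from a forge's API:

```python
from statistics import median

def ecosystem_health(prs):
    """Compute collective-health metrics from pull-request records.

    Each record is a dict with hypothetical fields:
      author, is_first_time_contributor (bool),
      hours_to_first_response (float), modules_touched (list of str).
    """
    newcomer_prs = [p for p in prs if p["is_first_time_contributor"]]
    return {
        # How many new people actually landed a contribution.
        "new_contributors": len({p["author"] for p in newcomer_prs}),
        # How quickly newcomers hear back -- a proxy for welcome-ness.
        "median_newcomer_response_hours": (
            median(p["hours_to_first_response"] for p in newcomer_prs)
            if newcomer_prs else None
        ),
        # Cross-module work suggests shared rather than siloed ownership.
        "cross_module_prs": sum(
            1 for p in prs if len(p["modules_touched"]) > 1
        ),
    }
```

The design point is simply that every number aggregates over the community as a whole; nothing in the output ranks or names an individual hero.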
Establish explicit “collaborative credit” norms. In academia, multi-author papers at least acknowledge that ideas are collaborative. In open source, the person who opens the PR gets the commit credit, full stop. For a project like Hyperon, where so much of the valuable work is architectural thinking, design discussion, and helping someone else get unstuck — none of which shows up in a commit log — this dramatically undercounts collaborative contribution. Establish a strong norm that significant PRs include a collaborators field listing people who contributed ideas, reviewed early approaches, or helped think through the design. Template it. Make it easy. Over time, this shifts the community’s implicit model of how value is created — from “individual writes code” to “group develops understanding, individual crystallizes it into code.” That’s a more accurate model of how good software actually gets built, and it happens to be one that reinforces collective rather than ego-driven participation.
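Git already has one small mechanism pointing in this direction: the `Co-authored-by:` commit trailer, which GitHub recognizes and displays as shared authorship. A collaborators norm could build on it directly. Here is a minimal sketch of aggregating co-author credit from commit messages — the trailer format is the real Git convention, while the aggregation itself is just illustrative:

```python
import re
from collections import Counter

# Matches the standard Git trailer line, e.g.
#   Co-authored-by: Ada Lovelace <ada@example.com>
TRAILER = re.compile(r"^Co-authored-by:\s*(.+?)\s*<[^>]+>\s*$", re.MULTILINE)

def collaborator_counts(commit_messages):
    """Count how often each person appears as a co-author across commits."""
    counts = Counter()
    for msg in commit_messages:
        for name in TRAILER.findall(msg):
            counts[name] += 1
    return counts
```

A project could surface these counts alongside commit counts, so that contributing an idea to someone else's PR becomes legible work rather than invisible work.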
Run “architecture retrospectives” that foreground process quality, not just technical outcomes. Most sprint retrospectives ask “what went well, what didn’t, what should we change” — all outcome-focused. Add a process-quality dimension: “Did we handle disagreements well this cycle? Were there moments where we got defensive or positional rather than genuinely exploring alternatives? Did anyone feel unheard? Did we have fun?” These questions feel unusual in an engineering context, but they’re the organizational equivalent of the reflective self-awareness that characterizes higher-functioning individuals and teams in every domain. Frame it as engineering practice: “Teams that reflect on their collaborative process make better technical decisions. Here’s how we do that.”
Weight mentorship and knowledge transfer in whatever recognition systems you use. If the project has any kind of formal or informal recognition — contributor-of-the-month, conference talk invitations, governance roles — explicitly value activities like: writing documentation that makes your work accessible, answering questions in community channels, pairing with less experienced contributors, writing “lessons learned” posts about approaches that didn’t pan out. These are all other-directed activities. They’re the kind of work that holds an ecosystem together but is invisible in commit-count metrics. Making them visible and valued shifts what the community treats as high-status behavior — from “wrote the most code” to “made the most people effective.”
Hold regular open sessions where senior contributors think out loud. Have the lead architects and experienced developers hold periodic open office hours — not code review, not evaluation, just a space where anyone can bring a problem and watch how an experienced person thinks through it. The value here isn’t just knowledge transfer (though that’s real). It’s modeling a particular way of engaging with hard problems: holding strong opinions lightly, changing your mind when the evidence shifts, being visibly curious rather than visibly certain, enjoying the puzzle. In any craft tradition, this kind of apprenticeship-by-proximity is how the stance of good practice gets transmitted, not just the techniques. Software engineering is no different.
Make the connection between system design and values explicit in technical discussions. This one is specific to AGI projects like Hyperon and only half-applies to infrastructure projects like ASI Chain, but it’s worth mentioning because it’s powerful. When you’re building a cognitive architecture — a system that’s supposed to think — every design decision is also a decision about what good thinking looks like. When you’re deciding how the attention system allocates cognitive resources, you’re implicitly modeling what an attentive, well-functioning mind does. When you’re designing how the logic system handles uncertainty, you’re modeling what good reasoning under uncertainty looks like. Make this explicit in design discussions. Ask: “Is this how we’d want a thoughtful, flexible mind to handle this?” This turns technical work into a reflective practice — every design choice forces the team to articulate their vision of good cognition, which is itself a form of the reflective self-awareness you’re trying to cultivate at the community level.
Offer optional “inner game” sessions at developer gatherings. At hackathons, dev summits, or retreats, run an optional side session — maybe 90 minutes — focused on the psychological dynamics of collaborative technical work. Not meditation, not therapy, not team-building trust falls. Practical exercises: receiving critical feedback on your code without getting defensive, noticing when you’re arguing to win rather than arguing to learn, distinguishing between “I think this is the right approach” and “I need this to be the right approach because I wrote it.” Every experienced developer already knows the difference between these states — the session just gives people a structured space to practice moving between them deliberately, and a shared vocabulary for talking about it afterward. Frame it as performance training for collaborative engineering, which is exactly what it is. Elite athletes work on their mental game. High-stakes poker players work on tilt management. There’s no reason developers working on some of the most consequential technology in human history shouldn’t do the same — and in my experience, when it’s framed this way rather than as anything spiritual, most engineers find it genuinely useful and not at all weird.
The Recursive Loop
One thread running through all of these recommendations is this: none of them require anyone to change their psychology, adopt a spiritual practice, or become a different kind of person (though if some of this sometimes happens, all for the better!). They’re all structural and procedural changes that make non-ego-driven, flexible, collaborative behavior the path of least resistance within the community. They work with the grain of how engineers already think about good practice — quality processes, good documentation, effective collaboration — while nudging the collective behavior further in the direction that the open source architecture already makes possible.
But what can happen over time is: structure shapes behavior, behavior becomes habit, habit reshapes disposition. People who spend years in a community that practices non-attachment to code, collaborative credit, graceful release of deprecated work, and reflective process evaluation don’t just behave differently within that community. They start to internalize the patterns. The structural features that initially produced Location 1 behavior as an emergent side effect begin to cultivate something closer to Location 1 psychology in the participants — not because anyone demanded it, but because practicing flexibility, generosity, and non-defensive engagement for long enough starts to feel natural.
This is the recursive loop: architecture produces emergent non-ego behavior, sustained non-ego behavior gradually shapes the participants, and those participants further refine the architecture to better support the patterns that have become natural to them. The selectional claim starts out false — the community didn’t attract Location 1 people — but over time, through this recursive process, it becomes increasingly true. Not because you filtered for the right people, but because the right structure, sustained long enough, cultivated the right dispositions.
If we can get this flywheel turning in the communities building AGI — not just in the code, but in the institutional architecture, the governance practices, the day-to-day culture of collaboration — then we have something genuinely unprecedented: a technical community that is deliberately and systematically cultivating the collective psychological ground from which beneficial artificial general intelligence is most likely to emerge. Not by importing spiritual frameworks wholesale into engineering culture, but rather by taking seriously the insight that how we organize ourselves to build something shapes what we build, and designing our institutions accordingly.
There has never been a more important time to take such notions seriously and adopt them as urgent action items…!