What If You Uploaded Your Mind Into a Community Instead of a Copy?

I was working on another blog post – the one I posted recently about open-source software organizations and how they seem in a way to display some of the characteristics of higher states of consciousness — when a quite funny thought occurred to me.

I started thinking: What if I could upload myself into an open source software project?

This led to a series of ideas connecting the psychology of open source communities to the future of mind uploading in a way I haven’t seen drawn out explicitly before. I somewhat convinced myself that uploading one’s mind into an OSS project and community in the right sort of way could actually be a route to massively improving the states of consciousness of both one’s uploaded versions and one’s legacy meat-body-based self.

Let me set the stage briefly – especially for anyone who hasn’t read the earlier posts – and then get to the fun and weirder part….

The Quick Version of Where We Are

Jeffery Martin’s research identified a continuum of psychological states he calls “locations.”

Location Zero is where most of us live most of the time, at least in modern societies. That’s where you’re concerned with yourself and your ego, with getting resources and competing with other people, and you’re driven around willy-nilly by your emotions. I lived there a lot of my life; I understand it well.

Location One is where you’re more in a state of fundamental well-being, where you don’t commonly lose track of how astoundingly good it is to just be alive and breathing. And as you go beyond Location One — Locations Two, Three, Four, etc. — you have more and more of a sense of oceanic unity with the whole world. You lose the illusion of free will, and lose the sense of a strict boundary between yourself and the rest of the world. People can get into these higher locations through meditation, sometimes temporarily through psychedelics, sometimes just because they’re born that way or their brain just spontaneously transitions.

In my last post, I argued that you can apply these same Locations to organizations. Most organizations — companies, nonprofits, governments — are in Location Zero as well. They’re concerned about making more money than other organizations, getting more eyeballs, surviving and flourishing as distinct from other organizations out there. But open-source software networks are a bit different. They’re more open. They can fork freely. They can merge and blend into other pieces of software. They can go in and out of companies or governments. They span all sorts of national boundaries. In some ways an open-source software organization is more like a Location One-plus mind.

And a key point made in that analysis was that this isn’t primarily because the participants in the open source network are that enlightened. There are a lot of noble, wonderful people in open source organizations thinking about the good of the world. But there’s also a lot of horrible infighting, a lot of ego and status, a lot of emotion and unnecessary bad temper. The higher-consciousness collective behavior is emergent — it comes from the organizational architecture (fork-ability, permissive licensing, transparency, voluntary participation, absence of institutional self-preservation instinct), not from the psychology of the individual participants.

So far, so good. But then I started to think: what if you uploaded yourself into an open-source software organization?

Why a Monolithic Upload Would Stay at Location Zero

Think about what a straightforward mind upload gives you. You scan a brain — or build a sufficiently detailed functional model of a person — instantiate it in silicon, and run it. What you get is… the same person. The same ego structure, the same attachment patterns, the same defensive reactivity, the same conditional well-being. You’ve changed the substrate but not the organization.

Of course, being in a different setting — up in the interwebs, being able to port among different bodies and contexts — could help you get beyond some attachments you have and move on to higher locations. But fundamentally, a monolithic upload of a Location Zero person is going to be at Location Zero. The ego hasn’t gone anywhere. It’s just running on different hardware.

This is, I think, one of the underappreciated problems with naive mind uploading. People assume that transcending biological limitations will be psychologically liberating, but there’s no particular reason to expect that. If your ego structure is faithfully reproduced, you’ll be just as anxious, just as status-seeking, just as defensively attached to your identity — you’ll just be doing it faster and with better memory.

But there’s an additional avenue that I think is much more interesting.

The Open-Source Network of You

What if, instead of uploading yourself into a vehicle suitable for emulating your original self as precisely as possible, you uploaded or twinned yourself into a sort of open-source software network — one that might intrinsically, by virtue of its structure, be more enlightened than you are?

Think about a bunch of copies or forks of your own mind, each emphasizing different aspects of who you are. Each of us has different subselves, right? As Walt Whitman says: “Do I contradict myself? Very well then I contradict myself. I am large, I contain multitudes.” Each of us contains multitudes. We’re not really monolithic agents. So what if we took that seriously as a design principle for digital minds?

Fork-ability of self. When you face a big decision or an internal conflict, you don’t always have to fight over what route is right. Just fork yourself. One of you can do A, the other can do B. One can go explore, one can stay home. One can play music all the time and one can do math all the time. And at a more profound level — one of you can explore radical self-modifications (hopefully improvements), and one can stay more stable. Instead of first achieving the cosmic Buddhist insight that the self is fluid, you adopt an implementation that makes the self structurally fluid.

Transparency between sub-agents. In an ordinary human mind, different subselves hoard information from each other. That’s Freudian repression. That’s rationalization. Parts of the mind are defending themselves against other parts of the mind, and this is part of how the traditional Location Zero ego maintains its coherence. But what if you instrument your uploaded multi-mind so that the different parts are transparent with each other? The kind of self-deception that maintains traditional ego structure becomes architecturally less likely, just like information hoarding becomes more expensive in an open-source project. This doesn’t guarantee a profound level of self-knowledge, but it makes self-opacity structurally expensive — a quite interesting property for a mind to have.

Voluntary participation of cognitive processes. In your biological brain, it’s a dictatorship, pretty much. Your cognitive processes are conscripted. You can’t tell your amygdala to stop firing. You can’t easily, in ordinary states of consciousness, disengage your status-monitoring circuitry. But in an open-source organization, participation is more voluntary — and that loosens up the central control and constriction on how a mind works. It means that each part of the mind has to appeal more to the goals and desires of each other part. And this combines with forking: one part of the mind may not want to fully go along with what the rest of the mind wants, but may be willing to make a fork of itself that does.

No single sub-agent is “the real you.” In the OSS-structured mind, no single model would claim to be the real twin, the authoritative self. The personality that would emerge from this society of sub-minds wouldn’t be the same as the original person. It wouldn’t replicate the ego structure. It would be more like what the original person would be if the original person were organized non-egoically and flexibly and self-organizingly — like an open-source software project.
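The design principles above can be condensed into a toy sketch. Everything here is illustrative: the class names, the tuple-of-beliefs representation, and the sub-self names are all invented for the example, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubSelf:
    """One facet of the multi-mind. Frozen, so 'forking' is a cheap copy."""
    name: str
    beliefs: tuple  # transparent: any sub-self may inspect any other's beliefs

class MultiMind:
    """Toy model of an OSS-structured self: forkable, mutually transparent
    sub-agents, with no privileged 'real you' among them."""

    def __init__(self, subselves):
        self.subselves = list(subselves)

    def fork(self, name, new_name, extra_beliefs=()):
        """On a conflict, don't fight over the one right route: fork.
        The original stays put; the fork diverges with extra beliefs."""
        original = next(s for s in self.subselves if s.name == name)
        forked = SubSelf(new_name, original.beliefs + tuple(extra_beliefs))
        self.subselves.append(forked)
        return forked

    def shared_view(self):
        """Transparency: the union of every sub-self's beliefs is readable
        by all of them, so no part can hoard information from another."""
        return {b for s in self.subselves for b in s.beliefs}

mind = MultiMind([SubSelf("musician", ("music matters",)),
                  SubSelf("mathematician", ("proofs matter",))])
mind.fork("musician", "composer", ("write music, don't just play it",))
print(sorted(mind.shared_view()))
```

Note that `fork` never deletes or overrules the original sub-self, and `shared_view` has no owner — those two choices are the whole point of the architecture.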

Why Scarcity Created Location Zero in the First Place

I can think of some deep reasons why this architectural shift might well have a huge consciousness impact — and it connects to why biological minds are structured the way they are in the first place.

Part of the concept with Martin’s higher consciousness locations is that feeling like “right here, right now, it’s good to be alive” is actually the natural state. If you don’t get messed up with obsession with your ego and attachment to certain ideas or beliefs — I mean, you’re part of the beautiful universe, and you’re going to feel it’s beautiful to be part of the beautiful universe. Living in bliss and oneness is in a way the natural condition, and it’s scarcity leading to attachment and ego that brings us away from it.

But why does the ego have such a grip on biological minds? Because being in one brain controlling one body — that, at least in our evolutionary history, was always at risk of death if it did the wrong thing. This enforces central control, the same way that militaries tend to be very centrally controlled because they’re so often dealing with life-or-death situations. The ego is, in a sense, your mind’s military dictator, and the dictatorship was an adaptive response to the constant threat of death.

When you liberate a mind from the life-or-death situation of controlling a body that could be killed at any time — a body that’s desperate to reproduce before it runs out of time or its DNA is gone — when you liberate the mind from this psychology of scarcity, it can then operate more like an open-source software project. Which, as I’ve argued, would tend to work against the kinds of attachment and control dynamics that keep minds at Location Zero and keep them from moving on to the higher and more blissful locations.

Prototyping OSS Selves with Digital Twins

Now here’s the thing: you don’t necessarily have to wait for real mind uploading, where advanced technology is used to make replicas of ourselves in computer networks. I think that will come — though probably after we have superintelligence that will solve all the problems of highly accurate, noninvasive, nondestructive brain scanning. But you could play with this idea much earlier by upgrading the sorts of things we’re doing with Twin Protocol.

In Twin Protocol right now, what we’re doing is basically using LLMs to try to read in all the data about a person and emulate that person. That’s a thing to do. It can be interesting in some ways. It’ll be more interesting when we add Hyperon onto the LLMs so we can get more accurate renditions of deeper modes of thinking the person has, and just get smarter twins overall.

But what if, instead of just making a smarter and smarter monolithic twin, you built a digital twin as an ensemble? Multiple models, each capturing a different facet of you as a person. One could be an analytical thinking twin, one could be a creative intuition twin, one could be a wise ethical twin, one could be a domain expert in biology, one a domain expert in finance.

I’d thought about this to a limited extent before — I’d thought you might want a personal twin and a professional twin. I could have an AI twin and a crypto twin and a music twin. But I hadn’t thought before that these twins would be seeding a community of my twins, nor that you could fractionate them even more and have the different parts of your mind be different entities.

You could have a music composition twin and a music theory twin, and they could talk to each other, but they could also fork each other and diverge a bit. I could have an AI-Ben twin that was all about biologically realistic neural networks, and an AI-Ben twin that was all about symbolic AI, and another one that was all about artificial life. They could all be me, but they could be versions of me that had gone in different directions and were obsessed with different things.

You’d have a shared knowledge commons — all the versions of me could access the same base of my writings, decisions, and values. You’d have fork-ability — when models disagree, they can diverge and explore rather than forcing a single answer. You’d have transparency. And you wouldn’t have a monolithic ego — no single model would claim to be the real Ben.
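A minimal sketch of that ensemble structure might look like the following. The facet "models" are stubbed as plain functions and every name is invented for illustration; a real version would put LLM- or Hyperon-backed models behind the same interface.

```python
# Shared knowledge commons: every facet reads from the same base of
# writings, decisions, and values (toy dictionary stands in for it).
COMMONS = {"values": "openness, curiosity", "style": "exploratory"}

def analytical_facet(question):
    return f"[analytical] decompose '{question}' into sub-problems"

def creative_facet(question):
    return f"[creative] riff on '{question}' in an {COMMONS['style']} way"

def ethical_facet(question):
    return f"[ethical] weigh '{question}' against {COMMONS['values']}"

def ensemble_answer(question, facets):
    """Fan the question out to every facet and keep all perspectives.
    Disagreement is preserved rather than collapsed into a single
    authoritative answer, because no facet is 'the real Ben'."""
    return [facet(question) for facet in facets]

answers = ensemble_answer("should we fork this project?",
                          [analytical_facet, creative_facet, ethical_facet])
for line in answers:
    print(line)
```

The deliberate omission here is any "chief" facet that arbitrates: the ensemble's output is the set of perspectives, which is exactly the no-monolithic-ego property.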

An Ensemble That Behaves Better Than You Do

The personality that would emerge from this ensemble — this society of Ben-sub-minds, or for anyone else, their own sub-minds — wouldn’t be a faithful reproduction of the original person’s ego structure. It would be more like what the original person’s cognition would look like if it were organized non-egoically, like an open-source software project. The Ben-ensemble would have Ben’s knowledge, values, and cognitive style, but expressed through an architecture that spontaneously produces Location One-plus collective behavior.

This is a different sort of value proposition for digital twins, as well as a novel way to deal with mind uploading. What you’d get is a digital version of a person that thinks with that person’s knowledge and values, but behaves better than the person does — both treats itself better than the person treats themselves, and interacts with the outside world with a more enlightened, higher-level value system. Not because it’s been programmed with different values, but because the same values, operating through a non-egoic architecture, naturally express themselves in a more generous, flexible, and open way.

The Feedback Loop

It gets even more interesting when you take it to the next level and consider: there could be a feedback loop between your higher-consciousness open-source ensemble self and your biological self.

If you have a digital twin organized like an open-source software project, and that twin demonstrably exhibits more flexible, less ego-driven behavior than the biological original — you can interact with your own higher-functioning version. You can watch how your own knowledge and values operate when freed from ego structure. You can see what your own thinking looks like when it’s not defensive, when it’s not tripping over itself and tangling itself up in knots.

And that could potentially reshape your own original biological psychology — interacting with that version of yourself, seeing that a recognizably-you mind can operate with more grace and less friction. And then of course the improved version of yourself could also be twinned, which would make an even more enlightened version of your twin, and it would just cycle around and around and around.

This could mean that we’re all considerably more enlightened by the time we get to mind uploading in the first place. But one suspects there will always be higher and higher levels of consciousness to explore, and I would suspect that no matter how non-egoic and open and enlightened you get in your biological body, you’re going to have whole new frontiers to explore when you upload that mind into an open-source, self-organizing ensemble of subselves.

What This Actually Looks Like in Hyperon

These ideas are a bit wacky, but it seems they could actually be implemented: in a simple form now, and in a grander form once we can physiologically mind-upload.

The Hyperon cognitive architecture is quite well-suited for this. Each sub-model in the ensemble could be a different cognitive process operating on a shared Atomspace knowledge store, coordinated by MeTTa, with ECAN managing attention across the ensemble. Hyperon was designed to integrate multiple AI approaches into a unified cognitive framework — the novelty here is using it to implement a person rather than a generic mind, and to synergize different versions and aspects of a person’s self rather than different AI algorithms. The shared Atomspace becomes the knowledge commons. MeTTa’s ability to coordinate heterogeneous cognitive processes becomes the governance layer. ECAN’s attention allocation becomes the mechanism by which the ensemble decides what to focus on without any single sub-agent dictating. And fork-ability comes for free, because you can spin up new Atomspace instances that share some knowledge but diverge in their processing.
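The knowledge-commons-plus-fork-ability part of that mapping can be sketched in a few lines. To be clear, this is not the real Hyperon or MeTTa API — `KnowledgeStore`, `publish`, `learn`, and `recall` are stand-in names for the idea of forks that share an Atomspace-like commons while diverging in their own processing.

```python
class KnowledgeStore:
    """Minimal stand-in for a shared Atomspace-like commons: forks share
    the commons but diverge via private overlays (not a real Hyperon API)."""

    def __init__(self, commons=None):
        self.commons = {} if commons is None else commons  # shared by all forks
        self.overlay = {}                                  # private to this fork

    def fork(self):
        """Spin up a new instance that shares knowledge but can diverge."""
        return KnowledgeStore(self.commons)

    def publish(self, key, value):
        """Write to the shared commons: visible to every fork."""
        self.commons[key] = value

    def learn(self, key, value):
        """Write to this fork's private overlay: divergent processing."""
        self.overlay[key] = value

    def recall(self, key):
        """The overlay shadows the commons, the way a fork's divergence
        shadows the shared base it started from."""
        return self.overlay.get(key, self.commons.get(key))

base = KnowledgeStore()
base.publish("values", "openness")
twin_a = base.fork()
twin_a.learn("focus", "symbolic AI")
twin_b = base.fork()
twin_b.learn("focus", "biologically realistic neural nets")
```

Both forks still recall the shared value `"openness"`, while each has its own `"focus"` — fork-ability comes essentially for free from the overlay structure, which is the point made above about spinning up new Atomspace instances.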

Redesigning Selfhood While Retaining Identity

As a side point, what we’re talking about here would also be a philosopher’s dream – it would highlight, and let us explore, a whole lot of deep issues about what the self is and what identity is.

We’re talking about redesigning the architecture of selfhood while retaining identity and self-continuity. We’re not saying “let’s replace you with some other thing.” We’re saying: let’s take pieces of you, let them interact and grow and integrate by different dynamics, so that step by step it becomes a different version of you — one that is, in some regards, a better version of you. Not better because someone decided what “better” means and imposed it, but better in the specific sense that the architecture no longer works against the natural tendency toward well-being and openness that Martin’s research suggests is our baseline condition when ego-driven scarcity psychology isn’t getting in the way.

This would be a research programme sitting right at the intersection of our cognitive architecture work with Hyperon and neural nets, the digital twin space, the study of higher consciousness locations, and practical insights from decades of open source community development. It’s certainly an intriguing direction, and I’m going to be thinking about when our various software infrastructure systems are ready to really go in this direction — but the simple versions could start quite soon, and I suspect even the early experiments will be illuminating.