AI Doppelgängers

I had a prophetic idea the other day which I haven't seen anywhere before. I write it here as a warning, but some people will think it's a great thing. In fact, if I had to guess, someone somewhere has not only already thought of this, but is hard at work trying to make it happen. And I wouldn't be surprised to learn that it already exists.

Here's the idea:

Artificial intelligence has already reached, or will soon reach, the point at which it is possible, based on tracking all of your social media posting and commenting, along with all your web-browsing, to generate a bot or avatar which can successfully pose as you on the internet. Whatever agency you yourself exert on the internet--I'm thinking of social media posting and commenting mostly, but this could be extended to other activities like purchasing and...writing blog posts...--the avatar could do, too, and do it just like you would do it.

People who spend a lot of time on social media spend, well, a lot of time on social media. Some find the time sink burdensome but put up with it for professional or social or economic reasons. I can imagine some of these people being attracted to the idea of an internet doppelgänger whose behavior they could monitor and edit when needed, and which could be tinkered with and trained to more closely resemble a real person's internet agency.

Those whose social media use is a significant portion of their recreational and cultural lives--as in, they like it--will probably forgo the virtual doppelgänger: they themselves want to be the ones acting virtually!

Probably, any internet double could be limited in various ways, so that it could, say, respond to customer comments on a Facebook Page and keep up that sense of chumminess so many internet businesses want to display to their customers, but could not make original posts on one's personal Facebook profile. But the unlimited option would be there. If, for social or business reasons, you need to be making pet- or fitness- or food-related posts, your me-bot could make these for you, giving you more time to be with your pet and improve your physique.

The closer we get to genuinely personal internet agency, that is, the closer we get to being ourselves online rather than selling a product or selling ourselves, the less likely it is that we'd want to hand control over to the us-bots. If I'm being myself, I really don't want something else being myself for me, even if it could do so very convincingly on the internet. Someone genuinely willing to give total internet agency over to the doppelgänger could only realistically be imagined as having either zero interest in being themselves online, or zero capacity to do so.

I suppose I understand the zero-interest person, though I'd still advise such a person not to let an AI doppelgänger post things that appear to others to come from the real you. You might not care about that fake thing on the internet--it's not a person, after all--but you should care about the people interacting with it as though it were real.

The zero-capacity person might seem not to be conceivable at all. For what sort of agency would his doppelgänger replicate? It couldn't fake real-him posts if it didn't have a virtual record of him being himself online.

Ah, but suppose he had attempted to be himself on the internet in the past, sufficient to produce, in others, a sense that such and such sorts of posts and comments were the real him. Then the bot could replicate those.

All of this seems rather creepy to me on the supposition that just a few people delegate their internet lives to AI doppelgängers, while the majority of internet agents continue to be real people. But now imagine if we all outsourced our internet lives to these bots, only exerting real agency to edit their behavior, or give them training. Then the rest of our internet lives would be spent simply observing the interactions between all the bots, including our own.

In such a circumstance, I wonder whether we'd form opinions about the real people represented by the bots, or just the bots themselves. There would of course be lots of comparison between the real people we know and the way their bots behave. With friends, we could offer feedback about how their bots are coming across (Hey, Jack, your-bot's post last week seemed just a little too sardonic to be you; you might want to talk to it about that...).

But I also wonder what the point would be, if everyone's internet agency were given over to the we-bots, and we all knew that this was the case. It would become a little virtual world which would cease to be interesting as a representation of ourselves in any direct sense. But it would probably remain very interesting as a cultural artwork which reflects us to ourselves in a less direct way, the way that a novel can portray real aspects of a real culture despite being fictional and concerned with concrete fictional people rather than abstractions.

The bots might decide to make their own bots, and then observe their bots together, the way we would observe our-bots.

But I think that, if universally adopted, it would become preposterous to take the we-bots' agency seriously as a substitute for our own agency. At best, monitoring the we-bot world would become a wildly popular social-media version of reality television.

But this reason alone would prevent a good many people from delegating internet tasks to an AI doppelgänger in the first place, or from permitting the doppelgänger even to be made--if it lay in their power to prevent this.

This check on universal adoption would therefore leave open the creepy conditions in which we could be interacting with a bot while thinking we were interacting with a person. And that's too bad.