THE SINGULARITY IS NOT THE FUTURE. IT IS THE PAST.
I remember first having the conversation about immortality in 1995. Much like the conversations my buddies and I had about the coming attack of killer bees and imminent nuclear war in the ’70s, the talk about living forever felt just as imminent. It wasn’t “if” it was going to happen, but “when.”
To accomplish this medical miracle, humans would simply connect their brain to a computer of some sort, and the machine would map their mind and forever encode their behavior, thoughts, feelings, and soul into binary. I have heard this referred to as The Singularity: the stage at which human and machine combine into a single being. This may not be what those who first coined the term meant, but it seems to be the common understanding of it these days (at least in the circles I run in). Perhaps a better or more straightforward term for this is “Mind Uploading.” From Wikipedia:
“Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.”
Whatever you want to call it, and whether or not it is possible to scan a brain and upload it to a computer, is a moo point (Huh. A moo point? Yeah, it's like a cow's opinion. It just doesn't matter. It's moo.) And it is a moo point because it’s already happened. It’s the past. Not the future.
In 2015, researchers at Cambridge and Stanford published a study on how well the Facebook (now Meta) algorithm knew its users. A New York Times article about the study concluded:
“Given enough data, the algorithm was better able to predict a person’s personality traits than any of the human participants. It needed access to just 10 likes to beat a work colleague, 70 to beat a roommate, 150 to beat a parent or sibling, and 300 to beat a spouse.”
The algorithm at Facebook knows you better than all of your friends and family… probably better than all of them combined… and probably better than you know yourself.
Additionally, marketers have been collecting data on you for years. Through your web searches, time spent on pages, and other signals, corporations have a mountain of data that can predict your buying behavior and the motivations that drove you to make those purchases. And it’s no longer just information gathered through a web browser. We have all been creeped out when, after discussing a vacation with a friend, the ads we see start showing discounted flights and hotel deals. Our devices are listening. Always.
You may not have had sensors connected to your head that read your brain waves to build a digital emulation of yourself, but whether you like it or not, “you” have already been reborn digitally. The singularity has already happened.
If you are like me and the rest of the capitalist consumer class, this probably isn’t a big deal. So what? So what if a binary version of me exists and some companies are trying to make a buck off it by peddling their goods and services to me? Yeah, it’s probably not a big deal… at least not right now.
Bill Gates speculated in his November 2023 post, “AI is about to completely change how you use computers,” that in the future we will all have our own personal AI assistants. These assistants will not only help with requests like sending an email or making dinner reservations, but they will be “proactive.” They will be “capable of making suggestions before you ask for them.” Gates continues, “They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”
So based on a digital version of you, your virtual assistant will be able to proactively help you coordinate, plan and execute your life. But, as Bill suggests, you will always have agency over the outcomes and make the “final decisions” about what you actually do. Sounds nice and convenient.
Let’s set aside that view of the singularity and AI for a brief moment and talk about my daughter. I recently went to visit her at her university, where we were going to watch a football game. I picked her up from her sorority house and we drove to the game. When I asked her the best way to get to the stadium, she said something that shocked me. She said she didn’t know. She continued, “I just use Google when I need to get around campus.” My jaw dropped and I was dumbfounded. How could she not know how to get around a place where she had lived for three years? What exactly was my tuition money going towards?
It turns out that it is not just my daughter who has lost her sense of direction. Younger generations have stopped remembering how to get places because they have full-time access to applications such as Google Maps that can not only give you directions to wherever you are going, but also tell you the best route based on traffic conditions. This phenomenon even has a name: the Google Effect. According to The Decision Lab:
“The Google effect, also known as digital amnesia, is the tendency to forget information that is readily available through search engines like Google. We do not commit this information to our memory because we know that this information is easy to access online.”
This seems intellectually lazy to me. It’s the equivalent of forgetting how to cook because there are seven fast-food joints down the street that will give me all the fried food I desire. But maybe I am just a Luddite.
Nevertheless, Luddite or not, as a people, we have grown to depend on technology in ways that were inconceivable only a few decades ago. And this is what is concerning about the future of the singularity. Yes, our AI assistants will allow us to make the “final decisions,” but who is selecting the options we get to choose from? It’s like a child choosing between the broccoli and the cauliflower their parents said they had to eat. It feels like they are in control. But in reality, their parents are choosing healthy vegetables over Big Macs for them.
This is not to say that personal assistants are a sinister conspiracy by technologists to enslave the human race (but I am not saying that they aren’t). Even an AI personal assistant with the best of intentions robs us of new experiences. If AI makes suggestions based on previous behavior, likes, or dislikes, what happens to the things that don’t fit into that algorithm? Just because I hated chocolate-covered grasshoppers doesn’t mean that I won’t love caramel-toasted fire ants. Moreover, we are defined just as much by our negative experiences as we are by the positive ones. Eating bad food. Seeing a horrible movie. Getting lost. All of those things are an essential part of the human experience. What happens to those defining moments?
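To make that concrete, here is a minimal sketch of the kind of filtering I’m worried about. It assumes a hypothetical assistant that ranks new experiences by how much they overlap with things I’ve already liked; the items, the tags, and the scoring rule are all invented for illustration, not taken from any real product.

```python
# Toy illustration only: a hypothetical "proactive assistant" that ranks new
# experiences by how much they resemble my past likes. The items, tags, and
# scoring rule are made up for this sketch.

past_likes = {
    "weissbier in a beer garden": {"beer", "outdoors", "social"},
    "schnitzel at the usual place": {"food", "familiar", "indoors"},
}

candidates = {
    "the same beer garden again": {"beer", "outdoors", "social"},
    "caramel-toasted fire ants": {"food", "insect", "novel"},
    "getting lost in an unfamiliar neighborhood": {"unplanned", "solo", "novel"},
}

# Pool every tag from everything I have ever liked.
liked_tags = set().union(*past_likes.values())

def familiarity(tags):
    """Count how many of a candidate's tags overlap with my history."""
    return len(tags & liked_tags)

# The assistant "proactively" suggests the most familiar option first.
for name, tags in sorted(candidates.items(), key=lambda kv: familiarity(kv[1]), reverse=True):
    print(familiarity(tags), name)
```

Run it and the familiar beer garden scores a 3, the fire ants scrape by with a 1, and getting lost scores a 0. Anything with a score of zero simply never gets offered, no matter how much I might have ended up loving it.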
Between 2011 and 2013, I more or less lived in Munich. One of the things I loved most about my time there was the weekends when I would just wander the streets alone and stumble into various places and activities that I found interesting. I got lost in a city I didn’t know, where I was a stranger to everyone. And I loved it. Those are some of my favorite memories. In a future shaped by our uploaded singularity, these unscripted moments don’t happen. Our lives are structured by an algorithm. And when that happens, who is the robot? The machine that runs the algorithm? Or the person who lives life according to one?