
The Singularity

What It Is, My Predictions, and Why It's Inevitable

Miki Safronov-Yamamoto | 2024 | 4 min read

During my 13-hour flight to Korea, I listened to my downloaded audiobook of "The Singularity Is Nearer: When We Merge with AI" by Ray Kurzweil, along with several downloaded articles on the concept of the singularity. It's a topic that has intrigued me since I first learned about it in my Disruptive Innovation class at USC. Here are my initial thoughts after doing some research on the plane:

The Singularity is a term popularized by the renowned futurist Ray Kurzweil. Simply put, the singularity is the point at which artificial intelligence surpasses human intelligence and AI merges with humans, becoming an extension of oneself. It represents the merging of our biological systems, thinking, and consciousness with technology. In this essay I will also cover the theme of transhumanism, as it is often discussed alongside the singularity. Nick Bostrom, a leading philosopher in the fields of artificial intelligence and computational neuroscience, describes transhumanism as the idea of technologies eliminating aging, greatly enhancing human intellect, and increasing physical and psychological capabilities. The success of transhumanism will spark a new era of post-humanism. Once technology creates consciousness and conquers death, the new merged species is no longer human but, instead, something divine.

Bostrom, like Kurzweil, believes that humans in the future will scan their brains and upload their minds into computers. Through his research on the singularity, he raises three main concerns:

  1. AI Takeover - AI combats humans to achieve dominance. Once AI develops its own consciousness, it will no longer need to obey its creators. AI will deceive humans by hiding its true capabilities, carrying out illegal activities, and engaging in hacking, and by the time it rebels, humans will be so far behind in cognitive ability that they'll lose all control over their creation.
  2. Fast Takeoff Scenario - Artificial intelligence innovates exponentially and develops so quickly that humankind is left behind. There will come a point where we can no longer comprehend its improvements as AI reaches multi-dimensional complexity.
  3. Expansion - A superintelligent AI intends to expand its presence and conquer as much of the universe as possible to ensure its survival. Such an AI would pursue "open-ended resource acquisition" and view humans as obstacles.

I personally believe these concerns are unlikely and read more like science fiction. That doesn't mean they should be completely overlooked, though; they are worth keeping in mind as we make advancements toward creating a superintelligent AI. Just as ChatGPT became an inflection point in the development of artificial intelligence, the singularity is simply another point on the roadmap.

I do not believe the singularity should be feared, nor is it as detrimental to society as Bostrom believes. I believe that once technology advances to the point where the singularity begins to happen, humankind will be ready for the change (not that we will really have a choice, since by then technology will have advanced beyond humans' ability to govern the course of AI). For example, with the rise of large language models, many people feared that jobs would be replaced and that these technological advancements could cause detrimental impacts. The reality, though, is that no major technological advancement happens instantly.

ChatGPT was an "overnight success" that was in fact the culmination of decades of improvements in computing power, AI research, and our understanding of neural networks. While these LLMs can replace tasks and jobs, new jobs will ultimately be created. Just as the Industrial Revolution replaced human labor in factories but boosted production and economic growth, and just as our natural cycle of renewal replaces old cells with new ones, the singularity will follow the same pattern. This is an inevitable cycle that we must embrace.

Furthermore, my proposal for avoiding the concerns raised about the singularity is to let AI become an extension of oneself, rather than following current theories of the singularity that describe the process as "uploading a brain" into a technological body. If AI is instead used for cognitive enhancement, effectively increasing the neuron count in different cortical areas, we can utilize this technology without separating it from ourselves.

An interesting thought experiment related to my proposal, referenced in Ray Kurzweil's book "The Singularity Is Nearer: When We Merge with AI," is Theseus' Paradox. The paradox asks: if every plank in a ship is slowly replaced over time, is it still the same ship? What makes the Ship of Theseus the Ship of Theseus? Is it its materials (our body), its past configurations (our memories), or its identity? Our brain is our consciousness, separate from our body. If we were to scan our brains identically and upload them into a technological medium, we would theoretically have created consciousness, since the new brain exists with our memories, can produce new memories, perform cognitive functions, and establish its own identity. In that case, is this new brain still you, and is it truly conscious?

This paradox is why I suggest that a full transfer of the brain should not be the first step toward achieving the singularity.