Warning: The following blog post has not been proof-read due to the author’s commitment to posting and writing.

Since it’s the new year, I had to do my predictions for the next 6 months. But strangely, I am having a hard time. Moreover, this is probably the first time in my life that I cannot imagine a realistic future for myself. Weirdly, I am quite confident about how the future of the world might look (I am not saying it’s fixed, but I do believe the probabilities are concentrated on a few possible trajectories).

Early Dreams: The Physics Path

When I was a kid, I imagined my life as a physicist doing the following:

  1. Observe an interesting real-world phenomenon
  2. Try to explain it with general principles
  3. Rinse and repeat until I reach a few general principles that explain all phenomena, or I die

It sounds funny now, but for the first 13 years of my life that’s what I wanted to do, until I realized I had to take living standards into consideration.

The Pragmatic Pivot

Then I optimized to maximize living standards and economic freedom:

  1. Learn software engineering in the ideal country to do so (the USA), with its economic freedoms
  2. Get a decent software job in a country with a high standard of living and respectable economic freedoms
  3. Save money, then retire to learn and do physics

The AI Awakening

During my CS bachelor’s, I was introduced to formal and fuzzy logic. While it was highly interesting, I didn’t consider these real-world phenomena. I continued on with my master’s, but in the back of my mind I had observed neural networks, which I started perceiving as an interesting phenomenon. Then there were two key insights:

  1. Scaling Laws: For me this established that learning in neural networks is a predictable phenomenon, not mere memorization into latent high-dimensional spaces

  2. Demis Hassabis’s quote: “Let’s solve AI and then use AI to solve everything else.”
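The predictability in the first insight can be made concrete with a small sketch. Scaling laws are typically fit as power laws, which become straight lines in log-log space. The numbers below are synthetic, chosen only to illustrate the shape of the procedure, not measured from any real model:

```python
import numpy as np

# Hypothetical (synthetic) data: losses generated from a known power law
# L(N) = a * N^(-alpha), mimicking the shape of empirical scaling curves.
a_true, alpha_true = 10.0, 0.3                 # illustrative values, not measured
n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])       # model sizes (parameter counts)
loss = a_true * n ** (-alpha_true)             # synthetic losses

# A power law is linear in log-log space: log L = log a - alpha * log N,
# so an ordinary least-squares line fit recovers the exponent.
slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
alpha_fit, a_fit = -slope, np.exp(intercept)

# Extrapolation is the point: the fitted law predicts loss at an unseen scale.
n_new = 1e11
loss_pred = a_fit * n_new ** (-alpha_fit)
```

The whole appeal, to me, is in that last step: a two-parameter fit on small models makes a quantitative prediction about models you have not trained yet, which is what makes the phenomenon feel law-like rather than memorization.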

So I switched back to that original route, and I am on it now.

The Research Cycle: Beauty and Despair

Now I am stuck on an established but unexplained phenomenon (scaling laws), and my cycle goes like this:

  1. Get more data to ensure the phenomenon is real
  2. Come up with a hypothesis to explain it; feel the excitement
  3. Test the hypothesis, which comes up false or unclear; feel the despair
  4. Weirdly, maybe I am a masochist, but I get more motivated. I think it’s because I believe the hypothesis that finally explains the phenomenon has to be an amazingly simple connection I have not made yet, and it would be wonderful to gain a new perspective on neural networks. If it turns out the cause was some random hyperparameter I set incorrectly (I have checked this again and again, but it is my worst nightmare and anxiety), I am pretty sure I would fall into dismay. Either way, I guarantee I will think I am the world’s biggest idiot once I see it.
  5. Rinse and repeat (I am currently on cycle 4). The only progress I have made is getting more data and confirming that the phenomenon is replicable and general.

The Superintelligence Question

But recently I started reading Shane Legg’s dissertation on machine superintelligence and trying to read up on the surrounding literature. I do think superintelligence should be taken seriously (my expected belief is that it will be real by 2040, though I have high variance regarding its efficacy).

So the question is, what will I do in the future?

Software Engineering’s Uncertain Future

Software engineering? I don’t believe there will be new software engineering jobs, since experienced developers can make the high-level software architecture decisions and oversee the implementation (with the rest of the software implemented by AI agents). Why would anyone hire newcomers?

The Future of Research

How about doing what I am doing now? I think I am blessed by some benevolent god (and certainly blessed by my advisors) who has decided to allow me to do what I am doing now. For only the second time in my life, I feel I am breathing air as a free man, unencumbered by educational or professional commitments.

But realistically, would industry hire researchers? My guess is that companies can now standardize industrial research with AI agents. For AI research, let’s break it down into two categories:

  1. For empirical research (where, in industry, the problem is easy to identify and define), AIs can try a large space of simple ideas and drive industrial research for standard products.

  2. For theoretical work, AIs can certainly carry out the mechanical work of proving theorems. But it’s unclear whether AIs would be able to come up with the frameworks or the right intuitions for building theory about a topic. This is simply unclear to me; I do not mean it as a fundamental limitation of AI.

The Path Forward: Superintelligence Research

So going back: do I believe industry will hire droves of researchers in 2040, as it does now, to work on AI? My answer is only to work on superintelligence, possibly on the following problems:

  1. Safety: How do you define safety for, and align, a superintelligence?

  2. Theoretical Underpinnings: For a system that has seen all of the internet, we will not be able to rely on empirical observations to certify its robustness. I do believe that after 2030 we will see generally accepted theory being built on how learning actually occurs in AI, on the safety and alignment of AI (beyond the current empirical work we have now), and on the intent and motivations of sustained agents that pursue broad goals rather than specific outcomes.

Conclusion: Beliefs Updated

To be honest, when I started this essay, I was feeling quite dismayed about my future employment options. By the end of writing it, I feel quite satisfied. (Beliefs updated.)