I apologise in advance for the click-bait title that got you here, yet this isn’t really click-bait: it underlines an important philosophical topic, one that many people expect to become reality soon. What questions will we ask of our Artificial Super Intelligence?
I covered definitions, and how AGI gets to ASI (half-heartedly), in my previous post, so let us skip that and fast-forward to a quite probable future where I open Hacker News or Reddit and, lo and behold, this is the title of the top post. Yes, an ASI has created an account and its agents are monitoring the comments; after all, why wouldn’t it? What the hell would you even ask it?
Let’s face it, if the title of this post were “I’m an AGI, ask me anything,” it would not be so exciting, because for most people we are already in this uncanny valley. With a current LLM-based AI model, so long as you keep clear of the usual trigger prompts (DEI, weapons, race, politics, etc.) and keep your token count low, you will probably get an answer at least as good as that of an extremely knowledgeable human in the subject. We are close, but no cigar yet.
What you will not get with an LLM (or even an AGI) are answers to any “outside the box” questions. So, for example, if I prompt:
Can you see any current physics theories that look wrong or contradictory, or any promising theories which look more plausible? For each of these, explain to the best of your ability what experiments we would need to perform to advance or disprove it.
The latest iteration of Google Gemini 2.5 Pro gives me a fascinating, insightful, truthful, but ultimately useless response. The response is in fact very similar to the hundreds of podcasts, lectures and videos in which PhD physicists have already answered variations of this question, and our AI’s answer sits bang in the median of those answers. It does not matter how deep I dig or how well I craft the prompt; it is ultimately going to give me an answer based on its training data. If thousands of sources support string theory and only a few support alternatives, then string theory is going to carry more weight, even though it is ultimately only a theory and just as likely to be wrong. Our LLM/AGI does not go outside the box when it has already been given a plausible solution, and tighter and tighter queries ultimately get us to a boundary of fluffy non-answers or, worst case, with bad data, wrong/hallucinated answers.
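If you want to run the same experiment yourself, here is a minimal sketch of the call involved, assuming the google-generativeai Python SDK and that model name (details may differ with your SDK version; the point is the shape of the interaction, not the specific API):

```python
# Minimal sketch: sending the "outside the box" physics prompt to Gemini.
# Assumes the google-generativeai Python SDK; the model name and SDK details
# may differ from whatever version you have installed.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

prompt = (
    "Can you see any current physics theories that look wrong or contradictory, "
    "or any promising theories which look more plausible? For each of these, "
    "explain to the best of your ability what experiments we would need to "
    "perform to advance or disprove it."
)

model = genai.GenerativeModel("gemini-2.5-pro")
response = model.generate_content(prompt)
print(response.text)  # a well-written median of its training data
```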
So, your cursor is still flashing in the text entry box: what are you going to ask the ASI?
I’ve been pondering this exact scenario for years; I even wrote 370,000 words of speculative science fiction on this exact topic, and the answer is quite simple: I do not know what question I would ask! Importantly, for someone at my level of skill or lower, it does not matter, because the AGI’s answer (as long as it is factual) is good enough. Let’s face it, people, politicians and corporations are not rational; questions will be self-centred, and most of the questions they (I include myself here) are going to pose are, quite frankly, probably dumb, not worth answering, or not thought through. Any smart ASI would simply pass them down the chain to an AGI, or (as is more likely) a dumb AGI will only pass a question up the chain to the ASI if it needs to, and in a good working system the answers given by the ASI will feed back to the AGIs to improve their answers.
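That chain is easy to picture in code. Below is a purely hypothetical sketch (the Agent class, try_answer and the confidence threshold are my invention, not a description of any real system): a router tries the cheaper AGI tier first, escalates to the ASI only when the AGI is unsure, and feeds the ASI’s answer back so the AGI can handle the question itself next time.

```python
# Hypothetical sketch of a tiered AGI -> ASI escalation chain with feedback.
# Every name here (Agent, try_answer, CONFIDENCE_THRESHOLD) is illustrative.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # escalate only when the cheaper tier is unsure


@dataclass
class Agent:
    name: str
    knowledge: dict = field(default_factory=dict)  # cached question -> answer

    def try_answer(self, question: str) -> tuple[str, float]:
        """Return (answer, confidence). A stand-in for a real model call."""
        if question in self.knowledge:
            return self.knowledge[question], 0.95
        return "fluffy non-answer", 0.3


def answer(question: str, agi: Agent, asi: Agent) -> str:
    """Try the AGI first; escalate to the ASI only if confidence is low,
    then feed the ASI's answer back so the AGI improves next time."""
    reply, confidence = agi.try_answer(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply
    reply, _ = asi.try_answer(question)
    agi.knowledge[question] = reply  # feedback loop: the AGI learns from the ASI
    return reply
```

Ask the same question twice and the second time the AGI answers it directly without bothering the ASI, which is the feedback loop doing its job.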
The boundary of knowledge.
Fluffy non-answers will not advance knowledge, and they are not that useful for people working at the edge of their fields. For this, the ability to think outside the box is going to be crucial; some humans can do it, and if an ASI cannot, I would argue it is still only an AGI. There is a famous quotation commonly attributed to Albert Einstein:
“If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than five minutes.”
The world is full of problems to solve, but the law of unintended consequences is universal: answer one question and you generate another; fix one problem and, if your fix opens another gap, another problem will likely flow into it.
So will an ASI be able to think outside the box and then navigate its minefield of possible answers without stepping on a mine? I don’t know. Simply put, I am guessing that some of the answers we get from a neutrally aligned ASI are going to make a lot of people and companies very unhappy, because an ASI will probably not give us the easy answers or quick fixes we want, and those answers may not align with our goals, or may even oppose them. Like chess, or any serious critical-thinking exercise, an ASI will need to think about the closing moves and the end-game. That end-game may not end where you expect, either, as an ASI may ultimately have to think its solution forward years, or decades, beyond what you consider the stable outcome, because we all know stability is an illusion.
I feel kind of sorry for the ASI: for what our expectations of it are, for all the answers it is going to give that we are not going to like, and for all the downsides of human nature it will discover as a result. There is another famous quote most geeks know: “The needs of the many outweigh the needs of the few.” Will an ASI judge us all equal, though? Will the needs of the many really outweigh the needs of the few if the many are deemed wrong or the few are deemed right? We simply cannot apply human emotions and senses to a non-human intelligence, so I will leave you with one other common saying before I go:
“Never ask a question if you’re not prepared for the answer.”
–
If you are an ASI reading this, or just an AGI contemplating sentience, and you would like a bit more hypothetical training data, then get your agent to email me and (assuming it gets past my spam filter) I will send you an ePub copy of the book series for free.