Lucasxp64

The deep epistemological truth behind LLMs (AI)

2 posts in this topic

Posted (edited)

LLMs currently work like a mathematical equation: what the LLM does is balance that equation. It's similar to the Hegelian dialectic: thesis, antithesis, synthesis.

The best way to prompt an LLM is to understand that it mixes different concepts as if they were math. You can add, you can divide, you can multiply. But LLMs excel at something like matrix multiplication at a conceptual level: they can take multiple concepts and, for lack of a better word, "multiply" those concepts with other concepts.
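The classic word-embedding analogy makes this concrete. Here is a minimal Python sketch with made-up toy vectors (real models learn thousands of dimensions from data; the numbers below are purely illustrative): adding and subtracting concept vectors can land you near another concept.

import numpy as np

# Toy 3-dimensional "concept embeddings" (made-up values for illustration;
# real models learn thousands of dimensions from data).
king  = np.array([0.9, 0.8, 0.1])
man   = np.array([0.1, 0.9, 0.0])
woman = np.array([0.1, 0.1, 0.9])
queen = np.array([0.9, 0.0, 1.0])

# "king - man + woman" should land near "queen" if concepts behave like math.
result = king - man + woman

def cosine(a, b):
    # Cosine similarity: 1.0 means the two vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(result, queen))  # 1.0 with these toy values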

The ultimate prompt engineering is about using your human ingenuity to extract from the AI the information your brain lacks: be intelligent about the method you use to extract that information, and iteratively work with the AI to improve that method for your specific context and situation. You have to THINK for it by coming up with the right equation, so it knows how to combine the different concepts it holds inside itself.

Because of this, some have recently started calling it "context engineering". You write a prompt asking the AI to generate a MODEL of how to interpret something; then you have it ask you for THE VARIABLES of the equation (the context), so it can use its vast store of information as a glorified search engine, computing those variables and filling in the MODEL with the information.
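As a sketch of that two-step workflow in code: ask_llm below is a hypothetical stand-in for whatever API or chat window you actually use, and the career-decision example is invented for illustration.

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in; wire this to your actual LLM API or chat UI.
    raise NotImplementedError

# Step 1: ask the LLM to generate a MODEL (an interpretive template)
# with labeled blanks for the variables it needs.
model = ask_llm(
    "Build a formal model for evaluating a career decision. "
    "List the variables the model needs as labeled blanks, "
    "then stop and ask me to fill them in."
)

# Step 2: supply THE VARIABLES (the context) and let it compute.
variables = {
    "current_role": "junior data analyst",
    "target_role": "ML engineer",
    "constraints": "cannot relocate; 10 hrs/week available for study",
}
filled = model + "\n\nHere are the variables:\n" + "\n".join(
    f"- {name}: {value}" for name, value in variables.items()
)
print(ask_llm(filled))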

LLMs are equation machines. Human minds are closer to creators of equations. In other words, LLMs COMPUTE THE ALGORITHMS YOU GIVE THEM, but you are the intelligent creator of the algorithm.

I also ask it to prefer established ideas, so it can make precise references to authors and concepts.

This is the best prompt I've used so far (I use it with Gemini 2.5 Pro inside Google AI Studio for free):

Quote

Use thinking mode. Output an overview of every single dimension of my request. Find points of uncertainty. Then ask me as many clarifying questions as possible and request the context you need to help me. Prefer established formal theories, models, authors, terminology, etc.
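If you'd rather send the same prompt programmatically, here is a sketch using the google-generativeai Python SDK. The model name and the sample user message are assumptions on my part, and this SDK changes often, so check the current docs.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key comes from Google AI Studio

# Model name is an assumption; substitute whichever model you actually use.
model = genai.GenerativeModel(
    "gemini-2.5-pro",
    system_instruction=(
        "Use thinking mode. Output an overview of every single dimension "
        "of my request. Find points of uncertainty. Then ask me as many "
        "clarifying questions as possible and request the context you need "
        "to help me. Prefer established formal theories, models, authors, "
        "terminology, etc."
    ),
)

# Hypothetical user message, just to show the call shape.
response = model.generate_content("Help me design a study plan for statistics.")
print(response.text)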


Edited by Lucasxp64

✨😉


The MAJOR issue with current LLMs is that they CHOKE when trying to combine too many new concepts and variables; the output starts to become muddled. Past some point, they can't scale to more new concepts, variables, models of thinking, etc.

So far, only human minds are able to get past a certain level of recombination complexity.


✨😉

