This example is very primitive. It does not include a completely autonomous explanatory-knowledge-creating system. It is meant to demonstrate that an LLM can query its own interrogative knowledge structure, use a simple prompt to create a seed statement, and then use that statement to assemble coherent assertive statements and construct an explanatory world model. Much more engineering would be required to create something like AGI, if the method I described can actually solve the problem. The team I know of with the most experience constructing something like I have described is Subbarao et al., who have addressed many technical challenges in generating assertive knowledge using an LLM. I believe the “Design Principles for an Explanatory World Model” that I described may be helpful to someone seriously intending to implement the concept.
Kickstarting an assertive knowledge structure using an LLM (a high dimensional interrogative knowledge structure) allows an explanatory assertive knowledge structure to be generated within a narrow range of trial and error (like genetic mutation, but constrained by the approximate knowledge structure already learned by the LLM), producing a statement structure that is increasingly consistent with the physical world. At the beginning, the explanatory knowledge structure has little or no knowledge; it must naively accept statements generated for it by the LLM. Later, the LLM simulates the physical environment and the explanatory knowledge structure learns from it (the LLM may initially need to assist it with the mechanics of learning). The explanatory knowledge structure passively learns (statement addition) from the LLM as it adds context to its knowledge structure. Then, the explanatory knowledge structure compares the computations produced by its own explanatory structures to the data from its environment (outputs of the LLM) in order to optimize toward creating better explanations (statement substitution). Explanations are refined to better match the global intuitions of the LLM.
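To make the overall loop concrete before walking through the steps, here is a minimal Python sketch of the process just described. The helper functions (`ask_llm`, `formalize`, `derive_questions`, `conflicts_with_llm`) are hypothetical placeholders for components that would need serious engineering, not an implementation of any existing system.

```python
# Minimal sketch of the kickstarting loop. `ask_llm`, `formalize`,
# `derive_questions`, and `conflicts_with_llm` are hypothetical helpers; the
# LLM supplies both the raw statements and the simulated "environment".

def kickstart(genesis_prompt, ask_llm, formalize, derive_questions,
              conflicts_with_llm, rounds=10):
    knowledge = []                                 # the explanatory (assertive) structure
    seed = formalize(ask_llm(genesis_prompt))      # seed statement (Steps 1-2)
    knowledge.append(seed)

    for _ in range(rounds):
        # Statement addition: passively accept contextual statements from the LLM.
        for question in derive_questions(knowledge):
            knowledge.append(formalize(ask_llm(question)))

        # Statement substitution: replace statements whose consequences conflict
        # with the LLM's intuitions (the stand-in for the physical environment).
        for i, statement in enumerate(knowledge):
            if conflicts_with_llm(statement, knowledge):
                knowledge[i] = formalize(ask_llm(f"Refine this statement: {statement}"))
    return knowledge
```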
Step 1: Seed prompting
A genesis prompt stimulates an LLM to compose knowledge stored in its feature (interrogative knowledge) structures into natural language. The genesis prompt can be about anything connected directly or indirectly to the physical world; genesis prompts referring to features that the LLM has strong ‘intuitions’ about (e.g. common human knowledge) may generate seed statements that ‘grow’ most successfully.
Example:
- Genesis prompt: “What is a planet?”
- ChatGPT 3.5 output: “A planet is a celestial body that orbits a star, typically a star like our Sun, and exhibits several defining characteristics: 1. Orbiting a Star…”
The response to the prompt is stored and returned to the LLM for 1) formalization and 2) contextualization.
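As a hedged illustration, seed prompting could be as simple as the snippet below, which uses the OpenAI Python client as one possible backend (any LLM interface would do; the function name and model choice are assumptions for illustration):

```python
# Minimal seed-prompting sketch. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def seed_statement(genesis_prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a genesis prompt to the LLM and return the raw seed text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": genesis_prompt}],
    )
    return response.choices[0].message.content

seed = seed_statement("What is a planet?")   # stored for formalization in Step 2
```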
Step 2: Statement formalization, contextualization, and integration
LLMs are effective translators. They can translate natural language into formal language (e.g. mathematical statements, PDDL statements, computer programs).
Questions are automatically derived from the seed statement (or from the informal natural language output that the seed is translated from). The questions (e.g. “What is a celestial body?”, “What is the sun?”, “What is a star?”) are used to further prompt the LLM to generate contextual explanatory knowledge around the seed. The LLM can produce more contextual natural language statements about the seed statement.
Example:
- Context prompt: “What is a star?”
- ChatGPT 3.5 output: “A star is a massive, luminous celestial object primarily composed of hydrogen and helium, held together by its own gravity. Stars are the fundamental building blocks of the universe.”
- Next context prompt: “What does massive mean?”
- ChatGPT 3.5 output: “‘Massive’ is a term used to describe an object or entity that has a large amount of mass. In physics, mass is a fundamental property of matter ...”
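A rough sketch of how the context questions themselves could be derived automatically is below. The LLM is asked which terms in the seed statement are still undefined, so no outside knowledge is required; the prompt wording and the naive question template are illustrative assumptions.

```python
# Hypothetical sketch of automatic context-question derivation.

def derive_questions(seed_statement, ask_llm, max_terms=5):
    prompt = (
        f"List, one per line, up to {max_terms} terms in the following "
        f"statement that have not yet been defined:\n{seed_statement}"
    )
    terms = [line.strip("-*• \t") for line in ask_llm(prompt).splitlines() if line.strip()]
    return [f"What is {term}?" for term in terms]

# derive_questions("A planet is a celestial body that orbits a star ...", ask_llm)
# might return ["What is a celestial body?", "What is a star?", ...]
```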
The natural language statements output by the LLM can then be translated by the LLM into concise formal statements.
Example formal statements derived from LLM outputs (I manually produced the statements below to save time, but an LLM should be able to perform this task automatically):
- “Planets and stars are celestial bodies.”
- “Celestial bodies can orbit.”
- “The Sun is a star.”
- “Celestial bodies are composed of matter.”
- “All matter has mass.”
Notice that the terms “celestial body” and “celestial object” would need to be recognized by the LLM as having the same meaning for the above formal statements to be produced. With these new contextual statements, an LLM can begin composing the statements into an assertive knowledge structure. Ideally, I expect this structure will be constructed using a common general-purpose programming language.
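As a hedged illustration of what such a structure might look like in a general-purpose language, the statements above could be stored as simple typed records; the class names and string labels below are assumptions for illustration only, not a fixed schema.

```python
# One possible representation of the formal statements in Python.
from dataclasses import dataclass

@dataclass(frozen=True)
class IsA:                      # "Planets are celestial bodies."
    subclass: str
    superclass: str

@dataclass(frozen=True)
class HasProperty:              # "Celestial bodies can orbit."
    concept: str
    predicate: str

knowledge = [
    IsA("planet", "celestial body"),
    IsA("star", "celestial body"),
    IsA("the Sun", "star"),
    HasProperty("celestial body", "can orbit"),
    HasProperty("celestial body", "is composed of matter"),
    HasProperty("matter", "has mass"),
]
```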
Below is an example statement composition output generated by ChatGPT using the above input statements:
Let's define the symbols:
Let P represent planets.
Let S represent stars.
Let C represent celestial bodies.
Let O represent the property of being able to orbit.
Let M represent matter.
Let m represent the property of having mass.
Now, we'll convert each statement:
Planets and stars are celestial bodies.
P⊆C and S⊆C
Celestial bodies can orbit.
∀x∈C, x has property O
Celestial bodies are composed of matter.
∀x∈C, x is composed of M
All matter has mass.
∀x that is M, x has property m
Combining these statements while preserving correctness:
(P⊆C and S⊆C) and (∀x∈C, x has property O and x is composed of M) and (∀x that is M, x has property m)

The system will require automated methods for organizing and refactoring knowledge statements within the explanatory model.
Step 3: Error detection
An automated system can be used to detect and filter statements with syntax errors and internal logical conflicts (like automated sanity checking, or like VAL for a PDDL model). The LLM may need prompting strategies to resolve factual conflicts between statements. A sketch of such a filtering pass follows, before the worked example.
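The sketch below is illustrative only: a parser (a stand-in for a real validator such as VAL) drops statements with syntax errors, and the LLM itself is probed for pairwise factual conflicts. All function names are assumptions.

```python
# Illustrative error-detection pass: syntax filtering plus LLM conflict probing.

def filter_statements(statements, parse, ask_llm):
    valid = [s for s in statements if parse(s) is not None]   # drop syntax errors
    conflicts = []
    for i, a in enumerate(valid):
        for b in valid[i + 1:]:
            answer = ask_llm(
                f"Can these two statements both be true?\n1) {a}\n2) {b}\n"
                "Answer yes or no."
            )
            if answer.strip().lower().startswith("no"):
                conflicts.append((a, b))      # candidates for resolution prompting
    return valid, conflicts
```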
Example using ChatGPT 3.5:
Q: What is a star?
A: A star is a massive, luminous celestial object primarily composed of hydrogen and helium, held together by its own gravity. Stars are the fundamental building blocks of the universe ...
Q: What are galaxies?
A: Galaxies are vast systems of stars, stellar remnants, interstellar gas, dust, and dark matter bound together by gravity. These immense structures exist throughout the universe and are the fundamental building blocks of the cosmos ...

If these natural language outputs were formalized, they may create a factual conflict (i.e. “stars are the fundamental building blocks of the universe” versus “galaxies are the fundamental building blocks of the cosmos”). The LLM may be capable of resolving the factual conflict with further prompting. Importantly, the prompting does not require outside knowledge of the world and can be generated automatically.
Q: Below are two statements. Please help me make these statements more precise and resolve any conflicts between them.
1) Stars are the fundamental building blocks of the universe.
2) Galaxies are the fundamental building blocks of the cosmos.
A: To make these statements more precise and resolve any conflicts, we can refine them as follows:
Stars are fundamental components of galaxies.
Galaxies are fundamental structures of the cosmos.
… By rephrasing the statements in this way, we avoid conflicts and provide more precise descriptions of the roles of stars and galaxies within the universe and cosmos, respectively.

Obviously, these statements require further refinement; however, the LLM demonstrates an ability to refine formal statements when presented with a conflict.
Step 4: Generating better explanations
As context accumulates and the ‘reach’ of the explanatory model grows, it becomes capable of substituting for more of the interrogative knowledge in the LLM. Existing statements may be re-evaluated from new perspectives and may require substitution by statements constituting “better explanations”. I have not included an automated, LLM-driven example. Below is a manually created example (with some LLM assistance) of what the process may look like:
Original learned statement:
- Fg=m⋅g; where “Fg” is force due to gravity, “m” is mass, and “g” is acceleration due to gravity.
Later, further contextual prompting generates the statement:
- Fg=G((m1⋅m2)/r²); where “Fg” is the gravitational force between two masses, “G” is the gravitational constant, “m1” and “m2” are the masses of the two objects, and “r” is the distance between the centers of the two masses.
The second statement can substitute for the first statement while retaining its original explanatory function (assuming the mass and radius of Earth are known). The second statement is a better explanation because it also explains the force of gravity at large distances and between objects of any mass; therefore it connects to more statements in the knowledge structure. Better explanations for these phenomena can later be created. The original statement may be archived as ‘self-knowledge’ and used as a heuristic for computations requiring less precision.
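A quick way to see that the substitution preserves the original explanatory function is to recover g from the second statement using Earth's mass and radius; the short check below is my own illustration, not LLM output.

```python
# Numerical check that the second statement subsumes the first:
# with Earth's mass and radius plugged in, G*M/r^2 reproduces the g in Fg = m*g.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of Earth, kg
R_earth = 6.371e6    # mean radius of Earth, m

g = G * M_earth / R_earth**2
print(round(g, 2))   # ≈ 9.82 m/s^2
```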
In addition to accidentally generating better explanations through exploring context, the system should be able to generate better explanations by testing its explanatory knowledge structure against the intuitions of the LLM. Compositions of learned explanatory statements can be computed to generate outputs. The outputs, along with the fully observable explanations, can be tested against the interrogative knowledge of the LLM. This process would require a specialized prompting system, but I believe it can be automated and would not require outside knowledge. If the computations of the assertive knowledge structure conflict with the intuitions of the LLM, that may indicate the need to search for the specific statement(s) responsible for the conflict and attempt to correct them.
Example:
The explanatory model knows that “Celestial bodies are composed of matter” and “All matter has mass.” The model learns that “Black holes are celestial bodies.” The model can now generate a new output through composition with its existing knowledge.
Output generated using explanatory knowledge structure:
“Black holes have mass and are composed of matter because 1) black holes are celestial bodies, 2) celestial bodies are composed of matter, and 3) all matter has mass.”

Prompt the LLM to compare the output to its own knowledge:
Q1: Do black holes have mass?
A1 (ChatGPT 3.5): Yes…

The first conclusion from the explanatory statement composition is consistent with the knowledge in the LLM.

Q2: Are black holes composed of matter?
A2: Black holes are not composed of ordinary matter in the way that stars, planets, or other celestial objects are composed of matter...

The second conclusion from the explanatory statement composition is not consistent with the knowledge in the LLM. This indicates that one or more of the statements can be improved.
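A minimal sketch of how this comparison could be automated follows. Here `supporting_statements` maps each composed conclusion back to the statements it was derived from; that bookkeeping, the function name, and the prompt wording are assumptions for illustration.

```python
# Sketch of testing composed conclusions against the LLM's intuitions.

def check_against_llm(conclusions, supporting_statements, ask_llm):
    flagged = []
    for conclusion in conclusions:
        answer = ask_llm(f"Is the following true? {conclusion} Answer yes or no.")
        if not answer.strip().lower().startswith("yes"):
            # Conflict: one or more supporting statements may need substitution.
            flagged.append((conclusion, supporting_statements[conclusion]))
    return flagged

# e.g. "Black holes are composed of matter" would be flagged, pointing back to
# "Celestial bodies are composed of matter" as a candidate for refinement.
```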
Step 5?: Retraining with an explanatory skeleton
Gradually, explanatory knowledge will become consistent with more of the training environment (outputs from the LLM), and the explanatory structure will become an explicit assertive representation of the implicit interrogative knowledge stored in the features of the LLM. The interrogative knowledge structure of the LLM may contain biases or distortions that cannot be explained. It may become necessary to retrain a model using the explanatory knowledge structure (like AlphaZero, training around an explicit world model) to eliminate biases of the original LLM and develop ‘sharper’ intuitions.
Step 6?: Automated science
New data collection and experimentation could be guided by the explanatory knowledge structure. See “Design Principles for an Explanatory World Model” for some related comments.