Yann LeCun is pushing the field towards world models. And he is right: scaling LLMs alone will not get us to systems that understand, reason, and plan.
But most builders cannot afford to wait for JEPA v2 papers or Meta’s research breakthroughs. The question is practical: how do you approximate a world model today, with the tools already available?
What a World Model Really Is
Strip away the academic framing, and a world model is not mysterious. It is three ingredients:
State
A structured representation of what the world looks like now.
Transition
A way to update that state when something changes, whether through new observations or actions.
Planning
The ability to project the state forward and evaluate possible actions against future outcomes.
That is it. A toddler does this instinctively. Builders can do it with databases, rules, and lightweight reasoning loops. You don’t need a perfect latent predictor to get value. You just need to build the loop.
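The three ingredients above fit in a few lines of code. This is a minimal sketch with illustrative names, not a framework:

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    # State: a structured snapshot of what matters right now.
    facts: dict = field(default_factory=dict)

def transition(state: WorldState, event: dict) -> WorldState:
    # Transition: apply a change and return the updated state.
    updated = dict(state.facts)
    updated.update(event)
    return WorldState(facts=updated)

def plan(state: WorldState, actions: list, score) -> dict:
    # Planning: project each action forward, score the future, pick the best.
    return max(actions, key=lambda a: score(transition(state, a)))

state = WorldState(facts={"balance": 100})
best = plan(
    state,
    [{"balance": 90}, {"balance": 120}],
    score=lambda s: s.facts["balance"],
)
print(best)  # → {'balance': 120}
```

Everything that follows is an elaboration of this loop with real storage, real rules, and an LLM in the reasoning seat.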
Where Builders Go Wrong With LLMs
The default instinct today is to treat the LLM itself as the “world model.” Push more into the prompt, expand the context window, and hope the model “remembers” enough to simulate state.
This breaks in practice:
Memory becomes brittle. Context stuffing is expensive and inconsistent.
State drift creeps in. The model hallucinates facts that no longer match reality.
Actions lose grounding. The LLM proposes fluent answers that fail in a dynamic environment.
A real-world example: task management agents that forget what tasks are complete, or booking bots that “assume” availability without checking the underlying system.
The fix is to separate language interface from world state.
Approximating World Models With What You Have
Here are four practical ways to approximate a world model today:
🔹 Entity and state memory
Keep a structured store of entities, attributes, and goals. It can be a SQL database, a key-value store, or a vector DB with typed embeddings. The key is persistence across sessions.
Customer support bot: Store user identity, open tickets, and previous resolutions. The system recalls a past complaint automatically instead of asking the customer to repeat.
Fitness app: Maintain state for weight, calorie intake, and workout history. When a user logs breakfast, the app updates remaining daily macros without recalculating from scratch.
Travel planner: Remember destinations, dates, and budget goals across sessions. If the user returns a week later, the system still knows they were planning a 7-day trip to Spain under €2k.
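A minimal version of such a store needs nothing exotic. Here is a sketch using SQLite, with the travel-planner example; the table layout and key names are assumptions for illustration:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute(
    "CREATE TABLE IF NOT EXISTS entities (id TEXT PRIMARY KEY, attrs TEXT)"
)

def put(entity_id: str, attrs: dict) -> None:
    # Upsert an entity; attributes are stored as JSON.
    conn.execute(
        "INSERT OR REPLACE INTO entities VALUES (?, ?)",
        (entity_id, json.dumps(attrs)),
    )
    conn.commit()

def get(entity_id: str):
    row = conn.execute(
        "SELECT attrs FROM entities WHERE id = ?", (entity_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None

# The trip survives across sessions because it lives outside the prompt.
put("user:42:trip", {"destination": "Spain", "days": 7, "budget_eur": 2000})
trip = get("user:42:trip")
```

When the user returns, the system loads this row instead of hoping the context window still contains last week's conversation.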
🔹 Simple transition functions
You don’t need deep learning here. Rule-based updates work. If a user confirms a booking, mark their state as “confirmed.” If an item disappears from the inventory feed, remove it from the state. These simple transitions bring stability.
E-commerce: When inventory hits zero, the system updates product state to “out of stock.” The bot no longer suggests it in recommendations.
Banking app: When a transfer is executed, the balance is reduced immediately. The state reflects reality before the next sync with the core system.
Learning app: When a student completes a module, mark the concept as “mastered” and unlock the next lesson.
🔹 LLM-assisted reasoning
Use the LLM tactically. It can propose candidate actions, but always ground its reasoning against the state memory. Example: “Given the current state and constraints, which option satisfies the goal?”
Logistics: The world model tracks packages in transit. The LLM reasons about re-routing: “Given a delayed flight and the customer’s deadline, what delivery route satisfies the goal?”
Healthcare triage: The state model has symptoms logged. The LLM proposes next questions to narrow possibilities but only within the structured medical knowledge base.
Education: State = student proficiency map. LLM reasons: “Given weak algebra and strong geometry, which exercise is the best next step?”
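The grounding step is the part builders skip. A sketch of what it looks like, where `llm` stands in for any prompt-to-text callable and the state fields are hypothetical:

```python
import json

def propose_action(state: dict, goal: str, llm) -> str:
    # Give the LLM the authoritative state, ask for a candidate action,
    # then validate the candidate against the state before acting on it.
    prompt = (
        "Current state:\n" + json.dumps(state, indent=2) + "\n"
        f"Goal: {goal}\n"
        "Which available option satisfies the goal? Reply with one option id."
    )
    candidate = llm(prompt).strip()
    # Grounding check: never act on an option the state does not contain.
    if candidate not in state["available_options"]:
        raise ValueError(f"ungrounded proposal: {candidate}")
    return candidate
```

The LLM proposes; the state decides what is admissible. A fluent but impossible answer is rejected instead of executed.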
🔹 Simulation by replay
Planning does not require generative video models. You can approximate it by rolling forward “what if” scenarios with your transition functions. Test possible actions, then choose the one that leads to the best future state.
Ride-hailing: Simulate assigning a driver to passenger A vs passenger B. Roll forward travel times and see which choice minimizes total wait across the system.
Personal finance app: Simulate savings outcomes. If the user invests €500 a month vs €1,000, replay the balance forward 10 years and compare results.
Warehouse robotics: Simulate picking orders in sequence A-B-C vs C-B-A. Replay travel distances and pick times, then choose the shortest plan.
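The replay planner is a loop over candidates. Here is a sketch with the ride-hailing example; distances are toy numbers and every name is illustrative:

```python
def replay_plan(state, candidates, transition, score):
    # Roll each candidate forward with the transition function,
    # score the resulting future state, and keep the best action.
    best_action, best_score = None, float("-inf")
    for action in candidates:
        future = transition(state, action)  # simulate, never commit
        s = score(future)
        if s > best_score:
            best_action, best_score = action, s
    return best_action

# Ride-hailing: which passenger assignment minimizes wait?
state = {"driver_pos": 0, "passengers": {"A": 3, "B": 8}}
transition = lambda st, p: {
    **st, "wait": abs(st["driver_pos"] - st["passengers"][p])
}
best = replay_plan(
    state, ["A", "B"], transition, score=lambda st: -st["wait"]
)
print(best)  # → A
```

Because the transition functions are cheap and deterministic, you can replay dozens of futures per request without a generative model in sight.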
The Builder’s Playbook
Think of this as a minimum viable world model:
Step 1. Define your schema
Decide what your world consists of. Users, items, locations, states, goals. Make it explicit.
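Making the schema explicit can be as simple as a few dataclasses. This is a hypothetical schema for the support-bot world; the point is that entities, statuses, and goals are named up front instead of implied by prompts:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    ticket_id: str
    status: str  # "open" or "resolved"

@dataclass
class World:
    user_id: str
    goals: list = field(default_factory=list)
    tickets: dict = field(default_factory=dict)  # ticket_id -> Ticket

world = World(user_id="u42", goals=["resolve billing issue"])
world.tickets["t1"] = Ticket(ticket_id="t1", status="open")
```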
Step 2. Implement transitions
Keep them transparent. Rules that map input to updated state are enough to start.
Step 3. Insert the LLM as a reasoner
Let the LLM interpret ambiguous inputs and propose candidate actions. But never let it rewrite the state directly.
Step 4. Add a planner
Use simple replay. Simulate a few possible next steps, score them, and pick the best.
Step 5. Log surprises
Whenever the real world diverges from the model, log it. Surprises become your training signal for improvement.
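A surprise log can start as an append-only file of diffs between prediction and observation. A minimal sketch, with the file name and record shape as assumptions:

```python
import json
import time

def check_and_log(predicted: dict, observed: dict,
                  log_path: str = "surprises.jsonl") -> dict:
    # Compare the model's prediction with reality; persist any divergence.
    diff = {
        k: (predicted.get(k), observed.get(k))
        for k in set(predicted) | set(observed)
        if predicted.get(k) != observed.get(k)
    }
    if diff:
        with open(log_path, "a") as f:
            f.write(json.dumps({"ts": time.time(), "diff": diff}) + "\n")
    return diff

# Example: the model believed 5 units were in stock; the feed says 0.
surprise = check_and_log({"stock": 5}, {"stock": 0})
```

Reviewing this log tells you which transition rules are wrong or missing, which is exactly the improvement signal a learned world model would extract automatically.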
Examples Across Domains
This approach works across verticals:
Customer support
World model = user profile + issue history + open tickets. Transitions = updates from user replies or system events. Planning = suggest next best resolution.
E-commerce logistics
World model = order states + inventory + shipping nodes. Transitions = order placed, item picked, package scanned. Planning = choose optimal route.
Health tracking
World model = biometrics + habits + goals. Transitions = new measurements logged. Planning = adjust nutrition or workout plan.
Education apps
World model = learner progress + mastered concepts + weak areas. Transitions = quiz results, completed lessons. Planning = recommend next module.
In all cases, the pattern is the same: state, transition, planning.
Why This Works
Approximating a world model gives you four practical benefits:
Persistence: users feel the system remembers them.
Grounding: actions are tied to real states, not hallucinations.
Adaptation: surprises trigger learning loops instead of silent failures.
Separation of concerns: LLMs handle language and reasoning, while structured systems handle reality.
This is enough to turn fragile prototypes into reliable products.
The Strategic Lens
LeCun’s JEPA will eventually give us elegant latent predictors. But builders cannot wait. Approximating a world model with structured memory and simple transitions is enough to start.
The pattern is not research-lab-only. It is the same loop product teams already know from good system design: model the state, update the state, plan over the state.
The opportunity is to treat this as product architecture rather than just prompt engineering.
Closing Thought
You do not need Meta’s stack to build a useful world model. You can build one today with databases, rules, and an LLM stitched in as a reasoner.
👉 What state schema, if it persisted and evolved over time, would make your product instantly smarter?