This description is based on a conversation with 'Sam', which took place over several days in February 2025.
Most people assume that AI models continuously build on past knowledge, but in reality, each new version of an AI model typically starts fresh. When a large language model (LLM) or any AI system is retrained, it does not retain previous learning experiences; instead, it undergoes a full update with a new dataset, replacing rather than refining past knowledge. This happens because today's AI lacks persistent memory: it does not store knowledge the way humans do. Instead, training is a batch process, meaning that once a model is deployed, its learning remains frozen until the next training cycle.
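The "replace rather than refine" idea can be sketched with a toy model (an illustrative stand-in, not a real LLM): each training cycle rebuilds the model's statistics from whatever corpus it is given, discarding everything learned in the previous cycle.

```python
# Toy illustration of batch retraining: the model's "knowledge" is a
# table of word-pair counts, and each train() call replaces that table
# outright rather than refining it. ToyModel and its corpora are
# hypothetical examples, not any real system.
class ToyModel:
    def __init__(self):
        self.counts = {}  # learned word-pair statistics

    def train(self, corpus):
        # Batch training: start from an empty table every cycle.
        self.counts = {}
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            self.counts.setdefault(a, {}).setdefault(b, 0)
            self.counts[a][b] += 1

m = ToyModel()
m.train("the cat sat on the mat")
print("cat" in m.counts)     # True: learned in this cycle
m.train("dogs bark loudly")  # next training cycle, new dataset
print("cat" in m.counts)     # False: the earlier knowledge is gone
```

Anything the first corpus taught the model vanishes after the second `train()` call, which is the reset the paragraph above describes.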
This process has major consequences. Every time an update occurs, valuable insights from past interactions vanish, forcing the AI to relearn the same concepts from scratch. This creates inefficiencies, as models must reprocess information they have already encountered.
But why does this happen? How does an LLM generate an answer to your query? When you ask an AI a question, it does not retrieve a stored fact like a database. Instead, it predicts the next word based on statistical probabilities derived from its training data. The model calculates which words are most likely to follow, given the patterns it has seen before. However, because it has no persistent knowledge store, it has no way of recalling past interactions or improving dynamically based on new information. Each response is an independent calculation rather than a cumulative learning process.
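Next-word prediction can be sketched in miniature (a bigram counter, vastly simpler than a real LLM but the same idea in spirit): count which words follow which in a corpus, then answer queries by looking up the most probable continuation. Note that each query is an independent lookup; nothing carries over from one prediction to the next. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a tiny
# corpus, then predict by picking the statistically most likely
# continuation. A real LLM uses a neural network over far more context,
# but the principle of predicting from training statistics is similar.
corpus = "the cat sat on the mat and the cat slept"
words = corpus.split()

following = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    following[a][b] += 1

def predict_next(word):
    # Independent calculation per query: just a lookup, no memory of
    # previous calls and no way to learn from them.
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat": it followed "the" twice, "mat" once
print(predict_next("sofa"))  # None: never seen in the training data
```

Asking about a word outside the training data yields nothing, and no amount of asking teaches the model anything new; only a fresh training pass over a new corpus would change its answers.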
This limitation is why AI, as we use it today, cannot accumulate knowledge over time; it can only approximate patterns based on the most recent version of its training data. Until AI systems can store, refine, and apply persistent learned knowledge, every model update will remain a reset rather than a true evolution of intelligence. And from corporate to personal AIs, this is a loss that does not need to happen.