A common critique of employing large language models (LLMs), such as GPT-4, within the enterprise is that they yield inconsistent results. Critics argue that this inconsistency will hinder widespread adoption of these models in the business sphere.
This reaction is entirely understandable. Indeed, the hallmarks of effective enterprise software systems are consistency and reliability. Introducing an ambiguous language model, such as GPT-4, seems to contradict these principles.
Unfortunately, this perspective illustrates a challenge we all face when dealing with truly innovative technology. While we all appreciate the concept of innovation in the abstract, we often reject genuinely innovative approaches if they clash with our preconceived notions. I'm no different in this regard; it took me time to fully grasp the potential of large language models.
In this particular case, there's a prevailing assumption that a large language model like GPT-4 must generate outputs directly compatible with existing IT systems, which typically expect well-organized, consistent, structured data. But what if the output of an ambiguous large language model were fed into multiple other large language models capable of managing that ambiguity? In such a design, we might be able to rapidly scale very complex systems by introducing a level of flexibility that traditional IT could never offer.
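To make the shape of that design concrete, here is a minimal sketch of such a pipeline. The function names (`ambiguous_model`, `normalizer_model`) and the toy order-processing scenario are purely illustrative assumptions, and the model calls are stubbed with deterministic placeholders so the sketch stays self-contained; in a real system each would be a call to an actual LLM.

```python
import json

def ambiguous_model(prompt: str) -> str:
    # Placeholder for a real LLM call: returns free-form,
    # inconsistently phrased text rather than structured data.
    return "The customer, Jane Doe, wants 3 units shipped to Boston ASAP."

def normalizer_model(free_text: str) -> dict:
    # Placeholder for a second, ambiguity-tolerant model whose job is to
    # map free text onto the structured schema downstream IT systems
    # expect. Stubbed with a fixed result to keep the sketch runnable.
    return {
        "customer": "Jane Doe",
        "quantity": 3,
        "destination": "Boston",
        "priority": "high",
    }

def pipeline(prompt: str) -> str:
    # Ambiguous output is handed to a model that manages the ambiguity,
    # and only the resulting structured record reaches systems that
    # require consistency.
    raw = ambiguous_model(prompt)
    record = normalizer_model(raw)
    return json.dumps(record, sort_keys=True)

print(pipeline("Summarize the latest order request."))
```

The design choice worth noticing is that consistency is enforced at the boundary, not at the source: the first model is free to be ambiguous because a later stage, not traditional validation logic, absorbs that ambiguity.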
We know this approach is feasible because we already see similar systems in action. Consider human interaction. When we communicate through language, we rarely use consistent terms. Yet the person receiving the information can reason about general patterns, allowing us to coordinate and scale incredibly complex processes. As we begin to build towards enterprise artificial general intelligence (AGI), the same principle will hold true.