The recently leaked memo from a Google researcher has ignited an intense online debate about the future of large language models and whether big tech companies or open-source alternatives will dominate. The reality is that nobody knows what's going to happen. The technology is changing rapidly, there are simply too many variables, and we can't even reliably predict what will happen next week.
Instead of trying to build your AI strategy based on predictions, make ambiguity a foundational principle. We call this "preserving your optionality." In the case of large language models, start by designing your applications, organization, and relationships with the assumption that you will want to switch between them.
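One concrete way to preserve optionality is to keep application code from depending on any one vendor's SDK. The sketch below illustrates the idea with a thin, hypothetical `ChatModel` interface; the class and method names are illustrative assumptions, not any particular library's API:

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Thin abstraction over any chat-completion provider."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # Hypothetical wrapper: real code would call the vendor SDK here.
        raise NotImplementedError("call the OpenAI API here")

class EchoModel(ChatModel):
    """Stand-in model used for local development and tests."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(model: ChatModel, text: str) -> str:
    # Application code depends only on the interface, so switching
    # vendors means changing one constructor call, not the application.
    return model.complete(f"Summarize: {text}")

print(summarize(EchoModel(), "quarterly results"))
```

With this seam in place, a switch driven by cost, performance, or data-security considerations is a one-line change rather than a rewrite.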
Unfortunately, some companies are inadvertently making bets on Microsoft/OpenAI because it is currently the category leader. Although building your initial applications with GPT-4 is a great idea because it is the best solution available, you may want to quickly switch models based on cost, strategy, performance, or data security considerations. Your choice of initial use cases can help preserve your optionality.
A great starting point is selecting use cases that can leverage programmatically generated embeddings, which you can easily regenerate for a different model. On the other hand, beginning with applications that depend on a corpus of user-generated prompts highly optimized for one particular model may not be the best approach. Not only will it take more time, but you also have no guarantee that the clever optimization techniques you read about online will be backward-compatible with future OpenAI models, let alone with solutions from other vendors.
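Why are embeddings the portable choice? Because regenerating them is mechanical: if the source texts are stored alongside their vectors, the whole index can be rebuilt under a new model by re-running every text through the new embedding endpoint. A minimal sketch, using a deterministic `fake_embed` stand-in for a real provider's embedding call (the function and class names here are illustrative assumptions):

```python
import hashlib

def fake_embed(text: str, model: str) -> list[float]:
    # Stand-in for a provider's embedding call; the point is only that
    # the vector is a pure function of (text, model).
    digest = hashlib.sha256(f"{model}:{text}".encode()).digest()
    return [b / 255 for b in digest[:4]]

class VectorStore:
    """Keeps the source text next to each vector so the entire index
    can be rebuilt under a different embedding model."""
    def __init__(self, model: str):
        self.model = model
        self.rows: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.rows.append((text, fake_embed(text, self.model)))

    def reembed(self, new_model: str) -> None:
        # Regeneration is mechanical: re-run every stored text
        # through the new model's embedding call.
        self.model = new_model
        self.rows = [(t, fake_embed(t, new_model)) for t, _ in self.rows]

store = VectorStore("model-a")
store.add("refund policy")
store.reembed("model-b")
```

Contrast this with a library of hand-tuned prompts: there is no equivalent `reembed` step, because the optimizations are entangled with one model's quirks.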
Unfortunately, most large companies are structured to make decisions based on careful planning and projections about the future. These traditional approaches will need reconsideration as we enter an era where rapid change is the norm and the future is constantly ambiguous.