In the ever-evolving world of AI, it's easy to get caught up in the hype surrounding new developments. One topic that's been making waves recently is the concept of prompt engineering as an emerging skill and career path. Prolego has been building applications with large language models (LLMs) like GPT-4 for years, and I'm skeptical of these claims. Here's a closer look at what prompt engineering actually involves and why it might not be the game-changer it's made out to be.
What is Prompt Engineering?
The term "prompt engineering" has become synonymous with the process of crafting inputs for LLMs to generate specific, desired outputs. It's often portrayed as a hot new skill, but in reality it breaks down into two distinct domains:
- System design through OpenAI's API: This involves cost optimization, interface design, embeddings, latency, data compression, and many other complex issues. To succeed in this area, you don't need a "prompt engineer"—you need a top-notch systems and application engineering team.
- Optimizing the use of ChatGPT: Contrary to popular belief, learning how to get good results from ChatGPT isn't a difficult skill to master. It mainly involves practice and a few tips for writing prompts that elicit the desired creativity and format. Examples include showing the model sample output formats, generating more creative responses through framing (e.g., "you are in a contest …"), and iterating by pasting the model's answers back into the next prompt. Here are examples I've used to illustrate use cases for financial services compliance.
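To make the second point concrete, here is a minimal sketch of two of the techniques above: showing the model example formats (few-shot prompting) and creative framing. The helper function, example pairs, and compliance scenario are all hypothetical illustrations, not an actual Prolego implementation; the resulting string would simply be sent to ChatGPT or the API.

```python
def build_few_shot_prompt(framing, examples, query):
    """Assemble a prompt that frames the task and shows the
    model the desired output format via a few examples."""
    parts = [framing, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    # End with the real query so the model completes the pattern.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

# Hypothetical compliance-flavored examples and creative framing.
framing = ("You are a compliance analyst in a contest to write "
           "the clearest one-line trade review.")
examples = [
    ("Trade size exceeds desk position limit", "ALERT - position limit breach"),
    ("Trade within normal parameters", "OK - no action required"),
]

prompt = build_few_shot_prompt(
    framing, examples, "Trade involves a restricted security")
print(prompt)
```

Iterating, the third tip, is just a loop over this: append the model's answer to `examples` (or paste it back into the prompt by hand) and ask again.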