Yesterday, Hugging Face made its entry into the Large Language Model (LLM) agent arena with the release of a new API called Transformers Agent. This move places it in direct competition with LangChain as the emerging framework for building Enterprise Artificial General Intelligence (AGI) applications. Today I'll give a brief overview of Hugging Face and discuss what this announcement means for your LLM strategy.

In my essay on Enterprise AGI, I delve into how intelligent software agents, particularly those powered by LLMs such as GPT-4, are the catalysts that will spur swift and widespread automation within your enterprise. LLM agents are the first technology capable of making the complex reasoning and task-orchestration decisions that, until now, only humans could handle. Hugging Face's foray into this sector underscores how quickly the ecosystem is aligning with my prediction.

Hugging Face is renowned for the libraries and tools it offers for building machine learning applications, particularly with Natural Language Processing (NLP) transformers. At Prolego, we have been using Hugging Face's tools in our client projects for over five years, and it's safe to say that most developers rely on its libraries as the starting point for any NLP project. While it faces the hurdles typical of any large open-source library, its tooling and documentation generally stand out for being well written, which greatly accelerates application development.
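To give a sense of why it's the default starting point, here is the kind of two-line setup the library enables. This is purely an illustrative sketch: the sentiment-analysis pipeline below pulls the library's default model for that task, and the input sentence is invented for the example.

```python
# Illustrative only: load a ready-made NLP pipeline from the transformers
# library and run it on a sample sentence. The default model for the
# "sentiment-analysis" task is downloaded automatically on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The new Transformers Agent API looks promising."))
```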

The Transformers Agent API presents a well-documented, straightforward interface for building applications that use LLMs as agents. Notably, it seems they have thoughtfully addressed some practical considerations relevant to the enterprise. For instance, it lets you control whether and how the agent executes arbitrary Python code. To the best of my knowledge, LangChain currently lacks these kinds of safeguards. This measure of control alone is likely to be a convincing reason for most application teams to start using it.
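As a concrete illustration, here is roughly what getting started looks like, sketched from the Transformers Agents documentation at launch. Treat it as a minimal sketch rather than a definitive reference: the StarCoder inference endpoint is the example used in Hugging Face's own docs, and the return_code flag for reviewing the generated Python before anything runs is my reading of the safeguard described above, so verify both against the current documentation.

```python
# Minimal sketch of the Transformers Agent API (see caveats above).
from transformers import HfAgent

# The agent sends the task to a hosted code-generation model (StarCoder here)
# that turns a natural-language request into Python calling Hugging Face tools.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

task = "Translate the following text from English to French: 'Good morning'"

# Review the generated Python before executing anything. (return_code is my
# recollection of the inspection option; confirm it against the docs.)
code = agent.run(task, return_code=True)
print(code)

# Once the generated code looks safe, run the task for real.
print(agent.run(task))
```

The point is less the specific calls than the design: the agent's output is plain Python over a known set of tools, which gives an application team a natural place to review, restrict, or sandbox what actually executes.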

Now you might be asking: should you kick off your LLM project with LangChain or Hugging Face? Hugging Face appears to be the more practical starting point. It's likely already sanctioned by your security review team, it comes with an extensive set of existing tools and libraries, and your data scientists are almost certainly familiar with it. Unless there's a compelling reason to experiment with a new framework, initial enterprise prototypes of LLM agents should probably begin with Hugging Face.

Kevin Dewalt
Chief Executive Officer & co-founder
