Redefining Prompt Manipulation: Seeking Clarity in Language Models’ Input Optimization

Ruben Orduz
3 min read · May 27, 2023
Generated via Bing Image Creator

It has been roughly six months (as of the time of writing) since ChatGPT was released to the public. In that relatively short span, numerous new Large Language Models (LLMs) have emerged, leaving the business world scrambling to keep up with the profound transformation they have set in motion.

As LLMs continue to gain widespread usage and integration, a fresh lexicon has emerged. While some of these terms are rooted in traditional AI/ML concepts, such as tensors, perplexity, and embeddings, others, like “Prompt Engineering,” are more recent additions. For the remainder of this post, I will focus on the latter and make the case for a more suitable term.

Prompt Engineering

First and foremost, the term “Prompt Engineering” lacks a precise definition. It can mean different things to different individuals. For some, it involves optimizing prompts to obtain consistent output from the LLM. For others, it entails creating prompt templates to minimize free-form variance. Yet another group perceives it as a (pseudo) methodology for optimizing prompts.

Secondly, the term implies that a specific skill set, discipline, or rigor is necessary to perform this task. However, this notion is fundamentally flawed, as anyone with basic trial-and-error learning capabilities can discover how to achieve consistent output from an LLM.

Thirdly, I propose that this term may be a somewhat desperate attempt by the software engineering/development profession to impose familiar structure and rigor on a tool that resists both. Over the past five decades, we have invested significant effort into organizing code, information, and data into orderly categories, striving for the pinnacle of 5NF in our data models. Determinism has been treated as axiomatic in almost every existing system, library, utility, and database framework. However, LLMs challenge these principles: they operate on unstructured data, employ unsupervised or semi-supervised learning, and produce statistically driven outputs. This is a novel and unprecedented challenge for our profession.

Better Alternatives

Indeed, it seems necessary to establish a term that accurately reflects the objective of manipulating, adjusting, or templating prompts to achieve a consistent and reliable output for downstream consumers. It might be beneficial to introduce more than one term to encompass different aspects of this process.

One possible term is “Prompt Templating,” which describes the creation of templates in software systems to structure prompts before they are presented to the LLM. This term emphasizes the preparation of standardized prompt formats for consistent input.
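To make the idea concrete, here is a minimal sketch of what prompt templating might look like in code. The template wording, field names, and the build_prompt helper are illustrative assumptions on my part, not an established convention.

```python
from string import Template

# A fixed prompt template: the structure is decided once, up front,
# and only the variable fields change per request.
SUMMARY_TEMPLATE = Template(
    "You are a support assistant. Summarize the ticket below in exactly "
    "three bullet points, using neutral language.\n\n"
    "Ticket title: $title\n"
    "Ticket body: $body\n"
)

def build_prompt(title: str, body: str) -> str:
    """Fill the standardized template with request-specific values."""
    return SUMMARY_TEMPLATE.substitute(title=title, body=body)

if __name__ == "__main__":
    prompt = build_prompt(
        title="App crashes on login",
        body="Since the latest update, tapping 'Sign in' closes the app.",
    )
    print(prompt)  # This string is what would be sent to the LLM.
```

The point is simply that the shape of the prompt is fixed by the system, which keeps free-form variance on the user side from leaking into what the model actually sees.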

Another term, “Prompt Tuning,” could be employed to describe the iterative process of refining prompts through additions, removals, wording adjustments, and strategic placement of words. This iterative approach aims to generate a range of desired results, bearing in mind that LLMs provide statistically-driven outputs and cannot guarantee exactness.
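And here is a rough sketch of what prompt tuning could look like when even lightly automated. The candidate prompts, the call_llm stub, and the looks_valid check are all hypothetical placeholders for a real API call and for whatever “desired result” means in a given application.

```python
# Candidate phrasings of the same instruction. In practice these are
# refined by hand between runs: words added, removed, and reordered.
CANDIDATE_PROMPTS = [
    "List three risks of the plan below.\n\n{plan}",
    "Identify exactly three risks in the following plan. "
    "Respond as a numbered list.\n\n{plan}",
    "You are a risk analyst. Return exactly three numbered risks "
    "for this plan, one sentence each.\n\n{plan}",
]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in an actual API client here."""
    # Canned response so the sketch runs without network access.
    return "1. Risk one.\n2. Risk two.\n3. Risk three."

def looks_valid(output: str) -> bool:
    """Cheap structural check: did we get three numbered lines back?"""
    lines = [ln for ln in output.splitlines() if ln.strip()]
    return len(lines) == 3 and all(
        ln.lstrip().startswith(f"{i + 1}.") for i, ln in enumerate(lines)
    )

def tune(plan: str, trials_per_prompt: int = 5) -> dict:
    """Score each candidate by how often its output passes the check."""
    scores = {}
    for template in CANDIDATE_PROMPTS:
        prompt = template.format(plan=plan)
        hits = sum(looks_valid(call_llm(prompt)) for _ in range(trials_per_prompt))
        scores[template] = hits / trials_per_prompt
    return scores

if __name__ == "__main__":
    print(tune("Migrate the billing database over a single weekend."))
```

The only point the sketch makes is that tuning is empirical: because the model’s output is statistical, you measure how often a given phrasing gets you what you want rather than proving that it always will.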

By utilizing these distinct terms, we can delineate between the act of establishing prompt templates and the ongoing refinement and adjustment of prompts to optimize outputs.

In closing, I’m not enamored with any of the above; I’m simply presenting them as alternatives to “Prompt Engineering,” and I welcome other terms, so long as they are espoused by the majority of the community.

