The HMEC Principle: Finding the Sweet Spot for Generative AI

Chris Gorgolewski
5 min read · Mar 17, 2024


Generative AI (such as language models for generating text and diffusion models for generating images and videos) has taken the world by storm. Daily, we see increasingly impressive demonstrations of AI’s capabilities. As an AI practitioner, I have been involved in numerous attempts to apply AI to various problems. In the process, I developed a simple yet powerful principle that aids in identifying problems suitable for AI and designing AI integrations that provide optimal user value.

Generative AI is most helpful when it assists humans with problems whose solutions are Hard to Make but Easy to Check (HMEC).

Let me give you a couple of examples:

  1. Creative writing. Let’s say I am writing a letter to my landlord. I wrote a draft, but the letter sounds too casual. Rewriting it to make it more formal would be hard for me, so I can ask a language model to do it. Evaluating the output (both in terms of style and factual accuracy) is easy, because I know all the facts and I can recognize the writing style I wanted. This is a good fit for AI (see the sketch after this list).
  2. Medical advice. Let’s say I have some medical symptoms that are similar to something I had a couple of months ago. I still have some leftover prescription drugs, and I want to know whether taking them would help and, if so, how much I should take. Figuring this out on my own would be very hard, because I don’t have medical training. I can ask AI for help, right? Language models will generate responses to almost any prompt, including requests for medical advice (unless guardrails are put in place). However, because I am not a medical doctor, it would be hard for me to evaluate whether the advice given by the AI is correct. This is not a good fit for AI.
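
To make the first example concrete, here is a minimal sketch of the rewrite step. It assumes the OpenAI Python SDK; the model name, prompts, and draft letter are illustrative choices of mine, not part of the original example:

```python
# Minimal sketch of the "formalize my letter" workflow.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the model name is an assumption, any capable chat model would do.
from openai import OpenAI

client = OpenAI()

draft = (
    "Hey, the heating in my flat has been broken for a week now. "
    "Can you get someone to look at it? Thanks, Chris"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's letter in a formal tone. "
                "Do not add, remove, or change any facts."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

# The hard-to-make part is delegated to the model; the easy-to-check
# part (style and factual accuracy) stays with the human reader.
print(response.choices[0].message.content)
```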

In other words, AI excels when it is used with humans in the loop, helping them solve non-trivial problems that those humans are capable of evaluating. There are two main reasons why this is currently the case:

  1. Generative AI is (still) prone to hallucinations. Despite many efforts to ground AI in verified facts, it still often provides information or suggestions that are not accurate. This necessitates a validation stage. This property of generative AI might change in the future, but it has been a consistent problem over the past few years.
  2. Generative AI rarely has all the necessary context. AI cannot read your mind. It might not know your exact preferences, expectations, or the broader task context, so it will make assumptions when completing a request. Those assumptions might be wrong and therefore require verification. This limitation might become less relevant as personalization becomes easier via long context windows and as AI gets integrated more deeply into tools (so the model knows more than just what you typed in the prompt).

Tool-assisted validation

Because outputs need to be verified, the most productive way to employ AI is with a human in the loop performing that verification. In many cases the human needs a certain level of expertise to evaluate the outputs. This is generally true, but it’s worth noting that validation can take various forms and can be greatly aided by deterministic systems. Here are a couple of examples:

  1. Building software. Let’s say you want to build a personal web page. Does the HMEC principle mean that AI can only be useful if you are proficient in web technologies (HTML/CSS/JS)? Not necessarily. With the help of a tool that takes the code produced by the AI and deploys the website for you, you can evaluate the final product. Evaluating the look and feel of your personal website is easy even if you don’t know any HTML (see the first sketch after this list).
  2. Fact checking. A while ago I was traveling with my family. We have two dogs and had a long car drive ahead of us, and we wanted to find some dog-friendly tourist attractions along the way. I asked a language model for recommendations and got a list of a few places. Generating such a list myself would have taken a long time: I would have had to divide our route into sections and make multiple Google Maps queries. But now I had a list of candidates, and using deterministic tools such as Google Maps I could easily verify whether the suggested places exist and meet my criteria (see the second sketch after this list).
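
Here is a minimal sketch of the first example. The “tool” is just a local deployment step that writes AI-generated HTML to disk and opens it in a browser, so a non-expert can judge the rendered page; generate_page is a hypothetical stand-in for whatever model call you prefer:

```python
# Sketch of tool-assisted validation for example 1: the user evaluates
# the rendered page, never the HTML itself.
import tempfile
import webbrowser
from pathlib import Path


def generate_page(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call returning HTML."""
    # In practice, this would send `prompt` to a model and return its output.
    return "<html><body><h1>Placeholder page</h1></body></html>"


html = generate_page("A one-page personal site for a photographer.")

# Deterministic "deployment": write the code to disk and open it locally.
page = Path(tempfile.mkdtemp()) / "index.html"
page.write_text(html, encoding="utf-8")
webbrowser.open(page.as_uri())
```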
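
And a sketch of the second example, assuming the googlemaps Python client (`pip install googlemaps`) and a Places API key; the place names are placeholders, and the query is deliberately crude, only illustrating the deterministic verification step:

```python
# Sketch of tool-assisted fact checking for example 2: confirm that
# AI-suggested places actually exist. Assumes the `googlemaps` client.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

# Suppose a language model suggested these stops (illustrative only):
suggestions = ["Multnomah Falls", "Cannon Beach", "A Made-Up Dog Park"]

for name in suggestions:
    results = gmaps.places(query=f"{name} dog friendly").get("results", [])
    if results:
        top = results[0]
        print(f"found: {top['name']} ({top.get('formatted_address', '?')})")
    else:
        print(f"not found: {name} (possible hallucination)")
```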

Compounding productivity gains

Hard and easy are relative terms. For AI to bring value to users, it needs to save them time. In other words, we can rephrase the principle as an inequality that must hold for an AI application to be useful:

The time required by the user to complete the task unassisted must be greater than the time taken by the user to evaluate and rectify the suggested solution.

Tasks for which this inequality holds are potentially good fits for AI applications (of course, actual success also depends on how accurate the AI’s responses are for a given application, but that’s a story for another time). This is true even if the task is not considered “hard.” As long as there is a nontrivial net difference between the human cost of generation and the cost of validation, there is potential for an AI application. A good example is text autocompletion (Gmail Smart Compose or the code completion provided by GitHub Copilot). The tasks these systems accelerate are not arduous on their own (completions are usually short), but they occur often enough for the savings to add up and significantly improve users’ productivity.
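
In symbols: per task, assistance pays off when t_make > t_check + p_bad * t_fix. Here is a back-of-the-envelope sketch of how small per-task savings compound; every number below is a made-up assumption for illustration:

```python
# Illustrative arithmetic for the HMEC inequality; all numbers are
# assumptions, not measurements.
t_make = 30.0   # seconds for the human to write a completion unaided
t_check = 3.0   # seconds to read and accept/reject an AI suggestion
t_fix = 10.0    # seconds to rectify a bad suggestion
p_bad = 0.2     # fraction of suggestions that need fixing

t_with_ai = t_check + p_bad * t_fix  # expected time per task with AI
print(f"per task: {t_make:.0f}s unaided vs {t_with_ai:.0f}s with AI")

# Small per-task savings compound over frequent tasks
# (the Smart Compose / Copilot case from the text).
n_tasks_per_day = 200
daily_saving_min = n_tasks_per_day * (t_make - t_with_ai) / 60
print(f"daily saving: {daily_saving_min:.0f} minutes")
```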

Conclusion

The HMEC principle provides a simple yet powerful framework for identifying problems where generative AI can bring the most value to users. By focusing on tasks that are hard for humans to complete from scratch but easy for them to evaluate and correct, we can unlock significant productivity gains and create AI applications that truly augment human capabilities.

As AI practitioners and enthusiasts, our goal should be to design AI integrations that leverage the strengths of both humans and machines, with a human-in-the-loop approach that ensures the quality and accuracy of the outputs. By using deterministic tools to assist with output validation, we can further expand the range of problems where AI can be productively applied.

The HMEC principle is not a silver bullet, but it provides a useful starting point for thinking about how to harness the power of generative AI to solve real-world problems. As the technology continues to evolve and improve, we can expect to see even more impressive applications that push the boundaries of what’s possible. But for now, focusing on the sweet spot of hard-to-make, easy-to-check problems is a surefire way to create value for users and advance the field of AI.
