One prompt, multiple outputs?

May 13, 2024
5 minute read

Today, I had a major breakthrough.

It seems like going on holiday and taking my mind off AI things has helped me be a lot more creative.

Note to self: take rest more often.

So here’s what I was working on today.

I have a workshop coming up this week, where I’ll be teaching two dozen PR professionals how to use AI in their work.

I know jack shit about PR. And that’s true for most automations we build for clients.

I don’t have to know the profession to be able to automate tasks in it. I just have to be good at asking the right questions and drawing the information out of people.

So that’s what I did. The organizers were super helpful in articulating some of the most time-consuming tasks they do, and one of them was creating what’s called a clipping book.

I didn’t know what this was, so if you’re like me and don’t know either, it’s basically a spreadsheet with the URLs of the published articles that the agency delivered to the client, with extra information about each article.

At the end of the campaign, they collect all of this and send it to the client as a PDF report.

I thought this would be great to automate during the workshop, because it’s a multidisciplinary project that has a lot of skills that can translate to other workflows.

In the clipping report they showed me as an example, you have to enter the medium that published the article, the final title, and the publication date.

So here is the process they do:

  1. Collect URL of article (google search with keywords or some software)
  2. Open URL and copy-paste the date to the proper cell
  3. Copy-paste the title. If the title is not in English, it has to be translated.
  4. Copy-paste the medium.

It’s a mundane process, nothing challenging, and it’s not long, like a few minutes each time, but you have to do it for every publication. So the tiny minutes really add up.

And if you have a similar process, you can burn a lot of tokens if you run each step as a separate GPT completion module in Make.

Even more if you have to write a short summary, etc.

So here’s what I did:

I told GPT-4 to respond in a JSON format.

If you don’t know what JSON is, it’s basically the standard format for passing data from one app to another.

In our case, the JSON format was simple:

{
  "medium": "Qubit.hu",
  "title": "First ChatGPT Courses Launch at Hungarian Universities",
  "date": "2023/11/28"
}

This is ONE response from a GPT completion, but as you can see, it contains THREE outputs:

  1. Medium = Qubit.hu
  2. Title = First ChatGPT Courses Launch at Hungarian Universities
  3. Date = 2023/11/28
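In Python, this pattern looks roughly like the sketch below. The prompt wording and the parsing helper are my own illustration, not the exact Make setup; the point is one completion, three fields out:

```python
import json

# Illustrative prompt asking for all three fields in a single completion
# (in Make, this text goes into the GPT completion module).
PROMPT_TEMPLATE = """From the article text below, extract the publishing
medium, the title (translated to English if needed), and the publication
date. Respond with ONLY a JSON object with the keys "medium", "title",
and "date" (YYYY/MM/DD).

Article text:
{article_text}
"""

def parse_clipping(model_response: str) -> dict:
    """Turn the model's JSON reply into three separate variables."""
    clipping = json.loads(model_response)
    return {
        "medium": clipping["medium"],
        "title": clipping["title"],
        "date": clipping["date"],
    }

# A response shaped like the example above:
response = (
    '{"medium": "Qubit.hu", '
    '"title": "First ChatGPT Courses Launch at Hungarian Universities", '
    '"date": "2023/11/28"}'
)
row = parse_clipping(response)
print(row["medium"], row["date"])
```

In Make, the `parse_clipping` step is simply the JSON-parsing module that follows the GPT module, and each key becomes a mappable variable.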

To get the text of the article, I used a 0CodeKit function that extracts the text from an HTML page, and I fed that text into my prompt.
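I won’t reproduce 0CodeKit’s API here, but as an illustration, the same HTML-to-text step can be sketched with Python’s standard library:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    """Return the visible text of an HTML document as one string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

html = "<html><body><h1>Title</h1><p>Body text.</p></body></html>"
print(html_to_text(html))  # Title Body text.
```

The extracted text then becomes the `{article_text}` part of the prompt.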

And now I can use these as separate variables in my Make scenario and do all kinds of stuff with them.

This makes me think that with my Carousel GPT, I could make carousels fully automated, because I can define them as:

slide_1, slide_2, slide_3, etc. and replace the text in an Adobe Illustrator template file (or Canva, or Google Slides).
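A single completion for that could look like this hypothetical sketch (the slide keys follow the naming above; the texts are made up):

```python
import json

# Hypothetical one-completion response for a whole carousel.
response = json.dumps({
    "slide_1": "Hook: One prompt, multiple outputs",
    "slide_2": "Ask the model to respond in JSON",
    "slide_3": "Map each key to a text layer in the template",
})

slides = json.loads(response)
for key in sorted(slides):
    # In Make, each key would replace one text layer in the
    # Illustrator/Canva/Google Slides template.
    print(key, "->", slides[key])
```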

I could make an automation that takes a long-form YouTube video, writes 3-5 short scripts from different parts, and then generates an AI-clone TikTok video/Reel/Short, whatever.

The possibilities are endless.

One thing to keep in mind is the output limit of the language model you use: with GPT-4 Turbo, you’re still capped at 4K tokens of output.

And sometimes if you ask for a longer response, or multiple things in the output, it could degrade performance.

So make sure you find a sweet spot between efficiency and performance.

LLMs shine at this kind of data entry, and by asking for multiple things in one prompt with a JSON output, you can save A LOT of tokens.

I’ve heard of a company that was spending $3k each month on Make operations alone because the scenarios were not optimized for the least amount of operations.

In my case, I only saved one operation per article, because alongside the GPT completion module, I also had to add a module that parses the text into actual JSON.

But I cut the GPT completions by two-thirds (one call instead of three per article), and with more outputs this can really add up to a lot of savings every month.
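The arithmetic, sketched for a hypothetical 100-article campaign:

```python
articles = 100  # hypothetical campaign size

# Before: one GPT completion module per field (medium, title, date).
ops_before = articles * 3

# After: one GPT completion plus one JSON-parsing module per article.
ops_after = articles * (1 + 1)

gpt_calls_saved = articles * 3 - articles * 1

print(ops_before - ops_after, "Make operations saved")  # 100
print(gpt_calls_saved, "GPT completions saved")         # 200
```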

Final automation:

End result:

Oh and just a reminder: you don’t have to think about crazy AI wizardry to save time.

A simple 3-5 module automation with one GPT module can do more than you think.

An AI Employee is not one massive software that’s almost AGI. It’s a combination of many many tiny simple automations that save your time.

If this is interesting to you, consider becoming a Prompt Master. These are the things you’ll be building every day, for yourself or for clients.

We’ve created a self-paced 12 hour course that will bring you from complete beginner to an intermediate Prompt Master.

You’ll be able to write ChatGPT prompts that work and save time.

You won’t have to use prompt libraries written by others.

You’ll be able to configure a CustomGPT that can access a knowledge base and save time.

You’ll be able to build and run no-code automations just like the one above in less than a month.

And if you don’t achieve these within 90 days, we’ll refund your entire purchase!

So there is no risk, click here to become a Prompt Master.

Talk soon,

Dave

P.S.: Do you have 2 minutes to spare? We’re running a survey to help us understand you better so we can help you more. Please click here to fill it out.