We get this question/request all the time:
"I want a GPT that can do X [like write marketing emails] for me, without me having to prompt it every single time. I know that it learns information from the chat, but it's not really working well and reliably."
Yup, you are absolutely right.
There are a couple of things we need to dissect here.
The difference between context memory and training, repeated actions, and system integrations.
Let's take it step by step.
Context memory is not training!
So when you are chatting with ChatGPT and give it information, it seemingly remembers it, but you are not training ChatGPT this way.
Training is done by OpenAI, and it fundamentally changes how the model responds (to every user).
Most of the time, training is not necessary. GPT-4 is a highly capable model that can understand a lot of context.
It's built on an "attention" mechanism, which means it can pay attention to how words relate to the words around them, and store and use that contextual information.
Just like how if I say "Queen Elizabeth" vs. "Queen to D8 checkmate", the Queen means two very different things based on the context.
So when you are using ChatGPT to make content for you, you are using this function of it. GPT-4 Turbo has a 128K-token context window: roughly 124K tokens for your input, plus up to 4K tokens of generated output.
This means that if you were to write down ALL information about your business into a document, and then copy-paste that document into ChatGPT, it could incorporate that information into its responses, making it a useful assistant for work use cases.
It feels like you are teaching the model your information. The only downside is that you have to teach it again every time, because this context doesn't carry over to new chats.
This is where Custom GPTs come in handy.
Custom GPTs can store information
When you configure a Custom GPT, you can give it two things that are relevant here:
- Custom Instructions
- Knowledge Files
Custom instructions are basically where you are "programming" your GPT without code.
So if you have a REPETITIVE task that you do over and over with ChatGPT, configure a Custom GPT for it.
This way, you can stop copy-pasting prompts, and even share it with your team or clients.
And in the instructions, you can store all the relevant information. I don't remember hitting a character limit there, but if you are looking to "teach" it more than an A4 page of information, it's better to:
Make a new Google Document ➔ Enter information (proper formatting like Headings, Paragraphs, Lists, etc.) ➔ Print ➔ Save as PDF ➔ Upload to Custom GPT Knowledge with a descriptive file name.
Then write this in your instructions:
When doing a task, ALWAYS query your knowledge file "Filename.pdf" and find relevant information to the task, then proceed.
This will trigger the Custom GPT to actively retrieve information from the file. If you have more files (you can upload a maximum of 10, but there's no limit on pages), just say "query your knowledge files" or "knowledge base".
If you do all this, you should have a Custom GPT that can use your company's information and assist with your tasks.
How to add Custom GPTs to any automation
The biggest downside with Custom GPTs is that you HAVE TO BE THERE to use them, to trigger them.
For tasks like writing marketing emails based on human, ever-changing input, this is fine.
But for a lot of other tasks, this is not ideal: an automation could just as easily send the relevant information to a pre-configured GPT and process the result, fully autonomously and integrated into your systems.
To do this, you need to go to OpenAI's API platform, and create a new Assistant.
The Assistant is basically the API version of a Custom GPT.
It has essentially the same building blocks:
- Custom Instruction Prompt
- Knowledge Files
- Ability to turn on Code Interpreter (browsing, however, is not available via the API)
- External actions
But you can communicate with it through an API (and through the playground).
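To make that concrete, here's a rough sketch in Python of what "talking to an Assistant through the API" looks like. This is just an illustration, assuming the official openai Python SDK (v1.x, beta Assistants endpoints), an OPENAI_API_KEY in your environment, and a placeholder Assistant ID that you'd copy from the platform:

```python
# Rough sketch (not production code): send one task to a pre-configured
# Assistant and print its reply. Assumes the official `openai` Python SDK
# (v1.x, beta Assistants API) and OPENAI_API_KEY set in the environment.
import time
from openai import OpenAI

client = OpenAI()
ASSISTANT_ID = "asst_XXXXXXXXXXXX"  # placeholder: copy yours from the platform

# 1. Open a conversation thread and add the task as a user message.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Write a short marketing email announcing our spring sale.",
)

# 2. Run the Assistant on the thread and poll until it finishes.
#    (This simple loop doesn't handle function-calling tools.)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=ASSISTANT_ID)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 3. The newest message in the thread is the Assistant's reply.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

In a Make scenario, the built-in OpenAI modules (or a generic HTTP module) do these same steps for you, no code required.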
Which means you can integrate it into any Make flow and have it do your work for you.
You can also make it output information into the software you already use, like Airtable, Notion, Google Docs, Gmail, whatever, so you don't have to copy-paste from ChatGPT.
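And pushing the result into those tools is just one more step. As a rough illustration (if you ever prefer code over a Make module), here's how the reply could be appended to an Airtable table with the requests library. The base ID, table name, and field name below are made-up placeholders:

```python
# Rough illustration: save the Assistant's output as a new Airtable record.
# The base ID, table name, and "Draft" field are hypothetical placeholders;
# in Make you'd use the Airtable "Create a Record" module instead.
import os
import requests

AIRTABLE_TOKEN = os.environ["AIRTABLE_TOKEN"]  # your Airtable personal access token
BASE_ID = "appXXXXXXXXXXXXXX"                  # placeholder base ID
TABLE_NAME = "Emails"                          # placeholder table name

def save_to_airtable(email_text: str) -> None:
    """Create one record with the generated email in a 'Draft' field."""
    response = requests.post(
        f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}",
        headers={
            "Authorization": f"Bearer {AIRTABLE_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"records": [{"fields": {"Draft": email_text}}]},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly if Airtable rejects the request

# e.g. save_to_airtable(messages.data[0].content[0].text.value)
```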
It's an advanced way of using AI, and we teach this entire process in our new Prompt Master AI Course.
It's a 12-module course where you'll learn prompt engineering, become skilled at configuring these Custom GPTs, and even be able to put together simple AI Automations that can work for you while you sleep or do other things.
We also taught this same course at 2 Hungarian universities this year, but don't worry, it's not an academic-level AI course. You don't need to know how to code or anything like that.
It's highly practical and down-to-earth: simple and easy to understand. It's designed for people with domain expertise who don't have a technical or programming background but want to learn how to use AI in their work.
For university students, it's about preparing them for the widespread adoption of AI and making them a competitive workforce.
At a US job fair, companies stated that they are not hiring new employees who don't know how to use ChatGPT.
For you, it might help with something similar: staying competitive in business and offloading your tasks to AI, so you can focus on the high-value work that is your zone of genius and let AI handle the rest. (The repetitive shit no business owner likes to do.)
Also, we are now official Make Academic Alliance Partners, which means that if you join the Prompt Master AI course, we'll also send you our students-only link for Make that gives you 3 months totally free, so you can build no-code AI bots with more freedom.
We also have a 90-day money back guarantee and raving reviews from our students.
For example, here's what a few of them said about us:
I hope this helps you decide whether you're in or out.
Got any questions, on the verge of signing up but not sure? Reply to this email and we'll get back to you!
Best,
Dave
P.S.: Don't forget, the price increases tomorrow (April 30th)!!! Join the Prompt Master AI course today to save $100!