Dynamic LLM Prompting in Bubble
How to use branching, conditional rules, content from your database and external APIs to get much better responses back from OpenAI and other LLMs. I walk through some real examples in my own app.
Hey there!
Welcome to this week’s edition of the NoCode SaaS newsletter, where I take you along on the ride of building a profitable software business using AI and NoCode tools.
This week I want to talk a bit about one of the most powerful things you can master in the age of LLMs - prompting. And in particular, I want to talk about building dynamic prompts in Bubble, where you inject user-specific content and content from external services into your prompts to do some really incredible things.
Before we dive into that though, I want to give you a quick update about my visit last week to NoCode Summit in Paris.
This was my second time visiting, and it didn’t disappoint. I’m always blown away by how friendly people in the NoCode community are, and I feel really lucky to be a part of it and have made so many friends, including some of you who subscribe to this newsletter!
For me the big takeaway from the event is how AI is continuing to redefine the entire space, and really blur the lines between traditional development and NoCode. Indeed even the term NoCode is being used less and less, with more of the tools preferring to talk about ‘visual development’.
I think this is a good way of framing how the tools work. I’ve never fully understood why tools like Bubble are under the NoCode umbrella; many of the best apps have lots of custom code in them (even if it’s increasingly generated by LLMs 😅)
If you’re interested in hearing more about what happened at the summit, how to plan your time there, and all the after hours fun, check out the latest Create With Podcast, which I recorded right from the venue at Station F in Paris with Kieran and Ash.
Now let’s dive into this week’s topic…
Dynamic Prompting in Bubble for AI
If you’ve been building with AI and Bubble, I’m sure you’ve wondered like me about the best way to create prompts to get the highest quality outputs in your app.
I covered the basics of returning structured data from OpenAI in this previous edition, which you might find helpful to read first if you’re not familiar with it.
The fact is, if you write simple prompts without a whole lot of context, you’re always going to get fairly generic responses back from the model.
To build truly magical user experiences that go way beyond what people could do directly in ChatGPT you need to get serious about writing dynamic prompts.
This means customizing the prompt every time it runs, depending on things like…
The current user
What this specific user is trying to achieve, and what other tasks they’ve already completed in your product
Where the user is located, the time of day and the date, and what language they speak
Pulling in things from external APIs, like scraping the user’s website to understand the context in which to execute your request
Changing a prompt’s behaviour depending on user preferences and settings
By passing data like this into your requests to LLMs like OpenAI and Anthropic, you can dramatically improve the quality of the responses you get - and ultimately how useful your app is to users.
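To make this concrete, here’s a simplified sketch of what an assembled prompt might look like by the time it reaches the model. The square brackets stand in for Bubble dynamic expressions, and the field names are illustrative rather than copied from my app:

You are writing survey questions for [Current User’s First Name], who runs [Current User’s Company Name].
Here is a description of their brand: [Company’s Brand Text]
Write your questions in [Current User’s Language].
Don’t repeat any of these existing questions: [Search for Questions :format as text]

Everything after the first line is conditional, and the techniques below are how you build each of those pieces.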
In this edition I want to cover a few of the most important basics for writing dynamic prompts in Bubble, based on examples from my own app, UserLoop.
I’m going to show you an example of one of my app’s most popular AI features - our AI-powered survey question generator.
This feature not only helps with our onboarding flow by generating personalised survey questions for each user, it’s also one of the most frequently used features in the app.
It’s all driven by dynamic prompting.
I’ve made a short video outlining the main techniques, which you can watch below.
Here’s the written version if you prefer that to watching the video…
Use Option Sets for Prompt Management
Option sets are the easiest way to manage prompts in Bubble. I use them for everything - survey types, languages, you name it.
As long as it’s not sensitive information, you can store it in an option set! (Option set data gets downloaded to the user’s browser, so never put secrets in one.)
Why they work:
Easy to update prompts without touching your workflows
Simple to test different versions
Can be attached to things in your database
Here's how to get started:
1. Create option sets for different scenarios (e.g., survey types, languages) - it all depends on what your app does.
2. Add a "prompt" field to each option in your option set
3. Store the actual prompt text within these options
Example
Option Set: Survey Types
Here you can see I’ve created a field called OpenAI Prompt where I’m going to store the prompt I want to inject for each option.
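As a sketch, the structure looks something like this (the option names and prompt text here are illustrative, not my production prompts):

Option Set: Survey Types
  Option: Post Purchase
    OpenAI Prompt: Write questions that explore why the customer chose to buy and how their checkout experience felt.
  Option: Churn
    OpenAI Prompt: Write questions that gently explore why the customer cancelled and what might win them back.

Then your API workflow just references Survey Type’s OpenAI Prompt, and the right text gets injected for whichever option applies.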
This makes your prompts easily maintainable and allows for quick updates without changing your workflow.
Prompt Branching Logic
Use Bubble's conditional logic to create dynamic branches in your prompts. Here's the basic pattern:
1. Use "Is empty: formatted as text" to check field values
2. Create Yes/No branches for different content
3. Nest conditions for complex logic
Example Structure:
Here’s an example from one of my prompts.
Here we are creating a branch in the logic for the prompt which is going to change depending on whether the company’s Brand Text field is empty or not.
We do this by using the expression Brand Text is empty: formatted as text
Then in the formatting for the text we leave yes empty (we don’t want to inject any prompt if the field doesn’t have any data). And for no (which means the field does have content) we want to inject the content from the database along with some instructions for the AI to follow.
But if we just injected those instructions when the field was empty, we’d get a very poor quality output from the model.
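Laid out in full, the branch looks something like this (the instruction text is illustrative):

Brand Text is empty :formatted as text
  Formatting for yes: (left blank, nothing is injected)
  Formatting for no: Match the tone of voice in the following brand description when writing your questions: [Company’s Brand Text]

Because the yes branch is blank, a company with no brand text just gets the base prompt, while a company with brand text gets the extra instruction plus their own data.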
Database Content Injection
This is the big one - using what you already know about your users makes AI responses 10x better.
In UserLoop, we check what questions are already in a survey to avoid duplicates. But this could be used to pull any kind of information from your database and inject it into your prompt.
Format your data clearly for the AI
Use the :format as text operator on a Search expression to add written context to results from your database
Add specific instructions ("Don't use any of these:")
Keep lists clean and readable; use line breaks to separate items
Here’s an example of that in practice…
Here I’ve created a branch based on…
Search for Questions: first item is empty formatted as text
That means for yes (there are no records) we don’t want to inject anything.
But, if data does exist (in the Formatting for no section) we want to inject a prompt and the content from the database.
Here you can see the prompt, plus another Search for Questions operation where we format the output as text.
Then in the text box we output each question’s name, adding a comma and a line break between each record.
This means the prompt receives a clean list of the existing questions, so the model knows exactly which ones to avoid.
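Putting it all together, here’s a sketch of the full expression and the kind of text it injects (the example questions are made up):

Search for Questions: first item is empty :formatted as text
  Formatting for yes: (left blank)
  Formatting for no: Don’t use any of these questions, they are already in the survey: [Search for Questions :format as text]
    Content to show per list item: This Question’s Name
    Delimiter: a comma followed by a line break

Which might render into the prompt as:

Don’t use any of these questions, they are already in the survey:
How did you hear about us?,
What almost stopped you from completing your purchase?,
How would you rate our delivery speed?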
External API Enrichment
This is a bit more advanced but worth it. We use Perplexity to scan user websites and understand their brand. This means our AI writes questions that match their style.
We then take that data and inject it into our prompt - this gives the model additional information about the user making the request and their context, which in turn helps it write higher quality questions.
Remember to:
Have a backup plan if the API fails - use conditional branching!
Only use external data when it helps
Make sure you use :formatted as JSON-safe on all data you get back, to avoid breaking your prompt with rogue characters
I covered using Perplexity AI for this in a recent edition; you can read it here.
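As a rough sketch, the injection step might look like this. I’m assuming the Perplexity response has been saved to a field called Brand Research on the Company, which is a hypothetical name for illustration:

Company’s Brand Research is empty :formatted as text
  Formatting for yes: (left blank, so the prompt falls back to its generic instructions)
  Formatting for no: Here is some background research on this company’s brand and products, use it to match their style: [Company’s Brand Research :formatted as JSON-safe]

One thing to watch out for: :formatted as JSON-safe wraps the text in its own double quotes as well as escaping special characters, so don’t add another set of quotes around it in your JSON body.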
Remember, the key to effective dynamic prompting is finding the right balance between providing enough context and keeping your prompts manageable. Start simple and build up complexity as needed depending on the results you get.
That’s it for this week!
I hope you found this issue helpful, I’m excited to dive into more AI topics in the next few issues. I’m going to be covering Replicate, Bland AI, Replit and more on Cursor and Cloudflare - so stay tuned!
Anything else you’d like me to cover, or have thoughts on what I should be writing about? Drop me a reply or a comment, I love to hear from you!
Happy building! James.